Task #2252: Update Jenkins configs to publish results of unit tests in XML format (xUnit)
Status: Closed
% Done: 100%
Description
There are several parts of this task:
1. Update .jenkins.d/20-tests.sh to generate output in XML format when the XUNIT environment variable is set (similar to the script in ndn-cxx; see the sketches after this list).
2. Update tests/wscript:
   - make sure that main.cpp is compiled separately for core, daemon, and rib
   - each compilation defines a different BOOST_TEST_MODULE:
     defines=['BOOST_TEST_MODULE=NFD Core Tests']
     defines=['BOOST_TEST_MODULE=NFD Daemon Tests']
     defines=['BOOST_TEST_MODULE=NFD RIB Tests']
3. Update the Jenkins configuration to run XUNIT=true ./.jenkins and then publish the xUnit reports.
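For the 20-tests.sh item, a minimal sketch of what the XUNIT branch could look like. This is hypothetical: the binary names and report paths are assumptions, and the XML log is captured by redirecting stdout because older Boost.Test releases have no --log_sink option. The actual script may differ.

    # Hypothetical sketch of the XUNIT branch in .jenkins.d/20-tests.sh
    # (binary names and report paths are assumed, not the real ones).
    if [[ -n $XUNIT ]]; then
        # Emit one XML report per test module for the xUnit plugin to publish.
        ./build/unit-tests-core   --log_format=XML --log_level=all --report_level=no > build/xunit-core.xml
        ./build/unit-tests-daemon --log_format=XML --log_level=all --report_level=no > build/xunit-daemon.xml
        ./build/unit-tests-rib    --log_format=XML --log_level=all --report_level=no > build/xunit-rib.xml
    else
        ./build/unit-tests-core && ./build/unit-tests-daemon && ./build/unit-tests-rib
    fi

For the wscript item, a hypothetical waf fragment; the target names, source glob, and the BOOST use-variable are illustrative rather than taken from the actual NFD build scripts:

    # Hypothetical tests/wscript fragment: main.cpp is compiled once per
    # module, each time with a different BOOST_TEST_MODULE definition.
    def build(bld):
        for module in ('Core', 'Daemon', 'RIB'):
            bld.program(
                target='unit-tests-%s' % module.lower(),
                source=['main.cpp'] + bld.path.ant_glob('%s/**/*.cpp' % module.lower()),
                defines=['BOOST_TEST_MODULE=NFD %s Tests' % module],
                use='BOOST')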
Updated by Junxiao Shi about 10 years ago
- Description updated
- Category changed from Integration Tests to Build
- Start date deleted (12/01/2014)
Updated by Junxiao Shi almost 10 years ago
One problem, as exposed by Build 2570, is:
- stdout and stderr from Boost.Test are captured by xUnit.
- NFD_LOG_* lines still appear in Console Output page.
This separation makes it hard to correlate logs with test failures, because there is no indication of the boundary between two tests.
I disagree with this Task unless there's a solution to this problem.
- xUnit output could be produced in addition to the regular console output.
- or, the logs shall be captured by xUnit and displayed together with the stderr of each test case.
Updated by Alex Afanasyev almost 10 years ago
We could simply run the test cases twice.
I'll look into Boost.Test to avoid this, though I don't see a problem with running them twice.
Updated by Junxiao Shi almost 10 years ago
Updated by Alex Afanasyev almost 10 years ago
I would actually argue that for the cited issues, it is more beneficial to run test cases multiple times. The error can always be discovered; it would just take more effort when the error is intermittent and appears in only one of the runs/outputs.
Updated by Alex Afanasyev almost 10 years ago
- Assignee set to Alex Afanasyev
- % Done changed from 0 to 50
I'm changing my mind a little bit. Unit tests in a CI system are supposed to catch/highlight problems. Finding the failed unit test in console output is extremely inefficient. NFD logs in unit tests are useful for a developer to dig out the exact problem, and should not really affect how the problem is highlighted in the CI system.
In any case, I don't think it is possible to have multiple output formats for Boost tests, and I'm not planning to enable running test cases multiple times. I'm planning to proceed with this task as it is defined.
Updated by Junxiao Shi almost 10 years ago
Finding the failed unit test in console output is extremely inefficient.
All it takes is Ctrl+F and typing "error in".
xUnit requires one click per failed test case, which is far less efficient than the console.
Updated by Alex Afanasyev almost 10 years ago
Search for what? I was using "fail" and "error". Both are ambiguous, and in many cases I have to search multiple times to actually see which cases failed. Also, when I'm looking on the phone, I have no search, so I basically either look manually through tons of output for the error or just give up.
xUnit gives a summary of the failed cases. The number of clicks to see what's going on is about the same or fewer.
Updated by Junxiao Shi almost 10 years ago
Search for precisely "error in". This captures all test assertion errors, including fatal and non-fatal.
Both Android and Windows Phone have search capability. If the iPhone doesn't have it, that's bad.
Updated by Anonymous almost 10 years ago
The find functionality of Safari on iOS is built into the address bar. Enter something like "error in", then scroll down to "On This Page (n matches)" in the suggestions popup.
Updated by Junxiao Shi almost 10 years ago
In any case, I don't think it is possible to have multiple output formats for Boost tests, and I'm not planning to enable running test cases multiple times.
See http://stackoverflow.com/a/26718189/3729203 on how to output to both console and XML.
One benefit mentioned in this question is that:
the in-progress log output (in HRF) is highly valuable while the tests are running on the server, to detect hanging tests or for a quick manual check of where the tests currently are
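For reference, later Boost.Test releases (1.62 onward, if memory serves) support this out of the box through the --logger parameter, which accepts several colon-separated logger definitions. A hypothetical invocation, with the binary name and report path assumed:

    # Human-readable log to stdout plus an XML log to a file, in a single run
    # (requires a newer Boost.Test than was current when this task was filed).
    ./build/unit-tests --logger=HRF,all,stdout:XML,all,build/xunit-report.xml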
Updated by Junxiao Shi about 9 years ago
Search for precisely "error in". This captures all test assertion errors, including fatal and non-fatal.
Update:
Some of the slaves produce the keyword "error: in" (with a colon) in case of a test failure.
The difference is possibly due to the Boost version.
Search for this variant as well.
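For illustration, a single search that covers both variants (the saved log file name is hypothetical):

    # Matches both "error in" and "error: in" in a saved console log.
    grep -E 'error:? in' console-output.txt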
Updated by Alex Afanasyev about 9 years ago
- Status changed from New to Code review
- % Done changed from 50 to 100
Updated by Alex Afanasyev about 9 years ago
http://gerrit.named-data.net/#/c/2621/ implements the note-12 suggestion. However, right now the code is placed under the tests/ folder of ndn-cxx, and is therefore not directly usable from other projects.
Can you make a suggestion on how to deal with this? Duplicate the code in other projects, move the seemingly unrelated code into ndn-cxx proper, or add a new header-only dependency to all projects (ndn-boost-extensions or something like that)?
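For context, a minimal sketch of how such a custom formatter is typically installed from the test runner. set_formatter() and BOOST_GLOBAL_FIXTURE are standard Boost.Test API; the header name and the multi_log_formatter class are assumptions standing in for the code under review:

    // Hypothetical sketch: install a formatter that fans log events out to
    // both a human-readable (console) child and an XML (file) child.
    #include <boost/test/unit_test.hpp>
    #include "boost-multi-log-formatter.hpp" // assumed name of the new header

    struct XmlAndConsoleLogging
    {
      XmlAndConsoleLogging()
      {
        // set_formatter() transfers ownership of the pointer to the log
        // singleton, which routes all subsequent log events through it.
        boost::unit_test::unit_test_log.set_formatter(new multi_log_formatter);
      }
    };

    BOOST_GLOBAL_FIXTURE(XmlAndConsoleLogging);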
Updated by Davide Pesavento about 9 years ago
Alex Afanasyev wrote:
Can you make a suggestion on how to deal with this? Duplicate the code in other projects, move the seemingly unrelated code into ndn-cxx proper, or add a new header-only dependency to all projects (ndn-boost-extensions or something like that)?
I have no strong preference either way.
Putting this code in a header installed by ndn-cxx seems ugly, but has the advantage that everything will be in one place, and when you modify that code all other projects will automatically get the changes. At the same time this could be a disadvantage because it creates a dependency (i.e. we need to be careful about backwards compatibility), e.g. if a test in NFD requires changes in the common code, or if a change in the common code requires cascading changes in other projects... although this seems unlikely to happen in practice.
I don't see any major differences between duplicating the code in each project and using an external repo as a git submodule... the latter might seem cleaner, but in fact it's not easier to maintain because you still have to update the submodule for each project.
What is the expected frequency with which this code will have to be updated/changed/fixed? If it's low, then simply copying the code is not such a bad approach. And we're already doing it for waf-tools.
Updated by Alex Afanasyev about 9 years ago
The only reason for an update (I think) would be API changes in the Boost libraries. I'm hoping this won't happen too often, so the frequency of updates should not be high.
Based on Davide's comments, I'm leaning towards keeping the extension where it is (in tests/) and duplicating the code in other projects. The only place where I plan to do this soon is NFD.
Junxiao, would you agree with this plan?
Updated by Alex Afanasyev about 9 years ago
- Target version changed from v0.3 to v0.4
Updated by Alex Afanasyev about 9 years ago
- Status changed from Code review to In Progress
Updated by Alex Afanasyev about 9 years ago
- Status changed from In Progress to Code review
Updated by Alex Afanasyev about 9 years ago
- Status changed from Code review to Closed