
I'm working on a project that has a lot of legacy code that is not covered with tests.

Is there any way that I could set up the integration server to check that all new commits have a minimum amount of test coverage (say, >70%)?

Essentially, I see two options:

  1. Somehow set up the CI server to fail the build when the committed changes are not covered by unit tests. This would ensure that every piece of new code has tests and that coverage of the legacy code increases with each change.
  2. Set a coverage threshold for the whole project and fail the build if the coverage percentage decreases after a commit. The problem with this is that if I delete a class containing 100 instructions and add a new class with 50 instructions, the coverage percentage can go up without me writing any tests (a quick illustration follows this list).
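
To make the weakness of option 2 concrete, here is a small illustration with made-up numbers (the 70% starting coverage and the instruction counts are assumptions, not measurements from my project):

    # Rough illustration (numbers are made up): a project with 1,000
    # instructions, 700 of them covered, sits at 70% coverage.
    covered, total = 700.0, 1000

    # Delete an uncovered 100-instruction legacy class and add an uncovered
    # 50-instruction class, without writing a single test:
    total = total - 100 + 50

    print(covered / total)  # 0.7368... -> coverage "improved" to ~74%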

I like option 1 more because it forces changes in legacy code to be unit tested. This should increase the overall test coverage.

Right now we're using Jenkins as our CI server and JaCoCo for test coverage. Maven is used for building the project and SVN is our main source control.

Denis Rosca
  • Keep in mind that 100% coverage is not necessarily possible or even desired. Also coverage numbers can be manipulated; writing a unit test for a test class can artificially inflate your test coverage. – Rylander Apr 03 '13 at 18:51
  • @MikeRylander I know that, I don't even dream of 100% coverage on this project. But I still think that forcing new changes to have at least some coverage is good. – Denis Rosca Apr 03 '13 at 18:54
  • 2
    I'm currently working on a solution to this problem that largely addresses @MikeRylanders comment. http://pitest.org is now integrated with version control. The next release will allow files to be analyzed by scm status. The following release will allow analysis by date range or commit, which would allow a build server to check that modified code met a given mutation score. – henry Apr 19 '13 at 16:13
  • @henry Thanks for the link. It sounds awesome, I'll give it a try. I'd never heard of mutation testing until now. Because of you, I'll spend my next few days reading wikis, FAQs and how-tos. Thanks ;) – Denis Rosca Apr 23 '13 at 16:36
  • @DenisRosca Mutation testing is indeed awesome - be warned it is pretty CPU-intensive though, so you need to think carefully about how you use it. – henry Apr 23 '13 at 19:40

4 Answers


I know you can configure Jenkins to verify that there is at least one test file as part of each commit. That wouldn't guarantee good test coverage, but at least you would know that some test-related change was made.
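
As a rough sketch, a scripted build step along these lines could perform the check (the svn invocation, the SVN_REVISION environment variable and the *Test.java naming convention are assumptions about your setup, not something Jenkins enforces):

    # Hedged sketch: fail the build step when the revision being built does
    # not touch at least one *Test.java file. Assumes the Jenkins SVN plugin
    # has exported SVN_REVISION and that tests follow the FooTest.java
    # naming convention.
    import os
    import subprocess
    import sys

    revision = os.environ["SVN_REVISION"]
    summary = subprocess.check_output(
        ["svn", "diff", "--summarize", "-c", revision]).decode()
    changed = [line.split()[-1] for line in summary.splitlines() if line.strip()]

    if not any(path.endswith("Test.java") for path in changed):
        print("Revision %s contains no test changes" % revision)
        sys.exit(1)  # a non-zero exit code fails the Jenkins build step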

Rylander
  • I don't like this idea. If you are committing something to make a previously failing test pass, you shouldn't need to also commit a test file. – JohnnyO Apr 03 '13 at 18:54
  • @JohnnyO In theory a failing build **should not** be committed in the first place. Unless your unit tests are flaky or tied to outside dependencies, which is a different set of issues, you should never have to make a commit to fix broken unit tests. – Rylander Apr 03 '13 at 19:33
  • In theory, sure. In practice, not so much. Imagine you had a bad commit (let's say you forgot to include one file). So, the test passed locally, but failed in Jenkins. How would you commit the missing file in that case? – JohnnyO Apr 04 '13 at 12:44
  • @JohnnyO You can add Force/Override logic into your commit hooks. This method provides rapid feedback for adding tests, but users will always find some way to ignore warnings like this if they really want to. – Rylander Apr 05 '13 at 14:35

For option 2, you can use the Jenkins JaCoCo plugin to track the code coverage for each build and set the build result to passed or failed depending on the coverage metrics.

I like option 1 better, too, but I don't know of a built-in way for Jenkins to do this. It should be fairly easy (at least at the class level) to post-process the coverage data and combine it with the SVN revision info, something like:

  1. Parse the JaCoCo output files and find classes that have 0% coverage
  2. Get the file(s) that changed for this build from the SVN revision details (Jenkins makes the revision numbers available in environment variables, SVN_REVISION if there is only one for this build or SVN_REVISION_1, SVN_REVISION_2, ... for multiple)
  3. Print an error message if any of the changed classes have 0% coverage
  4. Use the Jenkins Text Finder plugin to fail the build if the error message is printed.

This isn't a full solution; it gets trickier for new methods or lines that aren't covered by tests. It gives me an idea for a new Jenkins plugin ;-)
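
A very rough sketch of steps 1–3 might look like the following (the report path, the svn invocation and the class-name matching are all assumptions about a typical Maven/JaCoCo setup, not a tested solution):

    # Hedged sketch of steps 1-3: flag changed classes that JaCoCo reports
    # with zero line coverage. Step 4 is left to the Text Finder plugin.
    import os
    import subprocess
    import xml.etree.ElementTree as ET

    JACOCO_XML = "target/site/jacoco/jacoco.xml"  # default jacoco-maven-plugin report path

    def changed_files(revision):
        """Java files touched by the revision Jenkins is building."""
        out = subprocess.check_output(
            ["svn", "diff", "--summarize", "-c", revision]).decode()
        return [line.split()[-1] for line in out.splitlines()
                if line.strip().endswith(".java")]

    def uncovered_classes(report):
        """Class names whose LINE counter shows zero covered lines."""
        uncovered = set()
        for cls in ET.parse(report).getroot().iter("class"):
            for counter in cls.findall("counter"):
                if counter.get("type") == "LINE" and counter.get("covered") == "0":
                    uncovered.add(cls.get("name") + ".java")  # e.g. com/example/Foo.java
        return uncovered

    revision = os.environ.get("SVN_REVISION", "HEAD")
    uncovered = uncovered_classes(JACOCO_XML)
    untested = [f for f in changed_files(revision)
                if any(f.endswith(name) for name in uncovered)]
    if untested:
        # Step 4: configure the Text Finder plugin to match this marker
        # and mark the build as failed.
        print("UNCOVERED-CHANGES: " + ", ".join(untested))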

gareth_bowles

Some coverage tools (like Cobertura) support excluding packages. This way, you can exclude all the old code (assuming it can be pattern-matched) and have Cobertura check only new code, which covers new commits.

I hope this helps.

Eldad Assis
  • Thanks for your input, but my goal was to ensure that new code is tested and also to force changes to legacy code to be tested. – Denis Rosca Apr 03 '13 at 18:56

I have built a tool which does exactly this:

https://github.com/exussum12/coverageChecker

You pass in the diff of the branch and the coverage output from the tests. The tool works out which lines in the diff are also covered in the clover file, and fails the build if less than a certain percentage of them are.

To use it:

bin/diffFilter --phpunit diff.txt clover.xml 70

This fails the build when less than 70% of the diff is covered by a test.

I can add other formats if needed

Edit

I have added JaCoCo support:

bin/diffFilter --jacoco diff.txt jacoco.xml
exussum