
We have a large body of C/C++ code that is cross-compiled for an embedded Linux target. We've recently begun implementing unit tests (using gmock/gtest) that are run on our development server (which is Linux as well). The unit tests are executed automatically when check-ins are detected (we're using Bamboo).

We're using gcov and lcov to analyze and report code coverage during those unit tests, which has worked out fairly well. However, since we didn't start out unit testing, a large portion of our codebase has no unit tests at all. An interesting metric beyond "what is the unit test coverage for the files we do unit test" is "how much of our entire codebase is covered by unit tests", which includes the files not currently being unit tested. The catch is that with gcov you have to actually compile, link, and execute a given source file before you can get any coverage data for it, even the totals of instrumentable lines and branches.
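For what it's worth, the per-file numbers from the unit-test runs can be read straight out of the lcov tracefile. Below is a stripped-down sketch of that parsing step; it assumes the standard .info record keys (SF:, LF:, LH:, BRF:, BRH:, end_of_record), and the tracefile name "coverage.info" is just a placeholder:

    #!/usr/bin/env python
    """Tally per-file line/branch coverage from an lcov tracefile.

    Assumes the standard .info record layout (SF:, LF:, LH:, BRF:, BRH:,
    end_of_record); the tracefile name "coverage.info" is only an example.
    """
    from collections import namedtuple

    FileCoverage = namedtuple("FileCoverage",
                              "lines_found lines_hit branches_found branches_hit")

    def parse_tracefile(path):
        """Return {source_file: FileCoverage} for every record in the tracefile."""
        results = {}
        current = None
        counts = {}
        with open(path) as tracefile:
            for line in tracefile:
                line = line.strip()
                if line.startswith("SF:"):
                    current = line[3:]            # absolute path of the source file
                    counts = {"LF": 0, "LH": 0, "BRF": 0, "BRH": 0}
                elif line == "end_of_record" and current:
                    results[current] = FileCoverage(counts["LF"], counts["LH"],
                                                    counts["BRF"], counts["BRH"])
                    current = None
                else:
                    key, _, value = line.partition(":")
                    if key in counts:
                        counts[key] = int(value)
        return results

    if __name__ == "__main__":
        for src, cov in sorted(parse_tracefile("coverage.info").items()):
            pct = 100.0 * cov.lines_hit / cov.lines_found if cov.lines_found else 0.0
            print("%s: %d/%d lines (%.1f%%)" % (src, cov.lines_hit, cov.lines_found, pct))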

In an attempt to develop this codebase-wide coverage metric, I wrote some Python scripts that use RSM from MSquared (we already had it on our dev server) to evaluate the code statically and then pair that data with the coverage data returned by gcov. It works fairly well, but when I compared the tallies of things like statements and branches for files evaluated by both RSM and gcov, the differences were significant enough that I don't feel comfortable with it as a final solution.
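To show the kind of pairing I mean, here is a boiled-down sketch rather than the actual scripts. It loads per-file statement counts from a two-column CSV, which is just a stand-in for however the RSM report gets exported, and combines them with per-file (lines_found, lines_hit) numbers such as those produced by the lcov sketch above. The file name "rsm_counts.csv" and the scaling-by-hit-ratio approach are illustrative assumptions, not necessarily what RSM or the real scripts do:

    #!/usr/bin/env python
    """Combine static per-file statement counts with gcov/lcov results to get a
    whole-codebase coverage figure.

    The CSV layout (file,statements) and the name "rsm_counts.csv" are
    assumptions for illustration; substitute whatever the RSM export looks like.
    """
    import csv

    def load_static_counts(path):
        """Return {source_file: statement_count} from a two-column CSV."""
        counts = {}
        with open(path) as handle:
            for row in csv.reader(handle):
                if len(row) >= 2 and row[1].strip().isdigit():
                    counts[row[0].strip()] = int(row[1])
        return counts

    def codebase_coverage(static_counts, tested):
        """tested maps file -> (lines_found, lines_hit) from the lcov tracefile."""
        total = sum(static_counts.values())
        covered = 0.0
        for src, statements in static_counts.items():
            if src in tested:
                found, hit = tested[src]
                # Scale the statically counted statements by the measured hit ratio,
                # since RSM and gcov disagree on the absolute tallies.
                covered += statements * (float(hit) / found if found else 0.0)
        return 100.0 * covered / total if total else 0.0

    if __name__ == "__main__":
        # Dummy numbers purely to show the call shape.
        static = {"src/foo.c": 120, "src/bar.c": 80}
        measured = {"src/foo.c": (100, 75)}   # (lines_found, lines_hit) from lcov
        print("codebase coverage: %.1f%%" % codebase_coverage(static, measured))

Files that never appear in the tracefile simply contribute their full statement count to the denominator and nothing to the numerator, which is exactly the "untested code drags the number down" behavior I'm after.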

Here are my questions:

  1. Has anyone else tried to do something along these lines?
  2. Is there a better way to go about it?
  3. Are there any tools out there (preferably free/open source) that count statements and branches the way gcov does, and could therefore be used to establish this codebase-wide coverage baseline statically?

Thank you.

