
Even though we have a lot of doctests in our Python code, when I trace the test runs using the method described here:

traceit

I find that certain lines of code are never executed. I currently sift through the traceit logs to identify blocks of code that never run, and then try to come up with test cases that exercise those particular blocks. As you can imagine, this is very time-consuming, and I am wondering whether we are going about this the wrong way and whether you have other advice or suggestions for dealing with this problem, which must be common as software becomes sufficiently complex.
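For reference, the tracing hook I use boils down to a `sys.settrace` line tracer along these lines (a minimal sketch, not the exact recipe from the link):

```python
import sys
import linecache

def traceit(frame, event, arg):
    # The interpreter calls this for every traced event;
    # a "line" event fires once per executed source line.
    if event == "line":
        filename = frame.f_code.co_filename
        lineno = frame.f_lineno
        source = linecache.getline(filename, lineno)
        print(f"{filename}:{lineno}: {source.rstrip()}")
    return traceit

sys.settrace(traceit)
# ... run the doctests here; every executed line gets logged ...
sys.settrace(None)
```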

– reckoner

2 Answers


coverage.py is a very handy tool. Among other things, it provides branch coverage.
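A minimal sketch of one way to drive it from Python against a module's doctests (`my_module` is a stand-in for whatever module you're testing):

```python
import coverage
import doctest

cov = coverage.Coverage(branch=True)  # branch=True turns on branch coverage
cov.start()

import my_module                      # imported under measurement, stand-in name
doctest.testmod(my_module)            # run the module's doctests

cov.stop()
cov.save()
cov.report(show_missing=True)         # prints the line numbers that never ran
```

From the shell, `coverage run --branch -m doctest my_module.py` followed by `coverage report -m` does the same thing.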

– Hank Gay

  • This answer would be more beneficial if you provided a short example of how to use `coverage.py`. – SimplyKnownAsG Apr 10 '15 at 15:34
  • @SimplyKnownAsG The linked page has a Quick Start section front and center, and includes sample usage. Rather than copy-and-paste documentation that is subject to change as new versions come out, I find it's better to just link. – Hank Gay Apr 10 '15 at 19:17
  • How to use `coverage.py`: https://github.com/audreyr/how-to/blob/master/python/use_coverage_with_unittest.rst – Dušan Maďar Mar 14 '17 at 15:58

Do you have a mandate from management to be dogmatic about achieving 100% code coverage with your test cases? If not, do you believe touching every line of code is the most effective way to find bugs? Assuming you don't have unlimited time and people, you should probably focus on reasonably testing all of your non-trivial code, with emphasis on the parts the developers know were tricky to write or error-prone.

Code coverage is valuable because you certainly can't call a piece of code tested until it has been executed, but I don't equate executing a line with testing it. I'm not against code coverage; it's just too easy to fall into using coverage as the metric that tells you when testing is complete, and I think that would be a mistake.

– Brad Barker

  • This is an excellent comment. In my situation, we have scientists, not programmers, writing the Python code. As a result, even though the scientists are very smart, the code is very poorly architected. This means final integration and testing is a nightmare, and we have to work too hard to uncover serious problems during this phase. I'm trying to get them to write better test cases for the code each is responsible for, and I'm planning to use code coverage as a way of qualifying the testing they integrate. I can understand that not 100% of the code needs to be touched, but it would help. – reckoner Jul 23 '10 at 23:40
  • Having 100% coverage is no guarantee of sufficient tests, but not having it is typically indicative of not having sufficient tests. Let's also not forget that the coverage module allows for comments such as `# pragma: no cover` and `# pragma: no branch`. – Asclepius Mar 30 '17 at 14:02
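
For illustration, a minimal sketch of those pragmas in context (the functions and their failure paths are made up):

```python
import json

def load_settings(path):
    """Read settings from a JSON file; names here are hypothetical."""
    try:
        with open(path) as f:
            return json.load(f)
    except OSError:  # pragma: no cover
        # Hard to provoke reliably in tests, so this clause is
        # excluded from the coverage report entirely.
        return {}

def drain(queue):
    while True:  # pragma: no branch
        # The loop only ever exits via the break below, so coverage
        # is told not to flag the never-taken "loop exits normally" branch.
        item = queue.get()
        if item is None:
            break
```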