2

I've been getting requests lately from management to create reports of the number of assertions run by the tests for our software. They want this so they can tell if people are writing tests or not. My inclination is to just tell them "no you can't have that because you don't need it" but that doesn't seem to satisfy them.

Part of the problem is that our teams are writing long test cases with lots of assertions and they want to say they've tested some new feature because they've added more assertions to an existing test case.

So my question is: does anyone have good, authoritative (as much as anything really can be) resources, articles, or even books that describe how testing should be split into test cases, or why counting assertions is bad?

I mean, counting assertions, or assertions per test, as a measurement of whether people are writing tests is about as useful as counting lines of code per test. But they just don't buy it. I tried searching with Google, but the problem is no one bothers to count assertions, so I can't really say "this is why it's a bad idea".

ZombieDev
    Just my opinion: you will find coverage a much more useful statistic. I could write a thousand assertions and only test one line of code. – Carl Manaster Feb 13 '14 at 19:28
  • Sure, and we don't measure coverage at the moment (we would like to, but without getting into too much, the language the code is written in makes that very difficult). I'm not looking for "coverage is a better metric", as much as "why is counting assertions a bad metric". – ZombieDev Feb 13 '14 at 19:33
  • A colleague pointed out this page in the junit docs which I somehow missed in my searching: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12 I'm not sure if that's enough to count as an answer to this question. – ZombieDev Feb 13 '14 at 19:35
  • Why not demo the tests at a regular time interval (end of sprint, before release, end of month...)? Then they know whether you write tests or not. But to me it sounds like micromanagement gone wrong. They should be interested in quality, not the tool (unit testing) you use to improve the quality. I can recommend the pragmatic unit testing books found on http://pragprog.com/. – Jocke Feb 17 '14 at 12:20
  • A bit of further investigation into the problem revealed that the in-house test framework the team wrote doesn't stop on a failed assertion, which is crazy. It sort of explains why they try to cram so many assertions in one test method, because they don't mind if a few fail and figure they're still testing "something". But these tests are so fragile and when one fails it's very difficult to tell why. -_- – ZombieDev Feb 17 '14 at 21:31
  • @Jocke, They do some sort of end-of-sprint demo, although I'm not sure what exactly they're demoing — probably not running their tests. The request came from their "scrum master" (really more of a project manager) who wanted something to report back to his bosses. He couldn't report that they've been adding test cases, because they've had about 100 tests (automated and reported via Jenkins) for months. But the devs say they're adding more assertions, so the manager wants to report that. I say that's not how to write tests; he responds, "says who?" – ZombieDev Feb 17 '14 at 23:20
  • @ZombieDev OK I hear your pain. Good luck! – Jocke Feb 18 '14 at 10:29

4 Answers

2

The imagination for taking stupid decisions in software management really has no limits — counting assertions?? The problem with testing is usually a quality problem, not a quantity problem.

If you want a respected reference, Gerard Meszaros's xUnit Test Patterns is perhaps one of the most respected; one of its recommendations is "Verify One Condition per Test" (http://books.google.es/books?id=-izOiCEIABQC&lpg=PT111&ots=YIeYejY-mx&dq=meszaros%20one%20assertion%20per%20test&hl=es&pg=PT110#v=onepage&q=condition&f=false)

But... if the problem is that people are adding "new test scenarios" by extending existing tests with "more assertions" instead of writing new tests, the best thing your company can do is buy a lot of copies of Meszaros's book (plus Kent Beck's TDD by Example, and Growing Object-Oriented Software, Guided by Tests) and hire some experts to give training and guidance before it's too late.
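Meszaros's guideline is easy to show in miniature. Here's a sketch using Python's unittest as a stand-in xUnit framework; the Stack class is a made-up system under test, not anything from the question:

```python
import unittest

# Made-up system under test: a minimal stack (illustrative only).
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

# Anti-pattern: one giant test verifying many conditions. When an
# early assertion fails, everything after it goes unchecked, and the
# failure report doesn't say which behaviour is actually broken.
class StackEverythingTest(unittest.TestCase):
    def test_stack(self):
        s = Stack()
        self.assertTrue(s.is_empty())
        s.push(1)
        self.assertFalse(s.is_empty())
        self.assertEqual(s.pop(), 1)
        self.assertTrue(s.is_empty())

# "Verify One Condition per Test": each test checks one behaviour,
# so a failure points at exactly one problem.
class StackTest(unittest.TestCase):
    def test_new_stack_is_empty(self):
        self.assertTrue(Stack().is_empty())

    def test_push_makes_stack_non_empty(self):
        s = Stack()
        s.push(1)
        self.assertFalse(s.is_empty())

    def test_pop_returns_last_pushed_item(self):
        s = Stack()
        s.push(1)
        self.assertEqual(s.pop(), 1)
```

Note that the split version has more test methods but roughly the same number of assertions — which is exactly why counting assertions tells you nothing about how well the tests are organised.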

AlfredoCasado
  • I'm going to mark this as the answer for now because I like the explanation in that Google book you referenced. Also, you mentioned Kent Beck, and people here do respect him as an authority on TDD. In his book, near the middle of page 125, when talking about test isolation, he says: "If I had one test broken, I wanted one problem." Counting assertions isn't really the problem; the problem is giant test methods caused by just adding more stuff (assertions) to existing tests rather than making new tests. – ZombieDev Feb 17 '14 at 23:13
1

Perhaps the Agile Manifesto says it best:

> Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

If you try to run a project by metrics, you end up getting whatever you measure, e.g., lots of assertions that don't actually test the right things.
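As a purely illustrative sketch (in Python, nothing from the question's codebase) of how an assertion-count metric gets gamed:

```python
# A "test" that inflates an assertion-count metric without verifying
# any production code: every assertion below is trivially true.
def test_padding_the_metrics():
    assert True
    assert 1 == 1
    assert "a" in "abc"
    assert len([]) == 0
    # ...add a hundred more of these and the report looks great,
    # while the actual software remains completely untested.
```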

Or from a more general management perspective: http://hbr.org/2010/06/column-you-are-what-you-measure/ar/1

Mike Stockdale
1

The book Code Complete by Steve McConnell covers code and test quality aspects, including metrics relevant to your case.

Metrics should stimulate desirable behaviour. Desirable behaviour, in your case, is writing more good tests. So, try to explain to your managers that counting assertions is not linked to desirable behaviour. It can actually cause the undesirable behaviour mentioned in the other answers here.

I agree with the point from the Agile Manifesto. However, it can only be applied successfully in a healthy environment. I have observed cases where some engineers refused to write unit tests because they believed they had been "successful" without them for the last 20 years or so. In such cases, it does not matter how much you trust them to get the job done. Metrics change behaviour. They generate bias-free data for better decision making. They are useful — but only if they are the right metrics.

Good luck!

Andrew
1

Perhaps the root cause of this is the lack of visibility caused by long test methods. Really, there should be one logical assertion per test (this can be a group of assertions, but it should be as few as possible to test the given scenario). Anything more and the test is less readable, and it's harder for someone to understand what it's actually testing. Focused tests are easier to maintain, too, and will also be easier to change when the system under test changes. Long test methods tend to be very fragile, because they cover too much of the system's behaviour and can require changing every time anything changes.

I can't find an example off hand, but a couple of good resources are Kent Beck and Mark Seemann. This [programmers.stackexchange.com question](http://programmers.stackexchange.com/questions/7823/is-it-ok-to-have-multiple-asserts-in-a-single-unit-test) is very relevant to this question.

Metrics themselves can always be tricked: you can get 100% code coverage with loads of assertions and not actually test anything. You will likely get more value, at least initially, from cleaning up the tests.
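To sketch what "one logical assertion" can look like in practice (in Python; parse_version is a hypothetical function invented purely for illustration):

```python
# Hypothetical function under test (illustrative only): parse a
# "major.minor.patch" version string into a tuple of ints.
def parse_version(text):
    major, minor, patch = text.split(".")
    return (int(major), int(minor), int(patch))

# Three physical assertions, but one *logical* assertion: they all
# verify the single scenario "a full version string parses correctly".
def test_parse_full_version_fieldwise():
    major, minor, patch = parse_version("2.7.1")
    assert major == 2
    assert minor == 7
    assert patch == 1

# Often clearer as one structural comparison: the test still fails
# for exactly one reason, and the failure shows the whole actual
# value rather than stopping at the first mismatched field.
def test_parse_full_version():
    assert parse_version("2.7.1") == (2, 7, 1)
```

Both tests verify the same single scenario; the second form has one third the assertion count, which again shows how little that number means.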

gmn
  • I really like the idea stated in that [programmers.stackexchange.com question](http://programmers.stackexchange.com/questions/7823/is-it-ok-to-have-multiple-asserts-in-a-single-unit-test) you linked to: tests should fail for exactly one reason. That's really the problem here: these tests could fail for ... hundreds of reasons. – ZombieDev Feb 17 '14 at 23:07
  • @ZombieDev Write tests with 100% code coverage and no asserts, and the test suite will stay green forever. If the requirement is that tests should fail for exactly one reason, then you should have 100 tests, 100 assertions, and 1 assertion per test. If you have 100 tests and only 20 of them have assertions, I would recommend removing the other 80, because they add zero value for code quality and there are costs for running these fake tests. – Peter Ebelsberger Jul 25 '17 at 19:48