I've been getting requests lately from management to produce reports on the number of assertions run by our software's tests. They want this so they can tell whether people are writing tests or not. My inclination is to just tell them "no, you can't have that, because you don't need it," but that doesn't seem to satisfy them.
Part of the problem is that our teams are writing long test cases with lots of assertions, and then claiming they've tested some new feature because they've tacked a few more assertions onto an existing test case.
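To make it concrete, here's a made-up sketch in Python's unittest of the pattern versus what I'd rather see (the Order class, the "SAVE10" code, and the tests are all hypothetical stand-ins, not our real code):

```python
import unittest

# Minimal stand-in domain code so the example runs; purely illustrative.
class Order:
    def __init__(self, items):
        self.items = items
        self.total = 10 * len(items)
        self.status = "pending"

    def apply_discount(self, code):
        if code == "SAVE10":
            self.total = round(self.total * 0.9, 2)

    def ship(self):
        self.status = "shipped"


# The pattern I'm seeing: one ever-growing test that accumulates
# assertions for unrelated features.
class TestOrders(unittest.TestCase):
    def test_orders(self):
        order = Order(items=["widget"])           # original behavior
        self.assertEqual(order.total, 10)
        self.assertEqual(order.status, "pending")
        order.apply_discount("SAVE10")            # new feature bolted on
        self.assertEqual(order.total, 9)
        order.ship()                              # and another one
        self.assertEqual(order.status, "shipped")


# What I'd rather see: one focused test case per behavior, so a failure
# points at a specific feature instead of "something in orders broke".
class TestDiscounts(unittest.TestCase):
    def test_save10_code_takes_ten_percent_off(self):
        order = Order(items=["widget"])
        order.apply_discount("SAVE10")
        self.assertEqual(order.total, 9)


if __name__ == "__main__":
    unittest.main()
```

The assertion count goes up either way; only the second version actually tells you the new feature has a test of its own.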
So my question is: does anyone know of good, authoritative (as much as anything in testing really can be) resources, articles, or even books that describe how tests should be split into test cases, or why counting assertions is a bad metric?
I mean, counting assertions, or assertions per test, as a measure of whether people are writing good tests is about as useful as counting lines of code per test. But they just don't buy it. I tried searching with Google, but the problem is that nobody bothers to count assertions, so I can't point at anything and say "this is why it's a bad idea".