I am working on a legacy software product where we, from time to time, get pretty close to shipping (or actually do ship) changes that degrade performance badly. The issue stems from some old design choices where a seemingly innocent change can, in some cases, cause a huge rise in database queries for certain tasks.
To catch this up front, we would like to run some benchmark tests automatically as part of our Continuous Integration build chain.
I have been looking for frameworks for implementing tests in an MSTest/NUnit attribute-based style.
I have read various posts on the subject, searched Google, and found a few interesting tips (like this: .NET benchmarking frameworks).
The tools I have been able to find on NuGet and Google all seem to be small, experimental projects, in which case I might be better off building something we have full control over.
The question is: does anyone have experience setting this up, can anyone recommend a tool, or am I best off creating my own test runner to get something that fits our framework precisely?
The basic requirements are:
- Able to measure either time or a number as a result
- For algorithmic tests, run the code a number of loops and report an average
- For database tests, be able to run something up front that doesn't count as test time, e.g. clear the SQL cache or set up our framework (see the sketch after this list)
- Output data either to a database or in some sort of structured format (the MSTest .trx format would be really nice for easy CruiseControl.NET support)
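For the database point, what I imagine is a setup hook the runner executes before timing starts. A minimal sketch, where the [BenchmarkSetup] attribute name and the Database.ExecuteNonQuery helper are placeholders of my own (the DBCC commands are SQL Server specific):

// Hypothetical setup method; the runner would invoke this before
// starting the stopwatch, so cache clearing never counts as test time.
[BenchmarkSetup]
public void PrepareColdCache()
{
    Database.ExecuteNonQuery("DBCC DROPCLEANBUFFERS"); // flush the data cache
    Database.ExecuteNonQuery("DBCC FREEPROCCACHE");    // flush cached query plans
}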
Additional requirements could be
- Ways to group test results, e.g. all tests of a given module or class could be bundled
- Reporting abilities: show graphs, detect trends (is a test becoming slower over time?)
As I list the requirements, it feels like they might be very specific to us, and thus that I would be better off implementing our own small framework. In that case, I am thinking of something like the outline below.
TestRunner
- Look for test classes based on an attribute, execute all methods carrying the test attribute, and store the results (in a file, as direct output, or in a database)
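As a rough sketch of the discovery part (BenchmarkAttribute is the custom attribute used in the examples below; WriteResult is a placeholder for whatever output format we settle on):

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public sealed class BenchmarkAttribute : Attribute { }

public static class BenchmarkRunner
{
    public static void RunAll(Assembly assembly)
    {
        // Find every public instance method marked [Benchmark].
        var methods = assembly.GetTypes()
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
            .Where(m => m.IsDefined(typeof(BenchmarkAttribute), false));

        foreach (var method in methods)
        {
            var instance = Activator.CreateInstance(method.DeclaringType);
            var result = method.Invoke(instance, null); // TimeSpan or int
            WriteResult(method.DeclaringType.Name, method.Name, result);
        }
    }

    static void WriteResult(string className, string methodName, object result)
    {
        // Placeholder: write to console, a .trx/XML file, or a database.
        Console.WriteLine("{0}.{1}: {2}", className, methodName, result);
    }
}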
TestMethods
Something like:
[Benchmark]
public TimeSpan Job_GetAllRows_TimeOnColdCache()
{
    // Arrange
    ...

    // Act
    return Measure(() => Job.GetAllRows());
}

[Benchmark]
public int Job_GetAllJobs_CallsToDatabase()
{
    // Arrange
    ...
    Database.NumberOfQueriesExecuted = 0;

    // Act
    Job.GetAllJobs();
    return Database.NumberOfQueriesExecuted;
}

[Benchmark]
public TimeSpan Job_HeavyCalculation_AverageTimeToCalculate()
{
    // Arrange
    ...

    // Act: average over 1000 runs
    return Measure(Measurement.Average, 1000, () => Job.DoHeavyCalculation());
}
The Measure helper is just a wrapper that starts a Stopwatch, runs the action inside a try-finally, stops the Stopwatch in the finally block, and returns the elapsed time.
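To make that concrete, a minimal sketch, shown here as protected methods on a base class the benchmark fixtures could derive from so the call sites read as in the examples above (the base class itself is my own assumption):

using System;
using System.Diagnostics;

public enum Measurement { Average }

public abstract class BenchmarkFixture
{
    // Times a single invocation; the try/finally ensures the stopwatch
    // is stopped even if the action throws.
    protected TimeSpan Measure(Action action)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            action();
        }
        finally
        {
            stopwatch.Stop();
        }
        return stopwatch.Elapsed;
    }

    // Runs the action the given number of times and averages the elapsed
    // time (only Measurement.Average is supported in this sketch).
    protected TimeSpan Measure(Measurement mode, int iterations, Action action)
    {
        long totalTicks = 0;
        for (int i = 0; i < iterations; i++)
            totalTicks += Measure(action).Ticks;
        return TimeSpan.FromTicks(totalTicks / iterations);
    }
}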
I hope someone has a comment on this, and ideally experience with a ready-to-use framework that solves the above.