
I have a performance-critical piece of code that I would like to protect with a Maven build step, i.e. JMH would run and check that performance hasn't degraded with the local changes.

How can I check such degradation using JMH?

I've found a few related links:

I've achieved automated performance testing before (though not with Java, and not in a CI environment). One key point to note is that you never treat the result as an absolute number, since the machine the benchmark runs on can vary. A BogoMips-style or test-dependent reference can be used for relative comparison: the benchmark is then measured as some multiple of that reference time, with upper and lower bounds.

While you are typically watching for the benchmark slowing down (degrading), it's important to check the other direction as well: an unexpected speedup (e.g. better hardware support) may indicate that the per-system/architecture bounds need to be revisited.
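A rough sketch of that relative check in Java (the workloads here are just placeholder loops and the bounds are hypothetical; in practice the reference and the measured time would come from real benchmark runs on the same machine):

    public class RelativeBenchmarkCheck {

        static volatile long sink; // keeps the toy workloads from being optimized away

        // BogoMips-style calibration workload: gives a machine-dependent reference time.
        static double referenceNanos() {
            long start = System.nanoTime();
            long acc = 0;
            for (int i = 0; i < 10_000_000; i++) acc += i;
            sink = acc;
            return System.nanoTime() - start;
        }

        // Placeholder for the performance-critical code under test.
        static double targetNanos() {
            long start = System.nanoTime();
            long acc = 1;
            for (int i = 1; i < 10_000_000; i++) acc = acc * 31 + i;
            sink = acc;
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            double ratio = targetNanos() / referenceNanos();

            // Per-benchmark bounds (hypothetical values): one side flags a regression,
            // the other flags an unexpected speedup.
            double lowerBound = 0.5;
            double upperBound = 3.0;

            if (ratio < lowerBound || ratio > upperBound) {
                throw new AssertionError("Benchmark ratio " + ratio
                        + " is outside the expected range [" + lowerBound + ", " + upperBound + "]");
            }
            System.out.println("OK, ratio = " + ratio);
        }
    }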

Alun

3 Answers


I would suggest simply building a set of Runner options via the OptionsBuilder and calling run on it from within a JUnit test. While some authors recommend against this on the grounds that the benchmark is not run in a "clean" environment, I think the effects are very marginal and probably irrelevant when comparing against a reference run in the same environment.

See here for the most trivial example of setting up a Runner manually.

Runner.run() (or, in the case of a single benchmark, Runner.runSingle()) will then return a Collection<RunResult> or a single RunResult that assertions can be made against.

In order to do so, you can use the Statistics (see the docs here) that you extract from the RunResult via RunResult.getPrimaryResult().getStatistics() and assert against the numeric values it exposes ...

... or use the isDifferent() method, which lets you compare two benchmark runs within a confidence interval (useful for automatically catching outliers in both directions).
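A minimal sketch of how this could look as a JUnit 4 test; the benchmark name MyBenchmark and the hard-coded reference score are placeholders, and a real setup would load the baseline from a stored reference run:

    import java.util.Collection;

    import org.junit.Test;
    import org.openjdk.jmh.results.RunResult;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;
    import org.openjdk.jmh.runner.options.TimeValue;
    import org.openjdk.jmh.util.Statistics;

    import static org.junit.Assert.assertTrue;

    public class PerformanceRegressionTest {

        @Test
        public void benchmarkHasNotRegressed() throws RunnerException {
            Options opts = new OptionsBuilder()
                    .include("MyBenchmark")               // hypothetical benchmark class
                    .warmupIterations(5)
                    .measurementIterations(5)
                    .measurementTime(TimeValue.seconds(1))
                    .forks(1)
                    .build();

            Collection<RunResult> results = new Runner(opts).run();
            Statistics stats = results.iterator().next().getPrimaryResult().getStatistics();

            // Hypothetical reference score; in practice this would come from a
            // baseline run recorded on the same machine.
            double referenceScore = 1000.0; // e.g. ops/s

            // Fail the test (and the build) if the mean dropped below 90% of the baseline.
            assertTrue("Mean score " + stats.getMean() + " is below the allowed minimum",
                    stats.getMean() >= 0.9 * referenceScore);

            // Alternatively, keep the Statistics of a baseline run around and use
            // stats.isDifferent(baselineStats, confidence) to compare the two runs
            // within a confidence interval.
        }
    }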

Armin Braun
  • Thanks, that is pretty much what I've had coded up since I found the netty example. – Alun Mar 28 '16 at 22:44
  • I just came across [jmh-jenkins](https://github.com/blackboard/jmh-jenkins), which has a nice feature set; Baseline, Performance Change-Limits; though it does mean you need to be running inside jenkins... – Alun Mar 29 '16 at 03:39

The JMH Maven plugin does not support this. You would have to write your own Maven plugin, or use the Exec Maven plugin within your build lifecycle to execute your tests. You could write the benchmark results to a file and find another Maven plugin that reads the file and breaks the build if it does not match a given constraint.
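For illustration only, a rough sketch of a small checker class that the Exec Maven plugin could run after the benchmark step. It assumes the results were written as JSON to target/jmh-result.json, pulls the first primary score out with a simple regex rather than a proper JSON parser, and compares it against a made-up threshold:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class BenchmarkGate {

        public static void main(String[] args) throws Exception {
            // Assumed location of the JMH JSON result file.
            String json = new String(Files.readAllBytes(Paths.get("target/jmh-result.json")));

            // Naive extraction of the first primary "score" value (assumption about the layout).
            Matcher m = Pattern.compile("\"score\"\\s*:\\s*([0-9.Ee+-]+)").matcher(json);
            if (!m.find()) {
                throw new IllegalStateException("No score found in target/jmh-result.json");
            }

            double score = Double.parseDouble(m.group(1));
            double minimumScore = 1000.0; // hypothetical baseline, e.g. ops/s

            if (score < minimumScore) {
                // An uncaught exception makes the exec step, and thus the build, fail.
                throw new IllegalStateException("Benchmark score " + score
                        + " is below the allowed minimum of " + minimumScore);
            }
            System.out.println("Benchmark score " + score + " is acceptable");
        }
    }

In a real setup the threshold would come from a tracked baseline rather than a constant in the source.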

However, I have my doubts that this is a good idea. If your code changes alter the benchmark results significantly, it might just as well be that your benchmark is no longer suitable for your code. Even worse, your benchmark might become faster even though your code got slower, because the benchmark no longer reflects a real use case. Furthermore, you would have to find a baseline, because benchmarks are not suited to measuring "absolute" runtime.

There might be some corner cases where this approach is appropriate, but you should consider whether it is really worth the trouble.

Rafael Winterhalter
  • The caveats are all noted. We're running the tests on a dedicated baremetal machine, so like-to-like comparisons should be reliable. Just hoping to get some notification that _something_ changed, for better or worse, so that we are forced to look at it. Rather than waiting 3 months, and wondering which commit caused the problem... – Alun Apr 29 '16 at 20:34

Looks like it is possible: this netty.io commit added support for JMH wrapped inside a JUnit test.

I notice these options in the JMH command-line help:

-rf <type>      Result format type.
                See the list of available result formats first. 
-rff <filename> Write results to given file. 

meaning that I can tell the benchmark to output its results to a JSON file that I can then parse as part of the JUnit run.
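For reference, a minimal sketch of the programmatic equivalent of those flags (the benchmark name MyBenchmark and the output path are placeholders):

    import org.openjdk.jmh.results.format.ResultFormatType;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.RunnerException;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class JsonResultRun {

        public static void main(String[] args) throws RunnerException {
            // Programmatic equivalent of "-rf json -rff target/jmh-result.json"
            Options opts = new OptionsBuilder()
                    .include("MyBenchmark")              // hypothetical benchmark class
                    .resultFormat(ResultFormatType.JSON) // -rf json
                    .result("target/jmh-result.json")    // -rff <filename>
                    .build();

            new Runner(opts).run();
        }
    }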

The last part is then comparing that run to something else inside JUnit, perhaps SPECjvm2008?

Alun