
We are building a tool for average-case runtime analysis of Java bytecode programs. One part of this is measuring real runtimes. So we would take an arbitrary, user-provided method that may or may not have a result and may or may not have side effects (examples include Quicksort, factorial, dummy nested loops, ...) and execute it (using reflection), measuring the elapsed time. (Whether or not we benchmark properly at all is beside the point here.)

In the benchmarking code, we obviously don't do anything with the result (and some methods won't even have results). Therefore, there is no telling what the JIT may do, and we have in fact observed that it seems to optimise the whole benchmarked method call away on occasion. As the benchmarked methods are not used in isolation in reality, this renders the benchmark useless.

How can we prevent JIT from doing that? We don't want to turn it off completely because then benchmarking takes ages, and we want to benchmark "real" runtimes anyway (so we want JIT to be active inside the method).

I am aware of this question but the given scenario is too narrow; we do not know the result type (if there is one) and can therefore not use the result in some fashion the JIT does not see as useless.
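For context, the usual workaround for this class of problem (similar in spirit to JMH's `Blackhole`) is to sink whatever the reflective call returns into a `volatile` field, which works even when the result type is unknown or the method is `void`. A minimal sketch, assuming a volatile write is enough to keep the computation alive (`Sink` and `consume` are illustrative names, not an existing API):

```java
import java.lang.reflect.Method;

public class Sink {
    // Volatile sink: the JIT cannot prove writes to it are unobserved,
    // so the computation feeding each write must be kept.
    public static volatile int sink;

    // Consumes any reflective result, including the null that
    // Method.invoke returns for void methods.
    public static void consume(Object result) {
        sink ^= (result == null) ? 1 : result.hashCode();
    }

    public static void main(String[] args) throws Exception {
        Method parseInt = Integer.class.getMethod("parseInt", String.class);
        sink = 0;
        for (int i = 1; i <= 5; i++) {
            // Integer's hashCode() is its int value, so this XORs 1..5.
            consume(parseInt.invoke(null, Integer.toString(i)));
        }
        System.out.println("sink = " + sink); // 1^2^3^4^5 == 1
    }
}
```

Because `consume` takes an `Object`, it needs no knowledge of the benchmarked method's return type.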

  • Pity that this never got a real answer. +1 for "why do I have to re-explain my question in so many comments" syndrome solidarity. – sqykly Dec 01 '15 at 01:26

3 Answers


The simple solution is to write a more realistic benchmark which does something almost useful, so it will not be optimised away.

There are a number of tricks to confuse the JIT, but these are unlikely to help you.

Here is an example of a benchmark where the method is called via reflection, via MethodHandle, and directly (in which case it is compiled to nothing).

import java.lang.invoke.*;
import java.lang.reflect.*;

public class Main {
    public static void main(String... args) throws Throwable {
        for (int j = 0; j < 5; j++) {
            testViaReflection();
            testViaMethodHandle();
            testWithoutReflection();
        }
    }

    private static void testViaReflection() throws NoSuchMethodException, IllegalAccessException, InvocationTargetException {
        Method nothing = Main.class.getDeclaredMethod("nothing");
        int runs = 10000000; // triggers a warmup.
        long start = System.nanoTime();
        Object[] args = new Object[0];
        for (int i = 0; i < runs; i++)
            nothing.invoke(null, args);
        long time = System.nanoTime() - start;
        System.out.printf("A call to %s took an average of %.1f ns using reflection%n", nothing.getName(), 1.0 * time / runs);
    }

    private static void testViaMethodHandle() throws Throwable {
        MethodHandle nothing = MethodHandles.lookup().unreflect(Main.class.getDeclaredMethod("nothing"));
        int runs = 10000000; // triggers a warmup.
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) {
            nothing.invokeExact();
        }
        long time = System.nanoTime() - start;
        System.out.printf("A call to %s took an average of %.1f ns using MethodHandle%n", "nothing", 1.0 * time / runs);
    }

    private static void testWithoutReflection() {
        int runs = 10000000; // triggers a warmup.
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++)
            nothing();
        long time = System.nanoTime() - start;
        System.out.printf("A call to %s took an average of %.1f ns without reflection%n", "nothing", 1.0 * time / runs);
    }

    public static void nothing() {
        // does nothing.
    }
}

prints

A call to nothing took an average of 6.6 ns using reflection
A call to nothing took an average of 10.7 ns using MethodHandle
A call to nothing took an average of 0.4 ns without reflection
A call to nothing took an average of 4.5 ns using reflection
A call to nothing took an average of 9.1 ns using MethodHandle
A call to nothing took an average of 0.0 ns without reflection
A call to nothing took an average of 4.3 ns using reflection
A call to nothing took an average of 8.8 ns using MethodHandle
A call to nothing took an average of 0.0 ns without reflection
A call to nothing took an average of 5.4 ns using reflection
A call to nothing took an average of 13.2 ns using MethodHandle
A call to nothing took an average of 0.0 ns without reflection
A call to nothing took an average of 4.9 ns using reflection
A call to nothing took an average of 8.7 ns using MethodHandle
A call to nothing took an average of 0.0 ns without reflection

I had assumed MethodHandles would be faster than reflection, but that doesn't appear to be the case.
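The "does something almost useful" advice at the top can be sketched like this (a made-up workload; `sumOfSquares` and the checksum are illustrative, not from the answer): the result of each call feeds an accumulator that is eventually printed, so the JIT must preserve the calls.

```java
public class RealisticBench {
    // Does something "almost useful": the caller must keep the result.
    static long sumOfSquares(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += (long) i * i;
        return sum;
    }

    public static void main(String... args) {
        long checksum = 0;
        int runs = 1000;
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++)
            checksum += sumOfSquares(1000);   // each result feeds the checksum
        long time = System.nanoTime() - start;
        // Printing the checksum makes the work observable, so the JIT
        // cannot remove the calls above as dead code.
        System.out.println("checksum = " + checksum);
        System.out.printf("average %.1f ns per call%n", 1.0 * time / runs);
    }
}
```

The printed checksum is 1000 × Σ i² for i = 0..999, i.e. 332833500000; any timing you see alongside it now reflects work that actually happened.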

  • Even if you know what you are trying to benchmark, it's hard to understand what the results mean. I am not sure what you are trying to achieve. – Peter Lawrey Aug 29 '12 at 12:22
  • We get some method (from the user) and want to figure out its runtime for a given set of inputs, respectively. Therefore, the benchmarking code has to be generic (there are almost no restrictions on the given method). We are fine with the JIT optimising *inside* that method (in fact, we want it to) but in some cases, the JIT optimises the whole method away. Is that not clear from the question, should I edit? – Raphael Aug 29 '12 at 12:24
  • In which case your benchmarking code should tell the user that their method has been optimised away to nothing. Anything else would be obscuring what the JIT really does. IMHO This is exactly the sort of thing a tool should tell the user. – Peter Lawrey Aug 29 '12 at 12:28
  • Even if you could reliably detect this optimisation, it is not the intended result. We would benchmark nothing where, in reality, something *will* be executed. – Raphael Aug 29 '12 at 15:08
  • Something will be executed only when the code is not optimised away. It appears you want something to be executed when normally it wouldn't be. You could just add a little bit to the time to compensate for this. ;) – Peter Lawrey Aug 29 '12 at 15:10
  • No, the code *will* be executed in real use, because *there* someone will use the result (and the call will thus not be optimised away). – Raphael Aug 29 '12 at 15:11
  • In that case, you can use the result in your benchmark to simulate that. e.g. store it in a pre-allocated array or count the number of `null` values in a `static` field which isn't used but won't be optimised away. – Peter Lawrey Aug 29 '12 at 15:13
  • I don't know which type the result has, or even if there is one. (Please see the updated question.) – Raphael Aug 29 '12 at 15:15
  • If you are calling it via reflection, there is always an object result, even if it's always `null`. Of course, calling it via reflection is fairly slow, and using a MethodHandle would be better. Can you give an example showing how you are calling a method but cannot determine what it returns? – Peter Lawrey Aug 29 '12 at 15:18
  • I guess it is worthwhile to check whether JIT cares for reflection access to the result. (We tend to benchmark for large inputs with runtimes of the order of 10s and more, so the overhead of reflection should be negligible. But thanks for the pointer, I'll pass it on to the colleague responsible.) – Raphael Aug 29 '12 at 15:23
  • I don't know what you mean by example. We don't restrict the space of methods the user hands us, so this is inherent. – Raphael Aug 29 '12 at 15:25
  • MethodHandles in Java 7 can be faster as they are optimised more efficiently. Using reflection prevents the method from being inlined and optimised away. The contents can be optimised away, e.g. if it always returns the same value it could be reduced to a `return`. – Peter Lawrey Aug 29 '12 at 15:25
  • How does the user hand you a method? Is it via reflection or do they give you a Runnable or the like? – Peter Lawrey Aug 29 '12 at 15:26
  • Huh, so reflection *should* already prevent what we observe. So I guess the problem lies in the (dummy) algorithms we use for testing. Have to investigate. Users give us a JAR or a folder with class files and selects the method they want to investigate (in our tool). So we load the method via reflection, yes. – Raphael Aug 29 '12 at 15:26
  • Method.getReturnType() gives you the return type. But I suspect you don't need it now. – Peter Lawrey Aug 29 '12 at 15:28
  • Were you running this with any particular `java` settings? Running `java Main` with your example `Main` class fails to generate the output you posted with `java version "1.7.0_11"` on Cygwin. Instead the without-reflection case consistently takes appx 1.4-1.2ns. – dimo414 Mar 04 '13 at 02:32
  • @dimo414 No options but an earlier version of Java 7. I know they were working on improving the performance. – Peter Lawrey Mar 05 '13 at 23:11
  • Why would an earlier version behave better? – dimo414 Mar 05 '13 at 23:45
  • @dimo414 I missed what you meant. It could be an earlier version optimised some code away that they later decided was unsafe to do. The time of 0.0 is not realistic unless the loop has been removed. – Peter Lawrey Mar 07 '13 at 07:09

I do not believe there is any way to selectively disable JIT optimizations, except for some of the experimental ones (like escape analysis).

You say this:

We don't want to turn it off completely because then benchmarking takes ages, and we want to benchmark "real" runtimes anyway.

But what you are trying to do is precisely that. In a real runtime, method calls will be inlined, and they will be optimized away if they don't do anything. So by inhibiting these optimizations you would be getting measurements of method execution time that don't match what actually happens in a real program.

  • If I want to benchmark the runtime of method `m` that would be used in a useful context, it does not make sense to optimise `m` away. I am not talking about any (superfluous) calls to helper methods but the "main" method. – Raphael Aug 29 '12 at 12:14
  • You miss my point. If the "main" method would be optimized in a real program, doesn't that then mean that not optimizing it away in a benchmark would give a false result? Or perhaps you are not explaining yourself clearly enough in your question ... – Stephen C Aug 29 '12 at 14:08
  • It is only the main method for the purpose of benchmarking (say, a method that sorts an array or computes factorials), not in the "real" world. – Raphael Aug 29 '12 at 15:10
  • The fact remains, if a method doesn't produce any results and doesn't have any side-effects on any objects, then it is not a realistic piece of code, and any benchmarks that use it are dubious ... irrespective of how the JIT compiler optimizes them. Change the methods so that they are realistic and you won't have that problem. Apart from that, I cannot think of a solution. – Stephen C Aug 30 '12 at 01:33

The purpose of a benchmark is to get as close to actual performance as possible, so I don't see what you would gain here. If you suspect that the JIT will do certain things, and you wouldn't actually disable it in normal use, your best bet is to build the benchmark with that assumption. If there are ways you can write the benchmark that stress it and make it behave inefficiently under the JIT, that might be useful too, since running the benchmark under a profiler would help figure out where its efficiency breaks down.
