
Based on the discussions around an answer to this question, I discovered a really strange behaviour of the Java Hotspot optimizer. The observed behaviour can at least be seen in the Oracle VM 1.7.0_17, but seems to occur in older Java 6 versions as well.

First of all, I was already aware that the optimizer evidently knows that some methods in the standard API are invariant and have no side effects. When executing a loop like `double x=0.5; for(double d = 0; d < Math.sin(x); d += 0.001);`, the expression `Math.sin(x)` is not evaluated on each iteration: the optimizer knows that `Math.sin` has no relevant side effects and that its result is invariant, as long as `x` is not modified in the loop.
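To make the effect concrete, here is a minimal sketch (class and method names are mine, purely for illustration) contrasting the loop as written with the form the JIT effectively produces when it hoists the invariant call out of the loop:

```java
public class SinHoist {

    // The loop as written: the bound expression appears in the loop condition.
    // When the JIT hoists it, this behaves like loopHoisted below.
    static int loopNaive(double x) {
        int iterations = 0;
        for (double d = 0; d < Math.sin(x); d += 0.001) {
            iterations++;
        }
        return iterations;
    }

    // What hoisting effectively does: compute the invariant bound once,
    // since x is never modified inside the loop.
    static int loopHoisted(double x) {
        double bound = Math.sin(x);
        int iterations = 0;
        for (double d = 0; d < bound; d += 0.001) {
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // Both forms compute the same result; only the per-iteration cost differs.
        System.out.println(loopNaive(0.5) == loopHoisted(0.5));
    }
}
```

Both variants are semantically identical, which is exactly what licenses the optimization; the question is why the JIT applies it for some arguments but not others.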

Now I noticed that simply changing `x` from 0.5 to 1.0 disables this optimization. Further tests indicate that the optimization is only enabled if `abs(x) < asin(1/sqrt(2))`. Is there a good reason for this that I don't see, or is it an unnecessary limitation on the optimization conditions?

Edit: The optimization seems to be implemented in `hotspot/src/share/vm/opto/subnode.cpp`

jarnbjo
    How do you know that "the expression Math.sin(x) is not evaluated for each iteration"? Have you looked at the assembly code? Or measured time? Also note that `Math.sin` is an intrinsic method in Java 1.7 (possibly before) so the code run is not the Java code shown in the JDK source... – assylias Mar 12 '13 at 14:25
  • @assylias: By measuring time, but you have a good point. I wonder if it is the actual implementation of Math.sin, which is optimized for arguments < asin(1/sqrt(2)) and that it has nothing to do with the loop condition. – jarnbjo Mar 12 '13 at 14:42
  • @jarbjo The implementation for x86_64 cpus is here: http://hg.openjdk.java.net/jdk7u/jdk7u/hotspot/file/6e9aa487055f/src/cpu/x86/vm/stubGenerator_x86_64.cpp around line 2878. – assylias Mar 12 '13 at 14:53

1 Answer


I think your question is specifically about the Oracle JVM, because the implementation of `Math` is implementation-dependent. Here is a good answer about the Dalvik implementation, for example: native code for Java Math class

Generally

  1. sin(a) * sin(a) + cos(a) * cos(a) = 1
  2. sin(pi/2 - a) = cos(a)
  3. sin(-a) = -sin(a)
  4. cos(-a) = cos(a)

so, using these identities, any argument can be reduced to the range [0, pi/4], and we don't need a sin/cos implementation for x < 0 or x > pi/4.
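The reduction can be sketched like this (a hypothetical illustration, not the HotSpot code; `coreSin`/`coreCos` stand in for kernels that only need to be accurate on [0, pi/4]):

```java
public class TrigReduce {

    // Reduce an arbitrary argument to [0, pi/4] using the identities above,
    // then evaluate a range-limited kernel.
    // Note: the naive % reduction here itself loses precision for very large
    // arguments, which is exactly the argument-reduction quality issue the
    // Sun bug report discusses.
    static double reducedSin(double a) {
        if (a < 0) return -reducedSin(-a);                 // sin(-a) = -sin(a)
        double r = a % (2 * Math.PI);                      // periodicity
        if (r > Math.PI) return -reducedSin(r - Math.PI);  // sin(a + pi) = -sin(a)
        if (r > Math.PI / 2) r = Math.PI - r;              // sin(pi - a) = sin(a)
        if (r > Math.PI / 4) {
            return coreCos(Math.PI / 2 - r);               // sin(pi/2 - a) = cos(a)
        }
        return coreSin(r);                                 // r is now in [0, pi/4]
    }

    // Stand-ins for the range-limited kernels; delegating to Math here.
    static double coreSin(double r) { return Math.sin(r); }
    static double coreCos(double r) { return Math.cos(r); }

    public static void main(String[] args) {
        System.out.println(Math.abs(reducedSin(3.0) - Math.sin(3.0)) < 1e-12);
    }
}
```

The point is that only the final kernels ever see an argument in [0, pi/4], which is precisely the range where the x87 `fsin`/`fcos` instructions are trustworthy.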

I suppose this is the answer (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=5005861):

We are aware of the almabench results and the osnews article on trigonometric performance. However, the HotSpot implementation of sin/cos on x86 for years has used and continues to use fsin/fcos x87 instructions in a range where those instructions meet the quality of implementation requirements, basically [-pi/4, pi/4]. Outside of that range, the results of fsin/fcos can be anywhere in the range [-1, 1] with little relation to the true sine/cosine of the argument. For example, fsin(Math.PI) only gets about half the digits of the result correct. The reason for this is that the fsin/fcos instruction implementations use a less than ideal algorithm for argument reduction; the argument reduction process is explained in bug 4857011.

Conclusion: you have seen the argument reduction algorithm in action, not a limitation of the optimizer.
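You can see the software argument reduction doing its job in Java itself. `Math.PI` is only the closest double to pi, sitting about 1.2246e-16 below the true value, so the mathematically correct sine of that double is about +1.2246e-16, not zero. Raw `fsin` would get roughly half the digits of this wrong; Java's result is accurate:

```java
public class PiSine {
    public static void main(String[] args) {
        // Math.PI != pi: it is the nearest double, about 1.2246e-16 too small.
        // Hence the true sine of Math.PI is about +1.2246e-16, and that is
        // what Java returns, thanks to accurate software argument reduction.
        double s = Math.sin(Math.PI);
        System.out.println(s);
    }
}
```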

  • To what extent is the fact that `Math.Sin(Math.Pi)` does not yield precisely zero a "feature"? I would posit that most real-world code that calls `Math.Sin` does so with arguments that have about a 1/4LSB to 1/2LSB rounding error; argument reduction using `Math.Pi` would counteract this rounding error in most real-world applications, while argument reduction using π preserves it. – supercat Jun 03 '14 at 20:47