Obviously java.lang.StrictMath contains additional functions (hyperbolics etc.) which java.lang.Math doesn't, but is there a difference in the functions which are found in both libraries?


- This question is entirely answered in the Javadoc. – user207421 Nov 20 '10 at 11:23
- @EJP - I believe that, on SO, RTFM is never a good answer. – ripper234 Nov 26 '10 at 19:57
4 Answers
The Javadoc for the Math class provides some information on the differences between the two classes:

Unlike some of the numeric methods of class StrictMath, all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required.

By default many of the Math methods simply call the equivalent method in StrictMath for their implementation. Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available, to provide higher-performance implementations of Math methods. Such higher-performance implementations still must conform to the specification for Math.
Therefore, the Math class lays out some rules about what certain operations should do, but it does not demand that the exact same results be returned by every implementation of the library.

This allows a specific implementation to return a similar, but not bit-for-bit identical, result when, for example, Math.cos is called. It permits platform-specific implementations (such as x86 floating point versus, say, SPARC floating point) which may return different results. (Refer to the Software Implementations section of the Sine article in Wikipedia for some examples of platform-specific implementations.)

With StrictMath, however, every implementation must return the same result, which is desirable where reproducibility of results across different platforms is required.
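For example, here is a minimal sketch of what that latitude means on a single machine (the class name ReproCheck is just for illustration): any mismatch it reports is permitted by the Math specification, while the StrictMath values are required to be bit-for-bit identical on every compliant JVM. Depending on the platform and the JIT, it may well report zero mismatches.

// ReproCheck.java - compare Math.sin and StrictMath.sin bit for bit
public class ReproCheck {
    public static void main(String[] args) {
        long mismatches = 0;
        for (int i = 1; i <= 1_000_000; i++) {
            double x = i * 1e-3;
            double relaxed = Math.sin(x);       // may be replaced by a platform intrinsic
            double strict = StrictMath.sin(x);  // specified to be reproducible everywhere
            if (Double.doubleToLongBits(relaxed) != Double.doubleToLongBits(strict)) {
                mismatches++;
            }
        }
        System.out.println("bit-for-bit mismatches: " + mismatches);
    }
}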

- But why would different platform-specific implementations want to produce different results? Isn't cosine universally defined? – Aivar Jan 07 '13 at 10:37
- @Aivar: For the reasons listed in the quote from the `Math` class -- to take advantage of native methods available to the specific platform, which in many cases is likely to be faster than a software-based solution that is guaranteed to give exactly the same answer on all platforms. – coobird Jan 07 '13 at 16:11
- OK, so it means that some platforms have chosen not to compute the most precise answer that fits in the given number of bits, but have traded precision for efficiency? And different platforms have made different trade-offs? – Aivar Jan 07 '13 at 18:41
- @Aivar That would seem to be the case from reading the linked Wikipedia article. Simply stated, the `Math` class' specification allows the use of platform-specific algorithms which will not necessarily return the same result as other platforms. – coobird Jan 10 '13 at 13:59
- @Aivar It's not merely precision vs. efficiency, but also that in many cases there is not necessarily one obvious "most precise answer". For instance, the Sine article coobird links to mentions "there is no standard algorithm for calculating sine". – dimo414 Mar 12 '14 at 15:07
- Can it be safely assumed that the integer methods (e.g. `addExact`) behave identically regardless of whether `StrictMath` or `Math` is used? – Max Barraclough Feb 17 '21 at 16:12
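Regarding the integer methods such as `addExact` raised in the last comment: their specifications in Math and StrictMath are identical, and integer arithmetic leaves no room for the floating-point relaxation quoted above, so identical behaviour from both classes is expected on every platform. A minimal sketch to check this yourself (the class name ExactCheck is just for illustration):

// ExactCheck.java - the exact integer methods behave the same in Math and StrictMath
public class ExactCheck {
    public static void main(String[] args) {
        // normal case: both return the plain sum
        System.out.println(Math.addExact(2, 3) == StrictMath.addExact(2, 3)); // true

        // overflow case: both are specified to throw ArithmeticException
        try {
            Math.addExact(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("Math.addExact: " + e.getMessage());
        }
        try {
            StrictMath.addExact(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("StrictMath.addExact: " + e.getMessage());
        }
    }
}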
@ntoskrnl As somebody working with JVM internals, I would like to second your opinion that "intrinsics don't necessarily behave the same way as StrictMath methods". To see (or prove) this, we can just write a simple test.

Take Math.pow for example. Examining the Java code for java.lang.Math.pow(double a, double b), we see:

public static double pow(double a, double b) {
    return StrictMath.pow(a, b); // default impl. delegates to StrictMath
}

But the JVM is free to implement it with intrinsics or runtime calls, so the returned result can differ from what we would expect from StrictMath.pow.

The following code demonstrates this by testing Math.pow() against StrictMath.pow():
//Strict.java, testing StrictMath.pow against Math.pow
import java.util.Random;

public class Strict {

    static double testIt(double x, double y) {
        return Math.pow(x, y);
    }

    public static void main(String[] args) throws Exception {
        final double[] vs = new double[100];
        final double[] xs = new double[100];
        final double[] ys = new double[100];
        final Random random = new Random();

        // compute StrictMath.pow results
        for (int i = 0; i < 100; i++) {
            xs[i] = random.nextDouble();
            ys[i] = random.nextDouble();
            vs[i] = StrictMath.pow(xs[i], ys[i]);
        }

        boolean printed_compiled = false;
        boolean ever_diff = false;
        long len = 1000000;
        long start;
        long elapsed;

        while (true) {
            start = System.currentTimeMillis();
            double blackhole = 0;
            for (int i = 0; i < len; i++) {
                int idx = i % 100;
                double res = testIt(xs[idx], ys[idx]);
                blackhole += res; // consume the result so the call cannot be optimized away
                if (i >= 0 && i < 100) {
                    // presumably interpreted
                    if (vs[idx] != res && (!Double.isNaN(res) || !Double.isNaN(vs[idx]))) {
                        System.out.println(idx + ":\tInterpreted:" + xs[idx] + "^" + ys[idx] + "=" + res);
                        System.out.println(idx + ":\tStrict pow : " + xs[idx] + "^" + ys[idx] + "=" + vs[idx] + "\n");
                    }
                }
                if (i >= 250000 && i < 250100 && !printed_compiled) {
                    // presumably compiled at this time
                    if (vs[idx] != res && (!Double.isNaN(res) || !Double.isNaN(vs[idx]))) {
                        System.out.println(idx + ":\tcompiled :" + xs[idx] + "^" + ys[idx] + "=" + res);
                        System.out.println(idx + ":\tStrict pow :" + xs[idx] + "^" + ys[idx] + "=" + vs[idx] + "\n");
                        ever_diff = true;
                    }
                }
            }
            elapsed = System.currentTimeMillis() - start;
            System.out.println(elapsed + " ms ");
            if (!printed_compiled && ever_diff) {
                printed_compiled = true;
                return;
            }
        }
    }
}
I ran this test with OpenJDK 8u5-b31 and got the result below:
10: Interpreted:0.1845936372497491^0.01608930867480518=0.9731817015518033
10: Strict pow : 0.1845936372497491^0.01608930867480518=0.9731817015518032
41: Interpreted:0.7281259501809544^0.9414406865385655=0.7417808233050295
41: Strict pow : 0.7281259501809544^0.9414406865385655=0.7417808233050294
49: Interpreted:0.0727813262968815^0.09866028976654662=0.7721942440239148
49: Strict pow : 0.0727813262968815^0.09866028976654662=0.7721942440239149
70: Interpreted:0.6574309575966407^0.759887845481148=0.7270872740201638
70: Strict pow : 0.6574309575966407^0.759887845481148=0.7270872740201637
82: Interpreted:0.08662340816125613^0.4216580281197062=0.3564883826345057
82: Strict pow : 0.08662340816125613^0.4216580281197062=0.3564883826345058
92: Interpreted:0.20224488115245098^0.7158182878844233=0.31851834311978916
92: Strict pow : 0.20224488115245098^0.7158182878844233=0.3185183431197892
10: compiled :0.1845936372497491^0.01608930867480518=0.9731817015518033
10: Strict pow :0.1845936372497491^0.01608930867480518=0.9731817015518032
41: compiled :0.7281259501809544^0.9414406865385655=0.7417808233050295
41: Strict pow :0.7281259501809544^0.9414406865385655=0.7417808233050294
49: compiled :0.0727813262968815^0.09866028976654662=0.7721942440239148
49: Strict pow :0.0727813262968815^0.09866028976654662=0.7721942440239149
70: compiled :0.6574309575966407^0.759887845481148=0.7270872740201638
70: Strict pow :0.6574309575966407^0.759887845481148=0.7270872740201637
82: compiled :0.08662340816125613^0.4216580281197062=0.3564883826345057
82: Strict pow :0.08662340816125613^0.4216580281197062=0.3564883826345058
92: compiled :0.20224488115245098^0.7158182878844233=0.31851834311978916
92: Strict pow :0.20224488115245098^0.7158182878844233=0.3185183431197892
290 ms
Please note that Random is used to generate the x and y values, so your mileage will vary from run to run. But the good news is that the results of the compiled version of Math.pow at least match those of the interpreted version of Math.pow. (Off topic: even this consistency was only enforced in 2012, with a series of bug fixes on the OpenJDK side.)
The reason? Well, it's because OpenJDK uses intrinsics and runtime functions to implement Math.pow (and other math functions) instead of just executing the Java code, the main purpose being to take advantage of x87 instructions so that the computation runs faster. As a result, StrictMath.pow is never called from Math.pow at runtime (for the OpenJDK version used here, to be precise).

And this arrangement is totally legitimate according to the Javadoc of the Math class (also quoted by @coobird above):
The class Math contains methods for performing basic numeric operations such as the elementary exponential, logarithm, square root, and trigonometric functions.
Unlike some of the numeric methods of class StrictMath, all implementations of the equivalent functions of class Math are not defined to return the bit-for-bit same results. This relaxation permits better-performing implementations where strict reproducibility is not required.
By default many of the Math methods simply call the equivalent method in StrictMath for their implementation. Code generators are encouraged to use platform-specific native libraries or microprocessor instructions, where available, to provide higher-performance implementations of Math methods. Such higher-performance implementations still must conform to the specification for Math.
And the conclusion? Well, for languages with dynamic code generation such as Java, check whether what you see in the 'static' code is really what gets executed at runtime. Your eyes can sometimes really mislead you.
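A practical follow-up, if you want to see the Java-source delegation actually take effect with the Strict test above: keep the JIT away from the intrinsic. The first command (interpreter-only mode) is a standard JVM option; the second is a HotSpot-specific diagnostic switch whose exact spelling and availability vary between JVM builds, so treat it as an assumption to verify on your own version:

java -Xint Strict
java -XX:+UnlockDiagnosticVMOptions -XX:DisableIntrinsic=_dpow Strict

Run this way, Math.pow should simply execute the Java code shown earlier and delegate to StrictMath.pow, so no differences are printed -- which also means the test loops forever printing timings, because ever_diff never becomes true.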
Did you check the source code? Many methods in java.lang.Math delegate to java.lang.StrictMath.

Example:

public static double cos(double a) {
    return StrictMath.cos(a); // default impl. delegates to StrictMath
}

- +1 for reading the Java source code. This is a strong point for Java over .NET: a large part of the source code to the Java API ships with the JDK in a file called src.zip. And what's not there can be downloaded now that the JVM is open sourced. Reading the Java source may not be the most advertised way of solving problems: it may seem like a bad idea as you're usually supposed to "abide by the public interface and not the implementation." However, reading the source has one strong advantage: it will always give you the truth. And sometimes that is the most valuable thing of all. – Mike Clark Nov 20 '10 at 10:36
- @Andrew Thanks for the tip. I just finished reading a tutorial on how to set that up in Visual Studio. Java may still have a slight advantage in that you can download the source code for the VM itself, not just its standard library (framework). Anyway, thanks! – Mike Clark Mar 21 '12 at 19:02
- Unfortunately, in this case the source code doesn't tell the whole truth. The JVM is free to replace the methods in Math with platform-specific intrinsics. Intrinsics don't necessarily behave the same way as StrictMath methods, but their behavior *is* constrained by the documentation in the Math class. – ntoskrnl Mar 12 '14 at 13:48
- Key word there being "default" - on many platforms `Math` will *not* actually use `StrictMath`. – dimo414 Jun 01 '16 at 04:57
Quoting java.lang.Math:

Accuracy of the floating-point Math methods is measured in terms of ulps, units in the last place.

...

If a method always has an error less than 0.5 ulps, the method always returns the floating-point number nearest the exact result; such a method is correctly rounded. A correctly rounded method is generally the best a floating-point approximation can be; however, it is impractical for many floating-point methods to be correctly rounded.
And then we see under Math.pow(..), for example:
The computed result must be within 1 ulp of the exact result.
Now, what is the ulp? As expected, java.lang.Math.ulp(1.0) gives 2.220446049250313e-16, which is 2^-52. (Also, Math.ulp(8) gives the same value as Math.ulp(10) and Math.ulp(15), but not Math.ulp(16).) In other words, we are talking about the last bit of the mantissa.

So, the result returned by java.lang.Math.pow(..) may be wrong in the last of the 52 bits of the mantissa, as we can confirm in Tony Guan's answer.
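To make those numbers concrete, here is a small sketch (the class name UlpDemo is just for illustration) that prints the ulp values mentioned above and counts how many representable doubles lie between the two pow results shown for index 10 in Tony Guan's run:

// UlpDemo.java - ulp sizes, and the 1-ulp latitude of Math.pow in practice
public class UlpDemo {
    public static void main(String[] args) {
        System.out.println(Math.ulp(1.0));                    // 2.220446049250313E-16, i.e. 2^-52
        System.out.println(Math.ulp(8.0) == Math.ulp(10.0));  // true: same binade [8, 16)
        System.out.println(Math.ulp(8.0) == Math.ulp(15.0));  // true
        System.out.println(Math.ulp(8.0) == Math.ulp(16.0));  // false: 16 starts the next binade

        // the two values printed for index 10 in the answer above
        double compiled = 0.9731817015518033; // Math.pow via the intrinsic
        double strict = 0.9731817015518032;   // StrictMath.pow
        long ulpsApart = Math.abs(Double.doubleToLongBits(compiled)
                                - Double.doubleToLongBits(strict));
        System.out.println("ulps apart: " + ulpsApart);       // 1: adjacent doubles, within the allowed 1 ulp
    }
}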
It would be nice to dig up some concrete 1 ulp and 0.5 ulp code to compare. I'll speculate that quite a lot of extra work is required to get that last bit correct, for the same reason that if we know two numbers A and B rounded to 52 significant bits and we wish to know A×B correct to 52 significant bits, with correct rounding, then we actually need to know a few extra bits of A and B to get the last bit of A×B right. But that means we shouldn't round intermediate results A and B by forcing them into doubles; we need, effectively, a wider type for intermediate results. (In what I've seen, most implementations of mathematical functions rely heavily on multiplications with hard-coded precomputed coefficients, so if those need to be wider than double, there's a big efficiency hit.)
