
The normal implementation of Math.abs(x) (as implemented by Oracle) is given by:

    public static double abs(double a) {
        return (a <= 0.0D) ? 0.0D - a : a;
    }

Wouldn't it be faster to just set the single bit that encodes the sign of the number to zero (or one)? I suppose that there is only one bit encoding the sign, and that it is always the same bit, but I may be wrong about this.

Or are our computers generally unable to operate on single bits with a single machine instruction?

If a faster implementation is possible, can you give it?
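
For concreteness, the bit manipulation I have in mind would look something like this (just a sketch, assuming IEEE 754 doubles with the sign in bit 63; the class name `BitAbs` is made up):

```java
// Sketch of a bit-twiddling abs: clear the IEEE 754 sign bit (bit 63).
// BitAbs is a made-up name for illustration.
public final class BitAbs {

    static double abs(double a) {
        // 0x7FFFFFFFFFFFFFFFL keeps the exponent and mantissa, zeroes the sign bit
        return Double.longBitsToDouble(
                Double.doubleToRawLongBits(a) & 0x7FFFFFFFFFFFFFFFL);
    }

    public static void main(String[] args) {
        System.out.println(abs(-3.5));                     // 3.5
        System.out.println(abs(-0.0));                     // 0.0
        System.out.println(abs(Double.NEGATIVE_INFINITY)); // Infinity
    }
}
```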

edit:

It has been pointed out to me that Java code is platform independent, and as such it cannot depend on which machine instructions a particular machine offers. To optimize code, however, the JVM HotSpot optimizer does take the specifics of the machine into account, and may well apply the very optimization under consideration.

Through a simple test, however, I have found that at least on my machine, the Math.abs function doesn't seem to get optimized down to a single machine instruction. My code was as follows:

    long before = System.currentTimeMillis();
    int o = 0;
    for (double i = 0; i<1000000000; i++)
        if ((i-500)*(i-500)>((i-100)*2)*((i-100)*2)) // 4680 ms
            o++;
    System.out.println(o);
    System.out.println("using multiplication: "+(System.currentTimeMillis()-before));
    before = System.currentTimeMillis();
    o = 0;
    for (double i = 0; i<1000000000; i++)
        if (Math.abs(i-500)>(Math.abs(i-100)*2)) // 4778 ms
            o++;
    System.out.println(o);
    System.out.println("using Math.abs: "+(System.currentTimeMillis()-before));

Which gives me the following output:

    234
    using multiplication: 4985
    234
    using Math.abs: 5587

Supposing that the multiplication is performed by a single machine instruction, it seems that, at least on my machine, the JVM HotSpot optimizer doesn't optimize the Math.abs function into a single-instruction operation.
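
A fairer way to time this would be to warm the JIT up first and use System.nanoTime; something like the following sketch (class and method names are made up, and the iteration count is reduced):

```java
// Sketch of the same comparison with JIT warm-up and System.nanoTime.
// AbsBench, withMul and withAbs are made-up names for illustration.
public final class AbsBench {

    static int withMul() {
        int o = 0;
        for (double i = 0; i < 1_000_000; i++)
            if ((i - 500) * (i - 500) > ((i - 100) * 2) * ((i - 100) * 2))
                o++;
        return o;
    }

    static int withAbs() {
        int o = 0;
        for (double i = 0; i < 1_000_000; i++)
            if (Math.abs(i - 500) > Math.abs(i - 100) * 2)
                o++;
        return o;
    }

    public static void main(String[] args) {
        // warm up so both loops are JIT-compiled before we time them
        for (int w = 0; w < 10; w++) { withMul(); withAbs(); }

        long t0 = System.nanoTime();
        int a = withMul();
        long t1 = System.nanoTime();
        int b = withAbs();
        long t2 = System.nanoTime();

        System.out.println(a + " " + b); // both loops count the same 234 values
        System.out.printf("mul: %d us, abs: %d us%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000);
    }
}
```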

Tim Kuipers

2 Answers

5

My first thought was that it's because of NaN (Not-a-Number) values, i.e. if the input is NaN it should be returned unchanged. But this turns out not to be a requirement: harold's test has shown that the JVM's internal optimization does not preserve the sign of NaNs (unless you use StrictMath).
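
harold's observation can be reproduced along these lines (a sketch; `NanSign` is a made-up name, and what the second line prints depends on whether the intrinsic kicked in on your JVM):

```java
// Sketch: construct a NaN with the sign bit set and see what abs does to it.
// NanSign is a made-up name for illustration.
public final class NanSign {

    public static void main(String[] args) {
        // a quiet NaN with the sign bit (bit 63) set
        double negNan = Double.longBitsToDouble(0xFFF8000000000000L);

        // the sign bit of the input really is set
        System.out.println(Double.doubleToRawLongBits(negNan) < 0);

        // whether Math.abs clears it depends on the JVM/intrinsic in use,
        // so there is no fixed expected value here
        System.out.println(Double.doubleToRawLongBits(Math.abs(negNan)) < 0);
    }
}
```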

The documentation of Math.abs says:

In other words, the result is the same as the value of the expression: `Double.longBitsToDouble((Double.doubleToLongBits(a)<<1)>>>1)`

So the option of bit manipulations was known to the developers of this class but they decided against it.
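
You can check that the documented expression agrees with Math.abs for ordinary values (a quick sketch; `DocAbs` is a made-up name):

```java
// Sketch: the bit-shifting expression quoted in the Math.abs documentation.
// DocAbs is a made-up name for illustration.
public final class DocAbs {

    static double docAbs(double a) {
        // shifting the sign bit out and back in zeroes it
        return Double.longBitsToDouble((Double.doubleToLongBits(a) << 1) >>> 1);
    }

    public static void main(String[] args) {
        double[] samples = { -1.5, -0.0, 0.0, 2.25, Double.NEGATIVE_INFINITY };
        for (double d : samples)
            System.out.println(docAbs(d) == Math.abs(d)); // true for every sample
    }
}
```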

Most probably because optimizing this Java code makes no sense: in most environments, the HotSpot optimizer will replace the invocation with the appropriate FPU instruction once it encounters it in a hot spot. This happens with a lot of the java.lang.Math methods, as well as with Integer.rotateLeft and similar methods. They may have a pure Java implementation, but if the CPU has an instruction for the operation, the JVM will use it.

Holger
  • Is NaN really required to be returned unmodified? A NaN with the sign changed is, after all, still a NaN.. – harold Oct 29 '13 at 13:32
  • Ok, apparently the sign of NaN is [not preserved](http://ideone.com/MydSpA) by `Math.abs`, so unconditionally clearing the sign bit is completely valid – harold Oct 29 '13 at 13:40
  • So you have a good proof that the Java code inside `java.lang.Math.abs` is not really executed (as it would preserve the sign) but is replaced by an intrinsic function. You may compare the result with `java.lang.StrictMath.abs`… – Holger Oct 29 '13 at 13:47
  • Ok. You're right. OP asked about the regular `Math.abs` though – harold Oct 29 '13 at 13:55
  • It seems that the hotspot optimizer doesn't do a real good job then, since it seems to take more than one operation to get to the absolute value. In a test I just ran, `Math.abs([1]) > Math.abs([2])` was slower than `[1] * [1] > [2] * [2]`, for some expressions [1] and [2]. – Tim Kuipers Oct 29 '13 at 14:05
  • @Tim Kuipers: if you just want to compare the magnitude of two values, you might use the multiplication. But for the general implementation, a multiplication is not an alternative to `abs`. So `abs` doesn’t have to be faster than a multiplication. It only has to be faster than the bit manipulation alternatives. By the way, on my machine it needs billions of invocations to see a difference. – Holger Oct 29 '13 at 14:19
1

I'm not a Java expert, but I think the problem is that this optimization is not expressible in the language: bit operations on floats are machine-format specific, hence not portable, and thus not allowed in Java. I'm not sure whether any of the JIT compilers will do the optimization.

DrC
  • In Java the format is well specified (and most FPUs use this format anyway). See [doubleToRawLongBits](http://docs.oracle.com/javase/7/docs/api/java/lang/Double.html#doubleToRawLongBits(double)) – Holger Oct 29 '13 at 12:04