The standard implementation of Math.abs(x) (as implemented by Oracle) is given by

    public static double abs(double a) {
        return (a <= 0.0D) ? 0.0D - a : a;
    }
Wouldn't it be faster to simply set the single bit encoding the sign of the number to zero (or one)? I assume that there is exactly one bit encoding the sign, and that it is always the same bit, but I may be wrong about this.
Or are our computers generally unfit to operate on single bits with atomic instructions?
If a faster implementation is possible, can you give it?
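For reference, the sign-bit idea can be expressed in pure Java via the bit-reinterpretation methods on Double. This is a sketch of the technique being asked about, not a claim about what the JDK or the JIT actually emits; in IEEE 754 doubles the sign is bit 63, so masking it off yields the absolute value:

```java
public class BitAbs {
    // Every bit set except bit 63, the IEEE 754 sign bit.
    private static final long SIGN_MASK = 0x7fffffffffffffffL;

    static double abs(double a) {
        // Reinterpret the double as its raw 64-bit pattern,
        // clear the sign bit, and reinterpret back as a double.
        return Double.longBitsToDouble(Double.doubleToRawLongBits(a) & SIGN_MASK);
    }

    public static void main(String[] args) {
        System.out.println(abs(-2.5)); // prints 2.5
        System.out.println(abs(2.5));  // prints 2.5
        System.out.println(abs(-0.0)); // prints 0.0
    }
}
```

Unlike the comparison-based version, this is branch-free, and it also maps -0.0 to 0.0 and NaN to NaN, matching the documented behavior of Math.abs.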
edit:
It has been pointed out to me that Java code is platform-independent, and as such it cannot depend on the atomic instructions of a particular machine. When optimizing code, however, the JVM's HotSpot compiler does take the specifics of the machine into account, and may well apply the very optimization under consideration.
Through a simple test, however, I have found that, at least on my machine, the Math.abs function doesn't seem to get optimized down to a single atomic instruction. My code was as follows:
    long before = System.currentTimeMillis();
    int o = 0;
    for (double i = 0; i < 1000000000; i++)
        if ((i - 500) * (i - 500) > ((i - 100) * 2) * ((i - 100) * 2)) // 4680 ms
            o++;
    System.out.println(o);
    System.out.println("using multiplication: " + (System.currentTimeMillis() - before));
    before = System.currentTimeMillis();
    o = 0;
    for (double i = 0; i < 1000000000; i++)
        if (Math.abs(i - 500) > (Math.abs(i - 100) * 2)) // 4778 ms
            o++;
    System.out.println(o);
    System.out.println("using Math.abs: " + (System.currentTimeMillis() - before));
This gives me the following output:
234
using multiplication: 4985
234
using Math.abs: 5587
Supposing that multiplication is performed by an atomic instruction, it seems that, at least on my machine, the JVM's HotSpot optimizer does not optimize the Math.abs function down to a single-instruction operation.
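As an aside, timing both loops in the same run of main can mislead, because the JIT may compile the two loops at different points. A fairer version of the same experiment (a rough sketch; a harness like JMH would be the proper tool) warms both code paths up before timing them. Note that the two conditions are mathematically equivalent, since squaring preserves the ordering of non-negative values:

```java
public class AbsBench {
    // Count iterations where |i - 500| > 2 * |i - 100|, using Math.abs.
    static int useAbs(int n) {
        int o = 0;
        for (double i = 0; i < n; i++)
            if (Math.abs(i - 500) > Math.abs(i - 100) * 2)
                o++;
        return o;
    }

    // Same condition expressed by comparing the squares instead.
    static int useMul(int n) {
        int o = 0;
        for (double i = 0; i < n; i++)
            if ((i - 500) * (i - 500) > ((i - 100) * 2) * ((i - 100) * 2))
                o++;
        return o;
    }

    public static void main(String[] args) {
        int n = 100_000_000;
        // Warm-up runs so HotSpot has compiled both methods before timing.
        useAbs(n);
        useMul(n);
        long t0 = System.nanoTime();
        int a = useAbs(n);
        long t1 = System.nanoTime();
        int b = useMul(n);
        long t2 = System.nanoTime();
        System.out.println("Math.abs:       " + (t1 - t0) / 1_000_000 + " ms, count " + a);
        System.out.println("multiplication: " + (t2 - t1) / 1_000_000 + " ms, count " + b);
    }
}
```

The absolute timings will of course vary by machine and JVM, so this sketch makes no claim about which variant wins; it only removes warm-up effects from the comparison.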