I'm looking into why a test case is failing.
The problematic test can be reduced to computing (4.0/9.0) ** (1.0/2.6), rounding the result to 6 digits, and checking it against a known value (as a string):
#include <stdio.h>
#include <math.h>

int main(void)
{
    printf("%.06f\n", powf(4.0 / 9.0, 1.0 / 2.6));
    return 0;
}
If I compile and run this with gcc 4.1.2 on Linux, I get:
0.732057
Python agrees, as does Wolfram|Alpha:
$ python2.7 -c 'print "%.06f" % (4.0/9.0)**(1/2.6)'
0.732057
However, I get the following result with gcc 4.4.0 on Linux and gcc 4.2.1 on OS X:
0.732058
Using a double behaves identically (although I didn't test this extensively).
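For reference, the double check is just the same program with pow in place of powf, roughly along these lines:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Same computation as above, but in double precision via pow(). */
    printf("%.06f\n", pow(4.0 / 9.0, 1.0 / 2.6));
    return 0;
}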
I'm not sure how to narrow this down any further. Is this a gcc regression? A change in the rounding algorithm? Am I doing something silly?
Edit: Printing the result to 12 digits shows that the digit in the 7th place is 4 vs 5, which explains the rounding difference but not the underlying difference in value:
gcc 4.1.2:
0.732057452202
gcc 4.4.0:
0.732057511806
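The 12-digit numbers above came from a quick probe along these lines (the exact code may differ slightly; the %a line just dumps the result in hex-float form, in case comparing exact values helps narrow things down):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float  f = powf(4.0 / 9.0, 1.0 / 2.6);
    double d = pow(4.0 / 9.0, 1.0 / 2.6);

    /* 12 decimal digits, to see where the two compilers' results diverge. */
    printf("float:  %.12f\n", f);
    printf("double: %.12f\n", d);

    /* %a prints the exact value in hexadecimal floating-point form. */
    printf("float (hex): %a\n", f);
    return 0;
}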
Here's the gcc -S
output from both versions: https://gist.github.com/1588729