
I'm looking into why a test case is failing.

The problematic test can be reduced to computing (4.0/9.0) ** (1.0/2.6), rounding it to 6 digits, and checking against a known value (as a string):

#include <stdio.h>
#include <math.h>

int main(void) {
    printf("%.06f\n", powf(4.0/9.0, (1.0/2.6)));
    return 0;
}

If I compile and run this in gcc 4.1.2 on Linux, I get:

0.732057

Python agrees, as does Wolfram|Alpha:

$ python2.7 -c 'print "%.06f" % (4.0/9.0)**(1/2.6)'
0.732057

However, I get the following result with gcc 4.4.0 on Linux and gcc 4.2.1 on OS X:

0.732058

A double behaves identically (although I didn't test this extensively).

I'm not sure how to narrow this down any further. Is this a gcc regression? A change in rounding algorithm? Or am I doing something silly?

Edit: Printing the result to 12 digits, the digit in the 7th place is 4 vs. 5, which explains the rounding difference but not the difference in value:

gcc 4.1.2:

0.732057452202

gcc 4.4.0:

0.732057511806

Here's the gcc -S output from both versions: https://gist.github.com/1588729
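For reference, a minimal sketch of the extended print (the same program as above, with one extra printf):

#include <stdio.h>
#include <math.h>

int main(void) {
    float f = powf(4.0/9.0, (1.0/2.6));
    printf("%.06f\n", f);  /* the 6-digit value the test compares */
    printf("%.12f\n", f);  /* extended output showing where the results diverge */
    return 0;
}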

dbr
  • To eliminate `printf()` as a variable, print or inspect the bits of the memory holding the value, so you remove the "convert to string" part of the story. – unwind Jan 10 '12 at 12:17
  • The difference between those two is exactly the [machine epsilon](http://en.wikipedia.org/wiki/Machine_epsilon) of a 32 bit processor. Also gcc 4.4's result is closer to the actual value of the expression. – David Brown Jan 10 '12 at 12:44

4 Answers


Recent gcc versions are able to use MPFR to do compile-time floating point computation. My guess is that your recent gcc does that, and uses a higher precision for the compile-time version. This is allowed by at least the C99 standard (I haven't checked whether other versions changed it).

6.3.1.8/2 in C99

The values of floating operands and of the results of floating expressions may be represented in greater precision and range than that required by the type; the types are not changed thereby.

Edit: your gcc -S results confirm that. I haven't checked the computations, but the old one has (after substituting the memory operands with their constant contents):

movss 1053092943, %xmm1
movss 1055100473, %xmm0
call powf

calling powf with the precomputed values of 4/9.0 and 1/2.6 and then printing the result after promotion to double, while the new one just prints the float 0x3f3b681f promoted to double.
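One way to see both behaviours from a single binary is to defeat the compile-time folding, for example with volatile (a sketch; whether the two lines actually differ depends on your gcc's folding behaviour):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Constant arguments: the compiler may fold this call at compile time,
       possibly at higher precision than float. */
    printf("compile time: %.12f\n", powf(4.0/9.0, (1.0/2.6)));

    /* volatile blocks constant folding, forcing a runtime call to powf. */
    volatile float base = 4.0/9.0;
    volatile float power = 1.0/2.6;
    printf("run time:     %.12f\n", powf(base, power));
    return 0;
}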

AProgrammer
  • Interesting, this sounds like the cause! Could you elaborate on what in the disassembly indicates this? (one thing - the older gcc 4.1.2 presumably used MPFR - its result matches other sources, including Daniel's answer) – dbr Jan 10 '12 at 13:57

I think the old gcc used double under the hood. Doing the calculation in Haskell and printing the results to full precision, I get

Prelude Text.FShow.RealFloat> FD ((4/9) ** (1/2.6))
0.73205748476369969512944635425810702145099639892578125
Prelude Text.FShow.RealFloat> FF ((4/9) ** (1/2.6))
0.732057511806488037109375

So the double result agrees with what gcc-4.1.2 produced, and the float result with what gcc-4.4.0 does. The results gcc-4.5.1 produces here for float and double, respectively, agree with the Haskell results.

As AProgrammer cited, the compiler is allowed to use higher precision; the old gcc did, and the new one apparently doesn't.
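The same comparison can be done in C (a sketch; 17 and 9 significant digits are enough to round-trip a double and a float, respectively):

#include <stdio.h>
#include <math.h>

int main(void) {
    double d = pow(4.0/9.0, 1.0/2.6);       /* computed in double precision */
    float  f = powf(4.0f/9.0f, 1.0f/2.6f);  /* computed in single precision */
    printf("double: %.17g\n", d);
    printf("float:  %.9g\n", f);
    return 0;
}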

Daniel Fischer

There are many players here. Gcc is most probably just forwarding the calculation to your floating-point processor; you can check the disassembly for that.

You can check the binary result against a reference bit pattern (from the same Wolfram|Alpha):

float q = powf(4.0/9.0, (1.0/2.6));
unsigned int bits;                    /* same width as a float, unlike long long */
memcpy(&bits, &q, sizeof bits);       /* well-defined, unlike a pointer cast */
unsigned int reference = 0x3f3b681f;  /* IEEE-754 bits of the expected float */
assert(bits == reference);

(This needs string.h for memcpy and assert.h for assert.)

But printf is also a possible culprit: the decimal representation of that number may be the problem, too. You could try printf("%.06f\n", 0.73205748); to test that.
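For example, a sketch feeding the question's two 12-digit values straight to printf; since the two lines print differently, the formatting is consistent and the difference is in the math:

#include <stdio.h>

int main(void) {
    printf("%.06f\n", 0.732057452202);  /* gcc 4.1.2's 12-digit result */
    printf("%.06f\n", 0.732057511806);  /* gcc 4.4.0's 12-digit result */
    return 0;
}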

xtofl

You should be able to distinguish between the format rounding differently and the math giving a different answer, just by printing more (all) significant digits.

If it looks the same when no rounding takes place, printf("%0.6f", ...) is just rounding differently.
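For a print with no decimal rounding at all, C99's %a conversion shows the exact bits as a hexadecimal float (a sketch):

#include <stdio.h>
#include <math.h>

int main(void) {
    float f = powf(4.0/9.0, (1.0/2.6));
    printf("%a\n", f);     /* exact hex-float form, no decimal rounding */
    printf("%.12f\n", f);  /* decimal, with enough digits to see any divergence */
    return 0;
}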


OK, with the old Linux+python environment I have to hand, I get:

Python 2.4.3 (#1, Jun 11 2009, 14:09:37)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> (4.0/9.0)**(1.0/2.6)
0.7320574847636997

which is different again.

Maybe it would be simpler to ask instead, how many significant figures are really significant for this unit test?

Useless
  • Updated question with values printed to 12 digits - the value in the 7th place changes between 4 and 5, so printf is rounding consistently – dbr Jan 10 '12 at 12:08
  • Your result rounds to the gcc 4.1.2 result: `round(0.7320574847636997, 6) == 0.732057`. The unit test would ideally be accurate to 6 figures, as that is how many digits are stored in the LUT format by default, but I'll see if I can reduce this to 5 – dbr Jan 10 '12 at 14:09