I was trying to see how different languages handle floating-point numbers. I know that there are inherent issues in floating-point representation, which is why if you do 0.3 + 0.6 in Python you get 0.8999999999999999 and not 0.9.
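(The same thing shows up in C if you ask printf for more digits than its default six; a quick sketch, just to mirror the Python example, assuming an ordinary IEEE 754 double:)

#include <stdio.h>

int main(void) {
    /* 0.3 and 0.6 have no exact binary representation, so their sum,
       shown at full precision, is slightly less than 0.9 */
    printf("%.17g\n", 0.3 + 0.6);   /* typically prints 0.89999999999999991 */
}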
However, these snippets of code simply left me astounded:
#include <stdio.h>
#include <assert.h>

int main(void) {
    double x = 0.1, sum = 0;
    for (int i = 0; i < 10; ++i)   /* add 0.1 ten times */
        sum += x;
    printf("%.9lf\n", sum);
    assert(sum == 1.0);
}
The above snippet works fine: it prints 1.000000000 and the assertion passes. However, the following snippet gives a runtime error due to the assertion failing:
#include <stdio.h>
#include <assert.h>

int main(void) {
    double x = 0.1, sum = 0;
    for (int i = 0; i < 10; ++i)   /* add 0.1 ten times */
        sum += x;
    assert(sum == 1.0);            /* this assertion now fires */
    printf("%.9lf\n", sum);
}
The only change between the two snippets is the order of the assert and printf statements. This leads me to think that printf is somehow modifying its argument and rounding it off.
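(For what it's worth, one way to see what sum actually holds, independent of printf's default rounding, is to print it with more significant digits; a minimal sketch, assuming IEEE 754 doubles:)

#include <stdio.h>

int main(void) {
    double x = 0.1, sum = 0;
    for (int i = 0; i < 10; ++i)
        sum += x;
    /* 17 significant digits are enough to distinguish the stored double from 1.0 */
    printf("%.17g\n", sum);       /* typically prints 0.99999999999999989 */
    printf("%d\n", sum == 1.0);   /* typically prints 0 */
}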
Can someone please throw some light on this?