5

I wrote the following C++ code:

#include <cstdio>

int main()
{
    float a, b;
    int c;

    a = 8.6;
    b = 1.4;
    c = a + b;

    printf("%d\n", c);
    return 0;
}

The output is 10.

But when I run the following code:

#include <cstdio>

int main()
{
    float a, b;
    int c;

    a = 8.7;
    b = 1.3;
    c = a + b;

    printf("%d\n", c);
    return 0;
}

The output is 9.

What is the difference between the two snippets that causes them to give different outputs?

Mateen Ulhaq
chinmayaposwalia

3 Answers

15

There is no such number as 8.7 or 1.3 in floating point. There is a number 10, and a number -6.5, and a number 0.96044921875... but no 8.7 or 1.3.

At best, your computer can round 8.7 to the nearest floating point number, and round 1.3 to the nearest floating point number as well. The computer adds these rounded numbers to each other, and then rounds the result.

Do not use floating point numbers for money.

#include <stdio.h>
int main(int argc, char *argv[])
{
    float a = 8.7, b = 1.3;
    printf("Looks like: %.1f + %.1f = %.1f\n", a, b, a+b);
    printf("The truth: %.20f + %.20f = %.20f\n", a, b, a+b);
    return 0;
}

On an x86 GCC/Linux computer, I get the result:

Looks like: 8.7 + 1.3 = 10.0
The truth: 8.69999980926513671875 + 1.29999995231628417969 = 9.99999976158142089844

On a PPC GCC/OS X computer, I get the result:

Looks like: 8.7 + 1.3 = 10.0
The truth: 8.69999980926513671875 + 1.29999995231628417969 = 10.00000000000000000000

Notice how 8.7 and 1.3 are both rounded down in this particular case. If you chose numbers that get rounded up, you might see a number larger than 10 on the right hand side.

See What Every Computer Scientist Should Know About Floating-Point Arithmetic, by David Goldberg.

Dietrich Epp
  • Do not use **binary** floating-point for money. Decimal floating-point was standardized precisely for financial applications. – Pascal Cuoq Sep 10 '11 at 07:28
  • @Pascal Cuoq: Shouldn't one use fixed point decimal for money? In some places such arithmetic is legally mandated... – Dietrich Epp Sep 10 '11 at 18:41
2

Floating point numbers are not the same as real numbers and their behavior is quite different.

There are infinitely many real numbers, while any floating point type has only finitely many values and can therefore represent only a small subset of the reals.

Since not all real numbers can be represented as floating point, a floating point assignment or operation may give you slightly different results than the same done in the real number space.

See the Wikipedia entry on floating point for an introduction. The section about floating point accuracy is particularly interesting and gives other examples similar to yours.

Miguel Grinberg
0

There's no real difference between the two. They both behave in ways that are unpredictable.

What you're doing is equivalent to flipping a coin twice and asking what you did differently to get heads one time and tails the other. It's not that you did anything different, it's that this is what happens when you flip coins.

If you ask a person to add one third and two thirds using 6-digit decimal precision and then round down to an integer, you might get 0 and you might get 1. It will depend on things like whether they represent 2/3 as "0.666666" or "0.666667", and both are acceptable. So both 0 and 1 are acceptable answers. If you're not prepared to accept either answer, don't ask that kind of question.

David Schwartz
  • In floating point, 0.666667 is the ONLY acceptable representation of 2/3 with 6 decimal digits. Certain operations (+, -, *, /, sqrt) are required to be within 1/2 ULP of the exact answer, and 0.666666 does not satisfy this criterion for computing the value of 2/3. – Dietrich Epp Sep 10 '11 at 05:39
  • I wasn't talking about floating point, I was talking about 6 digit decimal precision, which is a type of fixed point. (It was an analogy.) – David Schwartz Sep 10 '11 at 17:58
  • I was also talking about 6 digit decimal. 1/2 ULP means that the result of computing 2/3 must be within 1/2 × 10^-6 of 2/3, which is satisfied by the usual rounding methods taught in school. The only cases where you would be unsure of the result are those with a 5 in the last place, e.g., 0.5 can round to 0 or 1 depending on your choice of rules (round up, round down, round toward zero, round away from zero, round toward even). However, there is no set of acceptable rules that causes 2/3 to ever round to 0.666666. – Dietrich Epp Sep 10 '11 at 18:50
  • In that case, bad things will happen if you take 2/3, subtract 1/3 twice, and then multiply by a million. The point was not to get bogged down in the details but to show why "wrong" output should be expected. – David Schwartz Sep 10 '11 at 19:03
  • But if you take 2/3 and subtract 1/3 twice, in IEEE binary floating point you are always guaranteed to get exactly 0, no matter what the precision. My point is that the results are only unexpected if you don't know the rules. – Dietrich Epp Sep 10 '11 at 19:09
  • We aren't talking about binary floating point. We're talking about 6-digit fixed precision decimal. It was meant to be a simple example. Yes, you can write a book about it. – David Schwartz Sep 10 '11 at 19:16