
Let's say we have this kind of loop (pseudocode):

double d = 0.0
for i in 1..10 {
    d = d + 0.1
    print(d)
}

In C with `printf("%f", d)` I get this:

0.100000
0.200000
0.300000
...
1.000000

In C++ with `cout << d` I get this:

0.1
0.2
...
1

In Java with `System.out.println(d)` I get this:

0.1
0.2
0.3 (in debug mode I see 0.30000000000000004 there, but it prints 0.3)
...
0.7
0.7999999999999999
0.8999999999999999
0.9999999999999999

So my questions are these:

  1. Why does Java print this simple code so badly, while C prints it correctly?
  2. How does this behave in other languages?
– user219882

3 Answers


As answered here, this is not specific to any particular language.

See here: What Every Programmer Should Know About Floating-Point Arithmetic

There are infinitely many real numbers, but computers work with a finite number of bits (32 or 64 bits today). As a result, floating-point arithmetic done by computers cannot represent all real numbers exactly; 0.1 is one of the numbers it cannot represent.

Note that this is not an issue in one particular language but in all programming languages, because it comes from the way computers represent real numbers.
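
For example, a minimal Java sketch (any language using IEEE 754 doubles behaves the same way) showing the approximation leaking out through arithmetic:

    public class NotExact {
        public static void main(String[] args) {
            // 0.1 has no finite binary expansion, so the stored double is only
            // the closest 64-bit approximation; the error surfaces in arithmetic:
            System.out.println(0.1 + 0.2);        // 0.30000000000000004
            System.out.println(0.1 + 0.2 == 0.3); // false
        }
    }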

– Manuel Selva

Why does Java print this simple code so badly, while C prints it correctly?

Since you are not comparing the same output operations, you get different results.

The behaviour of double is exactly the same across these languages, since each uses the hardware's floating-point unit to perform the operations. The only difference is the method you have chosen to display the result.

In Java, if you run

double d = 0;
for (int i = 1; i <= 10; i++)
    System.out.printf("%f%n", d += 0.1);

it prints

0.100000
0.200000
0.300000
0.400000
0.500000
0.600000
0.700000
0.800000
0.900000
1.000000
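
The difference is purely in the formatting step: `Double.toString` (what `println` uses) prints the shortest decimal string that round-trips to the same double, while `%f` rounds to six decimal places. A minimal sketch contrasting the two on the same value:

    public class FormatDemo {
        public static void main(String[] args) {
            double d = 0;
            for (int i = 0; i < 8; i++) d += 0.1; // accumulates rounding error
            System.out.println(d);                // 0.7999999999999999
            System.out.printf("%f%n", d);         // 0.800000 (error hidden by rounding)
        }
    }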

If you run

double d = 0;
for (int i = 0; i < 8; i++) d += 0.1;
System.out.println("Summing 0.1, 8 times " + new BigDecimal(d));
System.out.println("How 0.8 is represented " + new BigDecimal(0.8));

you get

Summing 0.1, 8 times 0.79999999999999993338661852249060757458209991455078125
How 0.8 is represented 0.8000000000000000444089209850062616169452667236328125
– Peter Lawrey
  • Does it mean that in every language the double value 0.8 is stored as 0.79999999999999? Because I see this __ugly__ value in debug mode in the variable view. – user219882 Jan 17 '12 at 14:31
  • @Tomas: the way you compute `.8` yields `.79999999999999993339`, but if you just enter the literal, you'll get `.80000000000000004441`. – Fred Foo Jan 17 '12 at 14:33
  • No, the value of 0.8 is not exact, but the sum of 0.1 + 0.1 + ... + 0.1 is slightly less than the representation of 0.8. See my edit. – Peter Lawrey Jan 17 '12 at 14:35
  • And what happens if I try to compute to a precision of 20 decimal places? Will it be exact with BigDecimal or not? – user219882 Jan 17 '12 at 14:41
  • With BigDecimal, it's as accurate as you specify. It also supports different rounding strategies. This is why it's a popular choice when dealing with money. – Peter Lawrey Jan 17 '12 at 14:45
  • If you construct a `BigDecimal` using the `new BigDecimal(double)` constructor, it will have exactly the same value as the passed `double`; that means that `new BigDecimal(0.1)` will have the "wrong" value. Use the `new BigDecimal(String)` constructor instead; its value will be precise, i.e. the value of `new BigDecimal("0.1")` will be precisely 0.1. – Natix Jan 17 '12 at 14:53
  • Technically the behaviour may not be _exactly_ the same in each language. Certain libraries, compilers, and compiler options, allow modification of the floating point mode on the CPU as well as specifying truncation flags. It is therefore possible that equivalent code in two languages may yield different results. – edA-qa mort-ora-y Jan 17 '12 at 16:26
  • @natix In this case using `new BigDecimal(double)` is the right answer because I want to show the exact represented value without any rounding in the conversion to a String. `new BigDecimal(String)` wouldn't show this and is only useful if you have a String in the first place. Another option to minimise rounding error is to use `BigDecimal.valueOf(double)` – Peter Lawrey Jan 17 '12 at 16:30
  • @Peter Lawrey I agree, I was just pointing this out for Tomas. – Natix Jan 17 '12 at 16:59
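
To make the constructor distinction from the comments above concrete, here is a minimal sketch using only the standard `java.math.BigDecimal` API:

    import java.math.BigDecimal;

    public class Constructors {
        public static void main(String[] args) {
            // double constructor: the exact binary value behind the literal 0.1
            System.out.println(new BigDecimal(0.1));
            // 0.1000000000000000055511151231257827021181583404541015625

            // String constructor: exactly the decimal digits written
            System.out.println(new BigDecimal("0.1"));
            // 0.1

            // valueOf(double) goes through Double.toString, so it also prints 0.1
            System.out.println(BigDecimal.valueOf(0.1));
            // 0.1
        }
    }
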
  1. Because of the way the print routines behave. 0.1 cannot be exactly represented in a binary floating-point format.
  2. In Python:

    >>> print('%.20f' % (.1 * 8))
    0.80000000000000004441
    >>> d = .0
    >>> for i in range(10):
    ...  d += .1
    ...  print('%.20f' % d)
    ... 
    0.10000000000000000555
    0.20000000000000001110
    0.30000000000000004441
    0.40000000000000002220
    0.50000000000000000000
    0.59999999999999997780
    0.69999999999999995559
    0.79999999999999993339
    0.89999999999999991118
    0.99999999999999988898
    

    But note:

    >>> print('%.20f' % .8)
    0.80000000000000004441
    
– Fred Foo