
I noticed a strange behavior when multiplying decimal values in C#. Consider the following multiplication operations:

1.1111111111111111111111111111m * 1m = 1.1111111111111111111111111111 // OK
1.1111111111111111111111111111m * 2m = 2.2222222222222222222222222222 // OK
1.1111111111111111111111111111m * 3m = 3.3333333333333333333333333333 // OK
1.1111111111111111111111111111m * 4m = 4.4444444444444444444444444444 // OK
1.1111111111111111111111111111m * 5m = 5.5555555555555555555555555555 // OK
1.1111111111111111111111111111m * 6m = 6.6666666666666666666666666666 // OK
1.1111111111111111111111111111m * 7m = 7.7777777777777777777777777777 // OK
1.1111111111111111111111111111m * 8m = 8.888888888888888888888888889  // Why not 8.8888888888888888888888888888 ?
1.1111111111111111111111111111m * 9m = 10.000000000000000000000000000 // Why not 9.9999999999999999999999999999 ?

What I cannot understand is the last two of the above cases. How is that possible?

– user1126360

2 Answers


decimal stores 28 or 29 significant digits: its mantissa is a 96-bit integer, so it can hold values from -79,228,162,514,264,337,593,543,950,335 to +79,228,162,514,264,337,593,543,950,335.

That means that up to about 7.9... you can get 29 significant digits accurately - but above that you can't. That's why both the 8 and the 9 results go wrong, but the earlier values don't. In general, you should only rely on 28 significant digits, to avoid odd situations like this.
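You can see the bound directly; here's a small sketch using the real decimal.MaxValue constant:

using System;

class BoundDemo
{
    static void Main()
    {
        // The largest magnitude the 96-bit mantissa can hold - it starts with 7.9...
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335

        // 29 significant digits starting with 8 exceed that bound,
        // so the runtime rounds the result back to 28 significant digits:
        Console.WriteLine(1.1111111111111111111111111111m * 8m);
        // 8.888888888888888888888888889
    }
}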

Once you reduce your original input to 28 significant figures, you'll get the output you expect:

using System;

class Test
{
    static void Main()
    {
        // 28 significant digits - one fewer than the original 29-digit input
        var input = 1.111111111111111111111111111m;
        for (int i = 1; i < 10; i++)
        {
            decimal output = input * (decimal) i;
            Console.WriteLine(output);
        }
    }
}
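
With the input limited to 28 significant digits, every product is exactly representable, and the program prints:

1.111111111111111111111111111
2.222222222222222222222222222
3.333333333333333333333333333
4.444444444444444444444444444
5.555555555555555555555555555
6.666666666666666666666666666
7.777777777777777777777777777
8.888888888888888888888888888
9.999999999999999999999999999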
– Jon Skeet
  • I'm not sure what exactly you mean by "rely on" 28 digits. Do you mean explicitly rounding off things like the results of divisions? The fact that `Decimal` uses base-ten exponents rather than base-two means that the nominal numeric value of a `Decimal` will match that of its concise string representation, but the fact that it's a floating-point type would suggest that equality-testing with `Decimal` is apt to have the same problems as with any other floating-point type. – supercat Aug 16 '13 at 21:47
  • @supercat: I mean that if you've got 29 digits you need to represent, you may not be able to do so exactly, but that you can represent up to 28 digits exactly. I think it's much more reasonable to rely on equality for `decimal` than for `float`/`double` - for one thing, you don't need to worry about the possibility of some operations being performed with more accuracy depending on whether the value is in a register or not. It would still have to be done with significant care, but in many cases I think it would be fine. – Jon Skeet Aug 17 '13 at 06:42

Mathematicians distinguish between the rational numbers and their superset, the real numbers. Arithmetic operations on rational numbers are well defined and precise. Arithmetic (addition, subtraction, multiplication, and division) on real numbers is "precise" only to the extent that irrational numbers are either kept in symbolic form or happen to reduce, in some expressions, to a rational number. For example, the square root of two has no decimal (or any other rational-base) representation; yet the square root of two multiplied by the square root of two is rational - 2, obviously.

Computers, and the languages running on them, generally implement only rational numbers - hidden behind names such as int, long int, float, double precision, or real (FORTRAN) that suggest real numbers. But the rational numbers they can represent are a limited subset, unlike the set of rational numbers, which is infinite.

A trivial example, not found on computers: 1/2 * 1/2 = 1/4. That works fine if you have a class of rational numbers AND the numerators and denominators never exceed the limits of integer arithmetic, so (1, 2) * (1, 2) -> (1, 4), as in the sketch below.
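
For illustration, a minimal sketch of such a rational type (this Rational struct is hypothetical, not a framework type, and it omits normalization and overflow checks):

using System;

// Hypothetical minimal rational-number type; numerator and denominator
// silently overflow once they exceed the range of long.
struct Rational
{
    public readonly long Num, Den;
    public Rational(long num, long den) { Num = num; Den = den; }

    public static Rational operator *(Rational a, Rational b)
        => new Rational(a.Num * b.Num, a.Den * b.Den);

    public override string ToString() => $"{Num}/{Den}";
}

class RationalDemo
{
    static void Main()
    {
        var half = new Rational(1, 2);
        Console.WriteLine(half * half); // 1/4 - exact, no rounding anywhere
    }
}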

But suppose the rational numbers available were decimal AND limited to a single digit after the decimal point - impractical, but representative of the kind of choice made when picking an implementation to approximate rational numbers (float, real, etc.). Then 1/2 would be perfectly convertible to 0.5, and 0.5 + 0.5 would equal 1.0 exactly, but 0.5 * 0.5 (exactly 0.25) would have to be rounded to either 0.2 or 0.3!
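
A small sketch of that last effect, simulating the one-digit restriction with decimal.Round (the restriction itself is hypothetical; decimal.Round is a real method whose default midpoint rule is banker's rounding):

using System;

class OneDigitDemo
{
    static void Main()
    {
        decimal half = 0.5m;
        Console.WriteLine(half + half); // 1.0 - exact, fits in one decimal place
        Console.WriteLine(half * half); // 0.25 - needs a second decimal place

        // Forced back to one decimal place, the exact product is lost;
        // with the default banker's rounding, 0.25 becomes 0.2.
        Console.WriteLine(decimal.Round(half * half, 1)); // 0.2
    }
}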

– Fred Mitchell