Is the equality comparison for C# decimal types any more likely to work as we would intuitively expect than it is for other floating-point types?
2 Answers
I guess that depends on your intuition. I would assume that some people would think of the result of dividing 1 by 3 as the fraction 1/3, and others would think more along the lines of "Oh, 1 divided by 3 can't be represented as a decimal number, we'll have to decide how many digits to keep, let's go with 0.333".
If you think in the former way, `Decimal` won't help you much, but if you think in the latter way, and are explicit about rounding when needed, it is more likely that operations that are "intuitively" not subject to rounding errors in decimal, e.g. dividing by 10, will behave as you expect. This is more intuitive to most people than the behavior of a binary floating-point type, where powers of 2 behave nicely, but powers of 10 do not.
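To make that contrast concrete, here is a small sketch; the specific values are my own illustrative choices:

```csharp
using System;

class PowersOfTenDemo
{
    static void Main()
    {
        // In binary floating point, powers of 2 are exact but powers of 10 are not.
        Console.WriteLine(0.125 + 0.25 == 0.375);   // True  (1/8 and 1/4 are exact in binary)
        Console.WriteLine(0.1 + 0.2 == 0.3);        // False (0.1, 0.2 and 0.3 are all rounded in binary)

        // In decimal the "nice" denominators are powers of 10, so dividing by 10 is exact.
        decimal tenth = 1m / 10m;
        Console.WriteLine(tenth == 0.1m);           // True
        Console.WriteLine(tenth * 10m == 1m);       // True

        // Division by 3 still has to be rounded; being explicit about the rounding
        // keeps later comparisons predictable.
        decimal third = decimal.Round(1m / 3m, 3);
        Console.WriteLine(third);                   // 0.333
    }
}
```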

- -1 Since this is not really answering the question, except to rehash my previous answer. – Noldorin Sep 21 '11 at 01:31
- @Noldorin - I disagree that it's a rehash, as your answer doesn't cover the issue that binary arithmetic is not intuitive to most. Also, I disagree with your answering "no" to the question of equality being more likely to work as expected. Not saying using equality is a good idea, but look at e.g. the amount of confused Javascript users on various web sites who have problems due to the Number type being binary. – Jonas Høgh Sep 21 '11 at 06:11
- It's nothing to do with being intuitive... It's simply the internal representation and the way rounding is done, which I discuss and quote from MSDN. Meh. – Noldorin Sep 21 '11 at 13:38
- @Noldorin - whatever. The question specifically asks about whether equality is intuitive. I would say that `(0.1m + 0.2m == 0.3m) == true` is a lot more intuitive than `(0.1f + 0.2f == 0.3f) == false` – Jonas Høgh Sep 21 '11 at 14:11
- @JonasH: The one nice thing about `Decimal` is that when one displays numbers one can actually see their real values. The rules about when computations yield particular values are not necessarily more intuitive than with `double`. Addition, for example, is not associative. Adding one to a `Decimal` and then subtracting one may change its value, even though fixed-point numbers would never do such a thing. – supercat Aug 15 '13 at 22:36
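A sketch of both behaviours discussed in the comments above; the value `7m + 1e-28m` is my own illustrative pick to trigger the precision loss supercat describes:

```csharp
using System;

class DecimalEqualityDemo
{
    static void Main()
    {
        // Sums of short decimal fractions are exact in decimal, but not in binary.
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False with double (binary) arithmetic

        // Decimal is still floating point, backed by a 96-bit integer coefficient.
        // d is 7 plus 10^-28, which just fits; adding 1 would push the required
        // coefficient past the 96-bit maximum (about 7.92e28), so the 10^-28 part
        // is rounded away and the subtraction cannot bring it back.
        decimal d = 7m + 1e-28m;
        Console.WriteLine(d);                     // prints the exact stored value, trailing 1 and all
        Console.WriteLine((d + 1m) - 1m);         // the trailing 1 is gone
        Console.WriteLine((d + 1m) - 1m == d);    // False
    }
}
```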
Basically, no. The `Decimal` type simply represents a specialised sort of floating-point number that is designed to reduce rounding error specifically in the base 10 system. That is, a `Decimal` scales an integer coefficient by a power of 10 (denary) rather than the usual power of 2. Hence, it is a rather more appropriate type for monetary calculations -- though not of course limited to such applications.
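One way to see that layout is `decimal.GetBits`; the value `1.50m` below is just an arbitrary example:

```csharp
using System;

class DecimalLayoutDemo
{
    static void Main()
    {
        // decimal.GetBits exposes the layout: a 96-bit integer coefficient in
        // bits[0..2], plus the sign and a power-of-ten scale packed into bits[3].
        int[] bits = decimal.GetBits(1.50m);
        Console.WriteLine(bits[0]);                  // 150 -> the coefficient
        Console.WriteLine((bits[3] >> 16) & 0xFF);   // 2   -> the scale, i.e. the value is 150 / 10^2

        // The scale travels with the value: 1.50m and 1.5m are stored differently
        // but compare equal, and ToString keeps the trailing zero.
        Console.WriteLine(1.50m == 1.5m);            // True
        Console.WriteLine(1.50m);                    // 1.50
    }
}
```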
From the MSDN page for the structure:
> The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 rather than 1.
>
> A decimal number is a floating-point value that consists of a sign, a numeric value where each digit in the value ranges from 0 to 9, and a scaling factor that indicates the position of a floating decimal point that separates the integral and fractional parts of the numeric value.
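The code sample that the excerpt refers to is not reproduced above; a likely reconstruction, based on the result the text describes, is dividing 1 by 3 and multiplying back:

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // 1/3 has no finite decimal expansion, so the quotient is rounded to
        // 28 significant digits and the error survives multiplying back by 3.
        decimal dividend = 1m;
        decimal divisor = 3m;
        Console.WriteLine(dividend / divisor * divisor);   // 0.9999999999999999999999999999
    }
}
```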