19

Is there a reason that a C# System.Decimal remembers the number of trailing zeros it was entered with? See the following example:

public void DoSomething()
{
    decimal dec1 = 0.5M;
    decimal dec2 = 0.50M;
    Console.WriteLine(dec1);            //Output: 0.5
    Console.WriteLine(dec2);            //Output: 0.50
    Console.WriteLine(dec1 == dec2);    //Output: True
}

The decimals are classed as equal, yet dec2 remembers that it was entered with an additional zero. What is the reason/purpose for this?

Robert Davey
  • 507
  • 1
  • 4
  • 13

3 Answers

16

It can be useful to represent a number including its accuracy - so 0.5m could be used to mean "anything between 0.45m and 0.55m" (with appropriate limits) and 0.50m could be used to mean "anything between 0.495m and 0.505m".
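
The scale that carries this information is stored inside the value itself and survives arithmetic, which you can inspect with decimal.GetBits. Here is a minimal sketch; the output comments are what I'd expect on .NET 2.0 or later:

using System;

class ScaleDemo
{
    static void Main()
    {
        decimal dec1 = 0.5m;
        decimal dec2 = 0.50m;

        // The scale (digits after the decimal point) lives in bits 16-23
        // of the fourth element returned by decimal.GetBits.
        Console.WriteLine((decimal.GetBits(dec1)[3] >> 16) & 0xFF);  // 1
        Console.WriteLine((decimal.GetBits(dec2)[3] >> 16) & 0xFF);  // 2

        // The scale propagates through arithmetic: addition keeps the
        // larger scale, multiplication adds the scales of the operands.
        Console.WriteLine(dec1 + dec2);   // 1.00
        Console.WriteLine(dec1 * dec2);   // 0.250
    }
}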

I suspect that most developers don't actually use this functionality, but I can see how it could be useful sometimes.

I believe this ability first arrived in .NET 1.1, btw - I think decimals in 1.0 were always effectively normalized.

Jon Skeet
  • 1,421,763
  • 867
  • 9,128
  • 9,194
  • 2
    For the reasons explained, 0.5 and 0.50 *do have* different information. Precision is **very** relevant in some fields, namely mathematics and chemistry. – ANeves Jun 08 '10 at 11:39
  • That would be a nice theory, except that the number of digits returned by division operations has nothing to do with the number of digits in the divisor or dividend (see the sketch after this comment thread). For most situations requiring multiplication or division, it would seem like `Decimal` should offer a method to multiply by a `Double`, with the result rounded to a specified level of precision; if the result can't accommodate that much precision, throw an exception. Otherwise, `Decimal` loses its semantic advantages versus scaling values up by 100 (or the number of subdivisions per currency unit) and using `Double`. – supercat May 24 '12 at 22:28
  • @supercat: Hmm... you're right about the division part, certainly. It could still be that the theory is the ability to be able to represent a number and its accuracy, but the practice is that it's not well implemented for arithmetic. (It could still be useful when propagating data from another source, of course.) – Jon Skeet May 24 '12 at 22:35
  • @JonSkeet: If I were designing a `Decimal` type, I wouldn't include operators to multiply or divide two `Decimals`; instead I would require functions with arguments specifying precision. I'd probably include `Decimal` times `Integer` and `Decimal` times `Long` operators, though. I'd also throw an exception any time an operator would truncate precision. For most financial applications, though, I would think the best type would in many cases be a concatenation of a `Double` with a scaling factor (specified as a number, rather than a power, to allow for non-decimal currencies). – supercat May 24 '12 at 22:45
  • @supercat: I don't see why a `Double` (which *inherently* uses a power of 2, not a number, as a scale - once that's removed it's not really a `Double` any more) would be a wise choice. Nor do I think that it's really important to support non-decimal currencies, to be honest. How many developers' lives would benefit from that support? – Jon Skeet May 24 '12 at 22:48
  • @JonSkeet: I don't know to what extent other countries still use non-decimal currencies, but any system based on `Decimal` will be imprecise on a non-decimal currency unless everything is scaled, and if things are scaled I don't see any real advantage to `Decimal` over `Double`. Things involving exponents, such as interest calculations, have to be performed as `Double`, and when multiplying a `Decimal` by an approximated number stored in a `Double`, the result can only be as precise as the `Double`. Too bad .net doesn't support `Long Double`. I wonder how long fast math hardware will? – supercat May 24 '12 at 23:18
  • @supercat: Why do interest calculations have to be performed as `Double`? You've made a lot of assertions here which I think need considerably more explanation. Fundamentally `Double` will always be tied to base 2, whereas everywhere I know of uses base 10, making `decimal` a more natural fit. If you'd said we should use a `long` + scaling, that would have made more sense to me. But I think we're really veering into the "inappropriate for comments" area. I suggest you blog about this with more details if you're so inclined. – Jon Skeet May 25 '12 at 05:33
  • @JonSkeet: If one doesn't do one's interest calculations in such a way as to avoid fractions until one reaches a final divide-to-nearest-penny step, is there any reason to expect power-of-ten fractions to be any more precise than power-of-two fractions? For example, if one computes monthly interest at an APR of 20%, the 1.66667% monthly rate can't really be expressed any better in base-ten than in base-two. If one adds up the number of pennies owed at the end of each day, multiplies by 20 (the APR), and divides by (days in month times 12), then everything but the final computation will be... – supercat May 25 '12 at 16:40
  • ...in whole-number units. If one is using whole-number units, then provided the number of units stays below 2^52, `double` is as precise as anything. If the calculations don't stay with discrete units, binary floating-point will be as accurate as decimal, for any given size mantissa. The `decimal` type has a larger mantissa, but unless one is dealing with national-debt-sized numbers, I don't think that's apt to matter. What matters more than anything is semantically defining at what steps things will be rounded to the nearest penny. For example, ... – supercat May 25 '12 at 16:43
  • ...if a person has two accounts, which accrue $1.037 and $2.736 of interest, is the total $3.77 or $3.78? Whether one uses a `decimal` number of dollars or a `double` number of pennies, the determination of when rounding occurs will have far greater numerical significance than the number of bits used in calculations. – supercat May 25 '12 at 16:49
  • @supercat: As I said before, I don't think this is really suitable for SO comments at this point. – Jon Skeet May 25 '12 at 17:04
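
To make the division point from the comments above concrete, here is a small sketch; the number of digits in the quotient comes from the division itself rather than from the scales of the operands (the output comments are what I'd expect):

using System;

class DivisionScaleDemo
{
    static void Main()
    {
        // Both divisions produce the same digits - the trailing-zero
        // information carried by the 1.000m and 3.000m literals is lost.
        Console.WriteLine(1m / 3m);          // 0.3333333333333333333333333333
        Console.WriteLine(1.000m / 3.000m);  // 0.3333333333333333333333333333
    }
}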
4

I think it was done to provide a better internal representation for numeric values retrieved from databases. Database engines have a long history of storing numbers in a decimal format (avoiding rounding errors) with an explicit specification for the number of digits in the value.

Compare the SQL Server decimal and numeric column types for example.
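
As a rough illustration of that mapping, System.Data.SqlTypes.SqlDecimal carries an explicit precision and scale in the same way a decimal(19,6) column does - the 19,6 figures below are just an example, and the output comments are what I'd expect:

using System;
using System.Data.SqlTypes;

class DatabaseScaleDemo
{
    static void Main()
    {
        // Force a value into the shape a decimal(19,6) column would give it.
        SqlDecimal column = SqlDecimal.ConvertToPrecScale(new SqlDecimal(0.5m), 19, 6);
        Console.WriteLine(column);        // 0.500000
        Console.WriteLine(column.Scale);  // 6

        // Converting back to System.Decimal keeps the scale, so the value
        // comes out with the six digits the column declared, not the one
        // digit the user originally typed.
        decimal asDecimal = column.Value;
        Console.WriteLine(asDecimal);     // 0.500000
    }
}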

Hans Passant
  • 922,412
  • 146
  • 1,693
  • 2,536
  • It's specifically mapping between .NET's decimal type and SQL server's decimal type that can create problems. If you use `decimal(19,6)` in the database, and `decimal` in C#, and then the user enters `0.5M`, when you store it and retrieve it from the database, you'll get back `0.500000`, which is more precision than the user entered. You either have to store precision in a separate field, or impose a set precision on the field for all values. – Scott Whitlock Aug 24 '11 at 11:48
2

Decimals represent fixed-precision decimal values. The literal value 0.50M has two decimal places of precision embedded in it, so the decimal variable created from it remembers that it is a two-decimal-place value. This behaviour is entirely by design.

The comparison is an exact numerical equality check on the values, so trailing zeroes do not affect the outcome.
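
A small sketch of that distinction - equality, Equals and the hash code all agree that the two values are the same number, while only the string representation differs (the output comments are what I'd expect):

using System;
using System.Collections.Generic;

class EqualityDemo
{
    static void Main()
    {
        decimal dec1 = 0.5m;
        decimal dec2 = 0.50m;

        Console.WriteLine(dec1 == dec2);                                  // True
        Console.WriteLine(dec1.Equals(dec2));                             // True
        Console.WriteLine(dec1.GetHashCode() == dec2.GetHashCode());      // True
        Console.WriteLine(new HashSet<decimal> { dec1 }.Contains(dec2));  // True

        // Only the formatting reflects the stored scale.
        Console.WriteLine(dec1);  // 0.5
        Console.WriteLine(dec2);  // 0.50
    }
}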

David M
  • 71,481
  • 13
  • 158
  • 186
  • 4
    "Fixed-precision" could be misleading here. It's a floating point type, like `float` and `double` - it's just that the point is a decimal point instead of a binary point (and the limits are different). – Jon Skeet Jun 08 '10 at 11:21