One of my coworkers just made me notice a "strange" behavior with decimal literals and the ToString() method.
Consider this sample code.
Console.WriteLine(10M);
Console.WriteLine(10M.ToString());
Console.WriteLine(10.00M);
Console.WriteLine(10.00M.ToString());
The output is:
10
10
10.00
10.00
This somewhat caught me off guard, as I was not expecting it. I thought every WriteLine call would just print out "10".
After thinking about it for a while, I concluded that the only way the framework could "remember" how many decimal places the value was defined with had to be through the internal mantissa/exponent representation of the number.
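A quick way to see that representation for yourself (my own sketch, not part of the sample above) is to pull the scale out of the flags element returned by decimal.GetBits; the scale is stored in bits 16-23 of the fourth int:
Console.WriteLine((decimal.GetBits(10M)[3] >> 16) & 0xFF);     // prints 0 (mantissa 10,   scale 0)
Console.WriteLine((decimal.GetBits(10.00M)[3] >> 16) & 0xFF);  // prints 2 (mantissa 1000, scale 2)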
And Jon Skeet seems to confirm this.
Taken from: http://csharpindepth.com/Articles/General/Decimal.aspx
Between .NET 1.0 and 1.1, the decimal type underwent a subtle change. Consider the following simple program:
(CUT)
When I first ran the above (or something similar) I expected it to output just 1 (which is what it would have been on .NET 1.0) - but in fact, the output was 1.00. The decimal type doesn't normalize itself - it remembers how many decimal digits it has (by maintaining the exponent where possible) and on formatting, zero may be counted as a significant decimal digit.
So this explains how this is possible and when it originated. My question now is "why".
Any clue why this behavior was specifically changed to work this way? Was it just to improve performance by removing the need for normalization? To me it seems a weird choice: it means that the same number (albeit with a different representation) can produce different results from ToString depending on how it was created, as the snippet below shows, but that doesn't mean there can't be a more valid argument for having it work this way. Can anyone provide some insight into this choice?
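Here is a quick check of my own (not taken from the article) that illustrates the point, assuming the usual .NET decimal arithmetic rules where addition keeps the larger scale and multiplication adds the scales:
decimal a = 10M;
decimal b = 10.00M;
Console.WriteLine(a == b);        // True  - the same numeric value
Console.WriteLine(a.ToString());  // 10
Console.WriteLine(b.ToString());  // 10.00
Console.WriteLine(10M + 0.00M);   // 10.00 - addition keeps the larger scale
Console.WriteLine(10.0M * 1.0M);  // 10.00 - multiplication adds the scales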
Why did the creators of the framework consider the current implementation better than the original one?