One of my coworkers just made me notice a "strange" behavior with decimal literals and the ToString() method.

Consider this sample code.

Console.WriteLine(10M);
Console.WriteLine(10M.ToString());

Console.WriteLine(10.00M);
Console.WriteLine(10.00M.ToString());

The output is:

10
10
10.00
10.00

This somehow caught me "off guard", as I was not expecting it. I thought every WriteLine call would just print out "10".

After some time spent thinking about this, I concluded that the only way the framework could "store" the information about how many decimal digits the value was defined with had to be linked to the number's internal mantissa/exponent representation.
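
In fact, the stored scale can be inspected directly with decimal.GetBits, which exposes the internal representation. The following is just a quick sketch I put together to check this (the variable names are mine):

using System;

int[] bitsOf10 = decimal.GetBits(10M);
int[] bitsOf10_00 = decimal.GetBits(10.00M);

// The scale (power-of-ten exponent) is stored in bits 16-23 of the fourth element.
Console.WriteLine((bitsOf10[3] >> 16) & 0xFF);    // prints 0 -> 10 is stored as 10 with scale 0
Console.WriteLine((bitsOf10_00[3] >> 16) & 0xFF); // prints 2 -> 10.00 is stored as 1000 with scale 2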

And Jon Skeet seems to confirm this.

Taken from: http://csharpindepth.com/Articles/General/Decimal.aspx

Between .NET 1.0 and 1.1, the decimal type underwent a subtle change. Consider the following simple program:

(CUT)

When I first ran the above (or something similar) I expected it to output just 1 (which is what it would have been on .NET 1.0) - but in fact, the output was 1.00. The decimal type doesn't normalize itself - it remembers how many decimal digits it has (by maintaining the exponent where possible) and on formatting, zero may be counted as a significant decimal digit.

So this explains how this is possible and when it originated. My question now is "why".

Any clue why this behavior was specifically changed to work this way? Was it just to improve performance by removing the need for normalization? To me it seems a weird choice: it means that the same number (albeit with different representations) can produce different results when passed to the ToString method, depending on how it was created. That doesn't mean there can't be a more valid argument for having it work this way, though. Can anyone provide me some insight into this choice?
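
To make the point concrete, here is a minimal sketch of what I mean (the variable names are just for illustration): the two values compare as equal, yet they format differently depending purely on how they were written.

using System;

decimal a = 10M;     // literal with scale 0
decimal b = 10.00M;  // literal with scale 2

Console.WriteLine(a == b);        // True - numerically the same value
Console.WriteLine(a.ToString());  // 10
Console.WriteLine(b.ToString());  // 10.00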

Why did the creators of the framework consider the current implementation better than the original one?

SPArcheon
  • presumably (totally speculating here) this is to preserve the stated precision, since downstream code may make decisions (such as here with display, but it could be in other ways) based on this? If I say a value is `12.00`, that implicitly suggests a higher level of precision than if I say it is `12`, even if *numerically* they are identical – Marc Gravell May 09 '18 at 11:37
  • `decimal` is a special type, it has no implicit conversion with other floating-point types. It's just wrong to expect it to behave similarly to anything else: to be normalized, to remove non-significant fractions in `ToString()`, etc. Don't make such assumptions and if you did - simply learn what is wrong. – Sinatr May 09 '18 at 11:52
  • @Sinatr If you read the link I posted - or just the blockquote, you will notice that they changed the way the decimal type worked from .NET 1.0 to .NET 1.1. I don't think that was done randomly just to annoy people, so they must have had a valid reason for making that choice. It is fine if you don't know it or if you just mean that I shouldn't be overthinking this, but the "simply learn what is wrong" line does feel a little rude/harsh. – SPArcheon May 09 '18 at 11:59
  • Also, I could argue that **I have already learned that my assumptions were wrong**. That is not the question. The question is *why they changed the implementation from .NET 1.0 to .NET 1.1*. Sorry if that wasn't clear. – SPArcheon May 09 '18 at 12:00
  • @MarcGravell that could be a pretty realistic reason. – SPArcheon May 09 '18 at 12:32
  • thanks for finding the dupe, hdv. That said, the more I think about it, the more that seems a weird answer. Either they really meant that but the related arithmetic is somehow oddly managed (0.00M * 0.00M => 0.0000M ???!?) or that can't be the root cause. (See the quick check I sketched after these comments.) – SPArcheon May 09 '18 at 14:12
  • I don't see that this is a duplicate of the specified question. SPArchaeologist is asking why the change was made to the standard, not what the change does. And I don't see that answered anywhere in the linked "duplicate". – BernieP May 09 '18 at 16:42
  • @user1074069 I can live with the dupe closure (even if I still think that the gold one-man hammer is a nonsensical feature). That said, I still stand by this: IF the reason for the change was what Skeet suggests in the other post - managing significant figures - the current implementation makes no sense at all, to the point that I sincerely doubt that was the intended use case. Therefore, I will keep searching for an alternative explanation or an **official** confirmation of Skeet's theory. Somewhere else, I guess. – SPArcheon May 10 '18 at 07:35
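
Regarding the arithmetic point raised in the comments: a quick check I sketched myself (values chosen arbitrarily). If I read the C# spec correctly, addition keeps the larger of the two operand scales and multiplication uses the sum of the scales, before any rounding.

using System;

// How the stored scale propagates through decimal arithmetic.
Console.WriteLine(1.0M + 1.00M);  // 2.00  (larger scale: 2)
Console.WriteLine(1.0M * 1.0M);   // 1.00  (sum of scales: 1 + 1 = 2)
Console.WriteLine(1.000M * 1M);   // 1.000 (sum of scales: 3 + 0 = 3)
Console.WriteLine(0.00M * 0.00M); // the zero case questioned above can be checked the same way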

0 Answers