
Maybe this is a very basic question, but I am really interested to know what actually happens.

For example, if we do the following in C#:

object obj = "330.1500249000119";
var val = Convert.ToDouble(obj);

The val becomes: 330.15002490001189

The question is: why is the last 9 replaced by 89? Can we stop this from happening? And is this precision dependent on the current culture?

Austin Salonen
Baig
  • imho the first var is a string/decimal: very precise, but slower calculation. The second var is a double: very fast calculation, but not all values can be encoded; 330.15002490001189 is certainly the nearest encodable double value to 330.1500249000119. – tschmit007 Sep 13 '12 at 14:59

4 Answers

4

This has nothing to do with culture. Some numbers cannot be exactly represented by a base-2 number, just as in base 10 1/3 can't be exactly represented by .3333333.

Note that in your specific case you are putting in more digits than the data type allows: a Double offers 15-16 significant digits (depending on range), and your number goes beyond that.

Instead of a Double, you can use a Decimal in this case:

object obj = "330.1500249000119";
var val = Convert.ToDecimal(obj);
Philip Rieck
    3/10th is exactly .3 in decimal. Perhaps you meant 1/3? – Chris Dunaway Sep 13 '12 at 15:02
  • @PhilipRieck - alright, now val contains exactly the same digits as the original obj. But if we cast val to double, it again does the same thing, i.e. replaces the last 9 with 89. The problem is that I need the exact same digits in a double; is that possible? I marked it as the answer since it answers my original question :) – Baig Sep 14 '12 at 06:19
  • @Baig - No, you cannot represent that number in a `Double` (in .net), ever. – Philip Rieck Sep 14 '12 at 17:50
2

A decimal would retain the precision.

object obj = "330.1500249000119";
var val = Convert.ToDecimal(obj);

The "issue" you are having is floating point representation.

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

Austin Salonen
0

No, you can't stop it from happening. You are parsing a value that has more digits than the data type can represent.

The precision is not dependent on the culture. A double always has the same precision.

So, if you don't want it to happen, then simply don't do it. If you don't want the effects of the limited precision of floating-point numbers, don't use floating-point numbers. If you use a fixed-point number (Decimal) instead, it can represent the value exactly.

Guffa
  • Why the downvote? If you don't explain what you think is wrong, it can't improve the answer. – Guffa Sep 13 '12 at 14:58
  • I'm not the downvoter, but I think you response is inexact. It is not a problem of digit, but a problem of encoding: the exact value can't be coded as a double, so the nearest value is coded. – tschmit007 Sep 13 '12 at 15:01
  • @Guffa but the resulting double has *more* digits than the input string. – Graham Clark Sep 13 '12 at 15:04
  • @GrahamClark Remember that it's storing base-two digits, not base-ten. A fixed number of digits in base two won't always come out to the same number of digits in base ten (and vice versa). – Servy Sep 13 '12 at 15:12
  • @GrahamClark: The string is parsed and the value is rounded to the number of bits that will fit in the double. When it is displayed, those bits are converted to decimal format and rounded to a specific number of digits. Just because that happens to be more digits than the original string doesn't mean that it represents more information. To show the exact value stored in the double in decimal form you would need a lot more digits, but the precision is still only about 15 decimal digits. – Guffa Sep 13 '12 at 15:20
-1

A CPU represents a double in 8 bytes, divided into 1 sign bit, 11 bits for the exponent ("the range") and 52 for the mantissa ("the precision"). You have limited range and precision.

The C constant DBL_DIG in <float.h> tells you that such a double can represent only 15 digits precisely, not more. But this number is entirely dependent on your C library and CPU.

330.1500249000119 contains 16 significant digits, so with 15-digit precision you would expect it to round to 330.150024900012. The stored value, 330.15002490001189, is only one off in the last place, which is good: you get ...1189 where you would normally expect ...12.

For the exact mathematics behind this, read David Goldberg, "What Every Computer Scientist Should Know About Floating-point Arithmetic," ACM Computing Surveys 23, 1 (1991-03), 5-48. This is worth reading if you are interested in the details, but it does require a background in computer science. http://www.validlab.com/goldberg/paper.pdf

You can stop this from happening by using wider floating-point types, like long double or __float128, or by using a different CPU, like a SPARC64 or S/390, which implement __float128 (about 33 decimal digits) natively in hardware as long double.

Yes, using an UltraSparc/Niagara or an IBM S390 is culture.

The usual answer is: use long double, dude. That gives you two more bytes on Intel (18 digits), several more on PowerPC (31 digits), and 33 on SPARC64/S390.

rurban