
Any smart way to convert a float like this:

float f = 711989.98f;

into a decimal (or double) without losing precision?

I've tried:

decimal d = (decimal)f;
decimal d1 = (decimal)(Math.Round(f,2));
decimal d2 = Convert.ToDecimal(f);
Mac
Adrian4B
  • More details: I'm interfacing with an old web service that sends this huge object that has some fields as float. When I do the conversion to decimal, kaboom... no more pennies! – Adrian4B Apr 07 '10 at 18:32
  • If this is coming from a web service on the wire it is probably XML, which means no floats or decimals - just strings. Look at where these strings are converted to "internal" format. – mfeingold Apr 07 '10 at 18:52

5 Answers


It's too late; the 8th digit was lost in the compiler. The float type can store only about 7 significant decimal digits, and 711989.98 needs 8. You'll have to rewrite the code; assigning the literal to double or decimal instead will of course solve the problem.
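The loss can be demonstrated outside C# as well. Here is a small sketch in Python, where the struct round-trip emulates an IEEE-754 binary32 value (the same representation as .NET's float/System.Single); this is an illustration, not the original code:

```python
from decimal import Decimal
import struct

def to_float32(x):
    # Round-trip through an IEEE-754 binary32 to emulate a 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

f = to_float32(711989.98)
print(Decimal(f))  # 711990 -- the 8th significant digit is already gone
```

The nearest binary32 value to 711989.98 is exactly 711990, so no later conversion to double or decimal can bring the cents back.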

Hans Passant
  • thanks, I've changed the type of the field on the mapped object from float to decimal and that solved the conversion issue that was happening during deserialization. – Adrian4B Apr 08 '10 at 01:32

This may look like a compiler bug, because it seems like a valid float should convert directly to a decimal, but it won't without losing resolution. Converting 125.609375 from float to decimal will lose resolution. However, converting it from float to double and then from double to decimal keeps the resolution.

    float float_val = 125.609375f;

    decimal bad_decimal_val = (decimal)float_val;   //125.6094

    double double_val = (double)float_val;
    decimal good_decimal_val = (decimal)double_val;
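The float value itself holds 125.609375 exactly: it is a dyadic fraction (125 + 39/64), which fits easily in a binary32 mantissa. It is the direct float-to-decimal conversion in .NET that keeps only 7 significant digits. A quick check sketched in Python (the struct round-trip emulates a 32-bit float; an illustration, not C# semantics):

```python
from decimal import Decimal
import struct

# Emulate a 32-bit IEEE-754 float by round-tripping through binary32.
f32 = struct.unpack('f', struct.pack('f', 125.609375))[0]

# 125.609375 is a dyadic fraction (125 + 39/64), so binary32 stores it exactly.
print(Decimal(f32))  # 125.609375
```

Since the binary32 value is exact, going through double preserves it, and decimal built from the double keeps all the digits.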
John
    Converting from float to double doesn't magically add in lost decimal places. Just use floating-point literals without the 'f' at the end (those are doubles), or use decimal literals like 125.609375M. – Christoph Rackwitz Sep 01 '19 at 03:38

Have you tried?

decimal.TryParse()

http://forums.asp.net/t/1161880.aspx

There are no implicit conversions between float/double and decimal. Implicit numeric conversions are always guaranteed to be without loss of precision or magnitude and will not cause an exception.

hunter
    Implicit conversions are not guaranteed to be without loss of precision. Converting `9007199791611905` to `double` will yield `9007199791611904`. Converting directly to `float` will yield `9007200328482816`. Converting to `double` and then to `float` will yield `9007199254740992` (which is off by more than if the number had been converted directly to `float`). Further, while compilers allow implicit conversion from `float` to `double`, such conversions are far more likely to be erroneous than would be conversions from `double` to `float` which aren't allowed but should be. – supercat Sep 19 '12 at 21:59
    @supercat apparently [you can losslessly convert `float` to `double`](http://stackoverflow.com/q/40550861/24874). However converting `double` to `float` is likely to introduce errors, depending upon the specific `double` value in question. – Drew Noakes Nov 18 '16 at 00:16
  • @DrewNoakes: If one has used an iterative computation to compute an object's position using type `double`, and then wishes to draw it using graphics routines that accept `float`, conversion to `float` would lop off precision that would generally be useless even if retained; lopping off the precision would almost certainly be consistent with programmer intent. By contrast, if some code e.g. divides an `int` or `long` by a `float` and stores the result in a `double`, then unless the programmer explicitly cast the result of the division to `float` I would not presume that the programmer... – supercat Nov 18 '16 at 00:51
  • ...actually *intended* that the division should only be performed with `float` precision. Given `int x=16777217; float y=1.0f;` do you think it more likely that a programmer who writes `double z=x/y;` would be expecting `z` to be 16777216.0 or 16777217.0? For that matter, even given `float one=1.0f,ten=10.0f; double d=one/ten;`, do you think a programmer would more likely be wanting `d` to receive the value 0.1f or 0.1 ? – supercat Nov 18 '16 at 00:55
  • @supercat, I see your point. I came to this question looking for information about loss of precision when converting from float/double to decimal (I am considering allowing this conversion in a deserialiser so that schemata can evolve this way), so was focusing primarily on loss of precision during a single conversion. For that my comment holds. Your perspective is interesting and valid, though a pitfall I'm aware of and avoid through careful selection of floating point type. I'm less familiar with decimal, especially wrt conversions from floats. – Drew Noakes Nov 18 '16 at 09:03
  • @DrewNoakes: For accidental precision loss to go unnoticed, there would generally need to be a conversion from `float` to `double`. Otherwise, the fact that one has a `float` would be self-evident. I suspect the reason for Java treating `float` as the more "specific type" was to ensure that when two-argument overloads exist for (float,float) and (double,double), a compiler given (float,double) or (double,float) will choose the latter. Unfortunately, that causes illogical single-argument overload behavior if an "int" or "long" is passed to a function which can take both "float" and "double". – supercat Nov 18 '16 at 15:37
  • @DrewNoakes: If some Java code uses, e.g. `someLong = Math.round(someLong * 1.001);` and it turns out the scaling factor should be 1.0, changing the code to `someLong = Math.round(someLong);` will cause Java to convert the value to `float`, and then to `int`, before finally storing the `long` value. – supercat Nov 18 '16 at 15:40
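The first numbers in this thread can be checked directly. A sketch in Python, where the built-in float is a 64-bit double and the struct round-trip emulates a 32-bit float (an illustration only):

```python
import struct

n = 9007199791611905          # needs 54 significant bits; a double has 53
as_double = float(n)          # long -> double drops precision
print(int(as_double))         # 9007199791611904

# double -> 32-bit float (emulated via an IEEE-754 binary32 round-trip)
as_float32 = struct.unpack('f', struct.pack('f', as_double))[0]
print(int(as_float32))        # 9007199254740992
```

So even an "implicit" widening-looking conversion from a 64-bit integer to a floating-point type can silently lose precision.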

You lost precision the moment you wrote 711989.98f.

711989.98 is a decimal literal. With the f at the end you are asking the compiler to convert it to float, and that conversion cannot be done without losing precision.

What you probably want is decimal d = 711989.98m;. This will not lose precision.
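The difference between the two literals can be sketched with Python's decimal module (the struct round-trip emulates a 32-bit float; an illustration of the idea, not C# itself):

```python
from decimal import Decimal
import struct

# The f-suffixed literal: rounded to binary32 before anything else happens.
via_float = struct.unpack('f', struct.pack('f', 711989.98))[0]
print(Decimal(via_float))     # 711990 -- the pennies never made it in

# The m-suffixed literal: a decimal type stores 711989.98 exactly.
print(Decimal('711989.98'))   # 711989.98
```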

Drew Noakes
mfeingold
    Your example just uses a double, right? It could still lose precision for a different number. Just put an 'm' on the end and make the literal a decimal to start with. – Carl G Jan 10 '13 at 02:51

Try this:

    float y = 20;
    decimal x = Convert.ToDecimal(y / 100);
    Console.WriteLine("test: " + x);
Saeed Zhiany
Mirza Setiyono
  • Please read [How do I write a good answer?](https://stackoverflow.com/help/how-to-answer). While this code block may answer the OP's question, this answer would be much more useful if you explain how this code is different from the code in the question, what you've changed, why you've changed it and why that solves the problem without introducing others. – Saeed Zhiany Jul 26 '22 at 03:13