I'm reading a JSON data file that might give me a float value of, say, 1.1. When I make a Decimal
of that value, I get a crazy long number, because of the imprecision of binary representations of floats.
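Concretely, my read path looks roughly like this (a minimal sketch; the key name and the JSON snippet are made up, but the behavior is the same):

import json
from decimal import Decimal

data = json.loads('{"price": 1.1}')   # json hands me back a plain Python float
price = Decimal(data['price'])        # -> the long Decimal shown below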
I understand binary representation and I'm okay with the idea that numbers writable in base-ten floating point can't always be represented in base-two.
But it seems that if I string-ify the float first and make a Decimal from that string, I get a Decimal without the tiny binary-imprecision delta.
Here's what I mean:
Python 2.7.6 (default, Jan 16 2014, 10:55:32)
>>> from decimal import Decimal
>>> f = 1.1
>>> d = Decimal(f)
>>> f
1.1
>>> d
Decimal('1.100000000000000088817841970012523233890533447265625')
>>> d = Decimal(str(f))
>>> d
Decimal('1.1')
String-ifying the float before making it into a Decimal gives me a Decimal that matches the original base-ten number as typed (or as it appears in the JSON file).
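And for what it's worth, if I explicitly ask for more digits, the tail shows up, so the float itself clearly isn't exactly 1.1:

>>> str(f)
'1.1'
>>> '%.20f' % f
'1.10000000000000008882'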
So here are my questions: when I string-ify the float, why don't I see the long tail of digits? Is Python automagically keeping track of the original string it parsed from the JSON, or something like that? And why doesn't the Decimal constructor use that trick too?