Consider the following Python 3 snippet (Python 3.7.7 on macOS Catalina):
>>> from decimal import Decimal as d
>>> zero = d('0')
>>> one = d('1')
>>> for q in range(10):
...     one.quantize(d('10') ** -q)
...
Decimal('1')
Decimal('1.0')
Decimal('1.00')
Decimal('1.000')
Decimal('1.0000')
Decimal('1.00000')
Decimal('1.000000')
Decimal('1.0000000')
Decimal('1.00000000')
Decimal('1.000000000')
>>> for q in range(10):
...     zero.quantize(d('10') ** -q)
...
Decimal('0')
Decimal('0.0')
Decimal('0.00')
Decimal('0.000')
Decimal('0.0000')
Decimal('0.00000')
Decimal('0.000000')
Decimal('0E-7')
Decimal('0E-8')
Decimal('0E-9')
Why does quantize switch to E notation at this point when the value is zero? Why is it inconsistent with other numbers? And how can I control it?
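As far as I can tell, the quantized value itself is fine, and only the string form changes:

>>> zero.quantize(d('10') ** -7) == d('0.0000000')
True
>>> str(zero.quantize(d('10') ** -7))
'0E-7'

so this looks like a question about Decimal's string representation rather than about the arithmetic.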
Note: I get exactly the same inconsistency if I use the built-in round function instead of quantize, which leads me to guess that round calls quantize when it gets a Decimal.
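For example, in the same session:

>>> round(zero, 7)
Decimal('0E-7')
>>> round(one, 7)
Decimal('1.0000000')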
Since I want strings with trailing zeros, the best workaround I can think of is to write my own _round() function:
import decimal

def _round(s, n):
    if decimal.Decimal(s).is_zero():
        # special-case zero, since quantize switches to E notation here
        return '0.' + '0' * n
    # str() so both branches return a string with trailing zeros
    return str(decimal.Decimal(s).quantize(decimal.Decimal(10) ** -n))
but that seems a bit lame. And anyway, I'd like to understand why Decimal.quantize behaves like this.
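For what it's worth, a slightly less lame sketch of the same workaround, assuming a string result is what's wanted: quantize as before, then render with the 'f' presentation type, which as far as I can tell forces fixed-point notation even for the E-notation zeros:

import decimal

def _round(s, n):
    q = decimal.Decimal(s).quantize(decimal.Decimal(10) ** -n)
    # format(..., 'f') renders the exponent as fixed-point digits,
    # so Decimal('0E-7') comes out as '0.0000000'
    return format(q, 'f')

>>> _round('0', 7)
'0.0000000'
>>> _round('1', 3)
'1.000'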