IIRC, the current approach for representing floating point numbers is to show them as 1/2 + 1/4 + 1/8 + ... However, what if we changed our approach so that any floating point number is actually a normal integer, padded back by a series of 0's? Each number would have to be larger, similar to the 64-bit double.
For the 64-bit double, we have 11 bits reserved for the exponent and 53 bits for the actual number. What we could do instead is have one number represent the amount of "zeros" the integer is padded back by. In this example we could use 11 bits for the padding, which means we have (2 ^ 11) - 1 digits of accuracy for a 53 bit number.
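As a rough sketch of the idea (the class and field names are just illustrative, and it uses plain Python integers rather than a fixed-width field):

from dataclasses import dataclass

@dataclass
class PaddedNumber:
    sign: int     # 0 = positive, 1 = negative
    padding: int  # how many "zeros" the integer is padded back by (decimal places)
    number: int   # the actual integer, e.g. 4 for 0.4

    def to_string(self) -> str:
        # Re-insert the decimal point `padding` digits from the right
        digits = str(self.number).rjust(self.padding + 1, "0")
        cut = len(digits) - self.padding
        text = digits[:cut] + ("." + digits[cut:] if self.padding else "")
        return ("-" if self.sign else "") + text

print(PaddedNumber(sign=0, padding=1, number=4).to_string())      # 0.4
print(PaddedNumber(sign=0, padding=3, number=1234).to_string())   # 1.234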
Suppose I want to display 0.4. Currently in Python we know 0.4 has floating point issues, for example:
>>> import decimal
>>> decimal.Decimal(0.4)
Decimal('0.40000000000000002220446049250313080847263336181640625')
However, with my encoding this will not happen. Why? Because I can represent the number 4 with traditional binary 100, and the amount of zeros it is padded back by as the binary number 1 (01). This means I can represent the number 0.4 without any floating point issues by the number:
0 00000000001 00000000000000000000000000000000000000000000000000100
The first bit is reserved for the sign, the next 11 for the zero padding, and 53 for the number. It requires more bits, but I can now represent a number of up to 2 ^ 11 digits in length with accuracy. Not only that, the Wikipedia page suggests the C++ double is only 16 digits accurate, which means mine is 2048 - 16 digits more accurate!
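For concreteness, here is a small sketch of packing and unpacking that 1 + 11 + 53 bit layout (the function names are just for illustration; decoding goes through decimal.Decimal only to show the value comes back out exactly):

from decimal import Decimal

def pack(sign: int, padding: int, number: int) -> str:
    # 1 sign bit, 11 bits for the zero-padding count, 53 bits for the integer itself
    assert padding < 2 ** 11 and number < 2 ** 53
    return f"{sign:01b} {padding:011b} {number:053b}"

def unpack(bits: str) -> Decimal:
    sign_bits, padding_bits, number_bits = bits.split()
    padding = int(padding_bits, 2)
    number = int(number_bits, 2)
    # value = +/- number / 10 ** padding, computed exactly in decimal
    return Decimal(-number if sign_bits == "1" else number).scaleb(-padding)

bits = pack(0, 1, 4)   # 0.4 stored as the integer 4, padded back by 1 place
print(bits)            # 0 00000000001 00000000000000000000000000000000000000000000000000100
print(unpack(bits))    # 0.4, exactly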