[edit 2021-09-26]
Sorry! I have to admit I asked this badly; an explanation follows. I don't think I should post it as an 'answer', so here it is as an edit:
I'm still curious how a 'double' value of 0.1 converts to a long double!
But the focus of the question was that a spreadsheet program which calculates with 'doubles' stores values in such a way that a program which calculates with better precision reads them in incorrectly. I have now - only now, me blind :-( - understood that it does NOT! store a 'double' binary value, but a string!
And here gnumeric makes one of the very few mistakes that program makes: it goes with fixed string lengths and stores '0.1' as '0.10000000000000001', rounded up from '0.10000000000000000555xx'. LO Calc and Excel store - I think better - the shortest string that survives a roundtrip 'bin -> dec -> bin' unharmed, namely '0.1'. And that also works as interchange with programs of better precision.
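To illustrate the difference, a minimal sketch (assumes IEEE 754 doubles and a correctly rounding printf, as in glibc):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double d = 0.1;
    /* a fixed 17 significant digits, as gnumeric apparently stores it: */
    printf("%.17g\n", d);                     /* 0.10000000000000001 */
    /* the short string "0.1" already survives the dec -> bin roundtrip: */
    printf("%d\n", strtod("0.1", NULL) == d); /* 1 (true) */
    return 0;
}
```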
So this question is cleared up; the problem is not 'solved', but I can work around it.
Still curious: would, and if so by which steps, a double
0 01111111011 (1).1001100110011001100110011001100110011001100110011010
be converted to an (80-bit) long double:
0 011111111111011 1.10011001100110011001100110011001100110011001100110**10** **00000000000**
or whether, and if so by which (other) steps, it could instead be made into:
0 011111111111011 1.10011001100110011001100110011001100110011001100110**01** **10011001101**
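For what it's worth, a small sketch to make both outcomes visible (my own illustration, assuming x86/x86-64 with a little-endian 80-bit long double):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the 10 value bytes of an x86 80-bit long double, high byte first
   (on x86-64 sizeof(long double) is 16, but only the low 10 bytes hold data). */
static void dump_bits(long double ld) {
    unsigned char b[sizeof ld];
    memcpy(b, &ld, sizeof ld);
    for (int i = 9; i >= 0; i--)
        printf("%02x", b[i]);
    putchar('\n');
}

int main(void) {
    double d = 0.1;
    dump_bits((long double)d);        /* 3ffbccccccccccccd000: mantissa zero-padded */
    dump_bits(strtold("0.1", NULL));  /* 3ffbcccccccccccccccd: re-rounded at 64 bits */
    return 0;
}
```

The plain cast just widens the stored bits (a single x87 FLD instruction does exactly this), while going through the shortest decimal string re-rounds at 64 mantissa bits and yields the **01** **10011001101** ending.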
[/edit]
original question:
Bear with me, this question must be old, but I haven't yet found an answer ... am I blind?
The question in short:
Is there any CPU or FPU switch, command, macro, library, trick or optimized standard code snippet which does the following: convert a double to a long double value (having better precision!) while keeping the corresponding 'decimal value', rather than the 'exact but deviating' 'bit value'?
[edit 2021-09-23]
I found something which might do the job; can anyone suggest how to 'install' it, and which of its functions to 'call', to use it from other programs (Debian Linux system)?
Ulf (ulfjack) Adams announced a solution for such problems (for printouts?) in his 'ryu' project 'https://github.com/ulfjack/ryu'. He commented:
'## Ryu

Ryu generates the shortest decimal representation of a floating point number that maintains round-trip safety. That is, a correct parser can recover the exact original number. For example, consider the binary 32-bit floating point number 00111110100110011001100110011010. The stored value is exactly 0.300000011920928955078125. However, this floating point number is also the closest number to the decimal number 0.3, so that is what Ryu outputs.'
(IMHO it should read 'the closest IEEE float number to')
He also announced the algorithm as 'being fast', but maybe 'fast' compared to other algorithms computing the 'shortest' representation is not the same as 'fast' compared to computing a fixed-length string?
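A minimal sketch of how one might use it (assumptions: the library has been built and is on the include/link path - the repo builds with Bazel, though compiling ryu/d2s.c directly into a project should also work - and d2s() is the double-to-shortest-string function declared in ryu/ryu.h, returning a malloc'd string):

```c
#include <stdio.h>
#include <stdlib.h>
#include "ryu/ryu.h"

int main(void) {
    double d = 0.1;
    char *s = d2s(d);                  /* shortest round-trip string, e.g. "1E-1" */
    long double ld = strtold(s, NULL); /* re-parse at long double precision */
    printf("%s -> %.27Lf\n", s, ld);   /* 1E-1 -> 0.100000000000000000001355253 */
    free(s);
    return 0;
}
```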
[/edit]
Let's say I have a spreadsheet which has stored values in double format, among them values which deviate from their decimal counterparts because they are 'not exactly representable in binary'.
E.g. '0.1': I might have keyed it in as '0.1' or entered the formula '=1/10'; the stored 'value' as a 'double' will be the same:

0 01111111011 (1).1001100110011001100110011001100110011001100110011010

which is approximately

0.10000000000000000555112~

in decimal.
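This is easy to reproduce (again assuming a correctly rounding printf):

```c
#include <stdio.h>

int main(void) {
    double d = 0.1;       /* same bits whether keyed in or computed as 1.0 / 10.0 */
    printf("%.23f\n", d); /* 0.10000000000000000555112 */
    return 0;
}
```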
Now I have tuned my spreadsheet program a little; it can now work with 'long doubles'. (I really! did that, it's gnumeric; don't try such a thing with MS Excel or LibreOffice Calc!) That's the 80-bit format on my system, as on most Intel hardware (1 sign bit, 15 exponent bits, 64 mantissa bits, with the leading '1' from normalization stored explicitly in the bits! (not 'implicit' and 'left of' the stored bits as in 'doubles')).
In a new sheet I can happily key in either '0.1' or '=1/10' and get (estimated, couldn't test):

0 011111111111011 1.100110011001100110011001100110011001100110011001100110011001101

being

0.100000000000000000001355253~

in decimal, fine :-)
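Also reproducible outside the spreadsheet (assumes x86 80-bit long double):

```c
#include <stdio.h>

int main(void) {
    long double ld = 0.1L;  /* the literal is parsed straight to 80-bit precision */
    printf("%.27Lf\n", ld); /* 0.100000000000000000001355253 */
    return 0;
}
```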
If I open my 'old' file, the 'formula'! will be re-interpreted and show the more precise value, but the 'value'!, the '0.1'!, is not! re-interpreted. Instead - IMHO - the bits from the double value are put into the long double structure, building a mantissa like

1.1001100110011001100110011001100110011001100110011010**00000000000**

fully preserving the rounding error from the decimal -> binary(double) conversion and producing as decimal representation again:

0.10000000000000000555112~
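That matches what a plain C promotion does (a sketch; that the spreadsheet's loader works this way is my assumption, not inspected code):

```c
#include <stdio.h>

int main(void) {
    double d = 0.1;
    long double promoted = (long double)d; /* exact widening: mantissa padded with zeros */
    printf("%.23Lf\n", promoted);          /* 0.10000000000000000555112: the double's error survives */
    return 0;
}
```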
[edit 2021-09-23]
I haven't fully dug into this yet ... it looks as if in some cases the store and read work with strings, sometimes 'longer strings' getting the 00555112~ back, while in other situations a rounded string 0.10000000000000001 is stored, from which the 'long' version generates 0.100000000000000010003120 when loading, which is even worse.
[/edit]
As said in the subject, it's an ambiguous situation: one can either exactly preserve the value given by the double bits, or! interpret it as a 'rounded placeholder' and try to get its 'originally intended decimal value' back, but not both together. I am playing with 'keep decimal value', and can! do such a thing, e.g. by specific rounding, but that's complex and costly in terms of computation effort.
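For reference, a minimal sketch of the 'shortest decimal string' round-trip (my own illustration; the helper name is made up, it only handles finite values, and the string detour is exactly the 'complex and costly' part):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: re-interpret a double at long double precision by
   finding the shortest decimal string that parses back to the same double,
   then parsing that string as a long double. Finite values only. */
static long double promote_via_decimal(double d) {
    char buf[32];
    for (int prec = 1; prec <= 17; prec++) {  /* 17 digits always round-trip a double */
        snprintf(buf, sizeof buf, "%.*g", prec, d);
        if (strtod(buf, NULL) == d)           /* shortest representation found */
            break;
    }
    return strtold(buf, NULL);
}

int main(void) {
    printf("%.27Lf\n", promote_via_decimal(0.1)); /* 0.100000000000000000001355253 */
    return 0;
}
```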
As I have, in recent weeks, come to see the IEEE, CPU and library developers as highly skilled people who have wisely foreseen and implemented solutions for similar problems:
Is there any 'standard' method, CPU, FPU or compiler switch, or optimized code snippet that does this: convert a double to a long double value (having better precision!) while keeping the corresponding decimal value instead of the deviating 'bit value'?
If 'no': has anyone delved deeper into this issue and has good tips for me?
best regards, b.