
I have code where one part of the calculations is done using NumPy functions and longdoubles, and the other using SymPy symbolic differentiation and numerical evaluation; the two are then joined together (as a SymPy float). SymPy evaluation can be done with arbitrary precision, but what precision would be just good enough, i.e. would not "pollute" the longdouble results? As far as I understand, NumPy's longdouble is in fact only 80 bits long, despite being called float128 on my system. Wikipedia says this about 80-bit precision:

Bounds on conversion between decimal and binary for the 80-bit format can be given as follows: if a decimal string with at most 18 significant digits is correctly rounded to an 80-bit IEEE 754 binary floating point value (as on input) then converted back to the same number of significant decimal digits (as for output), then the final string will exactly match the original; while, conversely, if an 80-bit IEEE 754 binary floating point value is correctly converted and (nearest) rounded to a decimal string with at least 21 significant decimal digits then converted back to binary format it will exactly match the original.

Additionally, I dug this up in an interactive prompt:

>>> numpy.finfo(numpy.double).precision
15
>>> numpy.dtype(numpy.double).itemsize
8
>>> numpy.finfo(numpy.longdouble).precision
18
>>> numpy.dtype(numpy.longdouble).itemsize
16
>>> 

So, Wikipedia says that the precision depends on the direction of conversion (either 18 or 21 digits), while NumPy just says it's 18 digits. Interestingly, the precision of the default double is equal to SymPy's default numerical evaluation precision (15 vs. 15).
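For example, SymPy's default evaluation precision is easy to check interactively (a quick illustration; the digits below are just what a stock SymPy prints for pi):

>>> import sympy
>>> sympy.N(sympy.pi)        # default evalf precision: 15 significant digits
3.14159265358979
>>> sympy.N(sympy.pi, 21)    # explicitly requesting 21 digits
3.14159265358979323846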

Assuming I'm converting a longdouble result to a SymPy float at one point (and then working in SymPy), what SymPy precision should I set? 18 digits? 21? A little more?
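The conversion I have in mind looks roughly like this (an illustrative sketch, not the actual code; how many digits repr() preserves for a longdouble depends on the NumPy version, so the string step is the part I'm unsure about):

>>> import numpy, sympy
>>> x = numpy.longdouble(1) + numpy.longdouble(2)**-40   # some longdouble result
>>> s = repr(x)              # decimal string; digit count depends on the NumPy version
>>> y = sympy.Float(s, 21)   # SymPy float carrying 21 significant decimal digits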

I'm using Python 2.7 on Linux 64-bit (Sandy Bridge), NumPy 1.6.2, SymPy 0.7.1.rc1. The actual code is here (the nsk class, around line 130).

arkhebuz

1 Answer


IIRC, the precision is actually platform dependent. Anyway, to the question: I think you are looking at the wrong details.

>>> print numpy.finfo(numpy.longdouble)
Machine parameters for float128
---------------------------------------------------------------------
precision= 18   resolution= 1e-18
machep=   -63   eps=        1.08420217249e-19
negep =   -64   epsneg=     5.42101086243e-20
minexp=-16382   tiny=       3.36210314311e-4932
maxexp= 16384   max=        1.18973149536e+4932
nexp  =    15   min=        -max
---------------------------------------------------------------------

eps is the smallest positive number satisfying 1.0 + eps != 1.0, so if your answer is of the order of 1, you have 18 significant decimal places. Due to the nature of floating-point arithmetic, the absolute spacing changes with the magnitude of the number itself, but you will always get 18 significant figures (however many decimal places that corresponds to).
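You can check this directly on your machine (a quick sketch; the exact eps value depends on how longdouble is implemented on your platform):

>>> import numpy
>>> one = numpy.longdouble(1)
>>> eps = numpy.finfo(numpy.longdouble).eps
>>> one + eps != one      # eps is the gap to the next representable number above 1
True
>>> one + eps/2 == one    # half an eps is rounded away again
True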

Davidmh
  • OK, I understand, but what about SymPy precision? Shouldn't I set it a little higher, to those `21` digits (on my machine), to minimize conversion errors? Usually intermediate values should not be rounded to the final number of significant digits during the calculations. (P.S. I know I won't get any more significant figures at the end than I initially had.) – arkhebuz Apr 28 '14 at 11:51
  • It depends on the number of intermediate steps. Just count them, assume the worst, and propagate the error (see the sketch after this thread). – Davidmh Apr 28 '14 at 12:15
  • So I see that, in fact, both quantities are variable: the longdouble's own precision is platform dependent, and the optimal SymPy precision depends on the actual calculations being made. – arkhebuz Apr 29 '14 at 13:38
  • Just to make sure we are not chasing wild geese: have you measured how long the SymPy computation takes as a function of the precision? Only worry about it if the difference is significant. – Davidmh Apr 29 '14 at 14:04
  • Hey, I was just curious when asking the question. Actually, the script came to life as a "byproduct" of pretty basic physics experiments (as it stands in the readme), so every data point I have carries an error on the order of 0.X% - X.X% in itself. ;-) Execution times of the cw***.py files (where the ncorr_stdev module is imported) are generally negligible for me, no matter the precision. The only downside is the broken support for NumPy's longdouble in some NumPy functions. If I touch this code again I will revert back to Python floats for simplicity. – arkhebuz Apr 29 '14 at 17:48
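A rough sketch of the guard-digit bookkeeping suggested in the comments above (the operation count is a made-up example, and the worst case assumes roughly one ulp of relative error lost per operation):

>>> import math
>>> n_ops = 500                                  # assumed number of intermediate operations
>>> base_digits = 18                             # significant digits of an 80-bit longdouble
>>> guard = int(math.ceil(math.log10(n_ops)))    # digits possibly lost in the worst case
>>> base_digits + guard                          # precision worth requesting from SymPy
21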