
For one of my projects, I need to repeatedly evaluate an expression involving the generalized hypergeometric function. While SciPy does not support it, MPMath does. However, using mp.hyper(...) is very time-consuming, so I decided to use fp.hyper(...) from mpmath's fast double-precision context instead. Unfortunately, the behaviour seems to be completely different. My example is below:

from mpmath import mp, fp
from math import sin, cos, pi, sqrt  # sqrt is needed for A below

H = 0.2
k = 2

A = 4 * sqrt(H) / (1 + 2 * H)
B = 4 * pi / (3 + 2 * H)
C = H/2 + 3/4


# High-precision version, using mp.hyper
f_high = lambda t: (B * k * t * sin(pi * k) *
                   mp.hyper([1], [C+1/2, C+1], -(k*pi*t)**2) +
                   cos(pi * k) * mp.hyper([1], [C, C + 1/2],
                   -(k*pi*t)**2)) * A * t**(H + 1/2)


# Fast double-precision version; identical except mp replaced by fp
f_low = lambda t: (B * k * t * sin(pi * k) *
                   fp.hyper([1], [C+1/2, C+1], -(k*pi*t)**2) +
                   cos(pi * k) * fp.hyper([1], [C, C + 1/2],
                   -(k*pi*t)**2)) * A * t**(H + 1/2)
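
For reference, a quick spot check like the one below (just a sketch; the sample points are arbitrary) shows at individual points how far the fp version drifts from the mp version, without any plotting:

# Compare the two versions at a few arbitrary sample points
# (assumes f_high and f_low from above are defined).
for t in (0.1, 0.5, 1.0):
    hi = f_high(t)
    lo = f_low(t)
    print(t, hi, lo, abs(hi - lo))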

[Plot 1: obtained using fp.plot(f_high, [0, 5])]

[Plot 2: obtained using fp.plot(f_low, [0, 5])]

The first plot shows fp.plot(f_high, [0, 1]), the second one fp.plot(f_low, [0, 1]). In case anyone is wondering: the functions look ugly, but one is a copy of the other with mp replaced by fp, so there is no chance they differ in any other way.

I also plotted it in Mathematica, and the result looks much more like the first plot (the high-precision one).

Looks like there is a mistake with the implementation of the fp.hyper function, right?

Ben

1 Answer


The documentation for fp says (emphasis added):

Due to intermediate rounding and cancellation errors, results computed with fp arithmetic may be much less accurate than those computed with mp using an equivalent precision (mp.prec = 53), since the latter often uses increased internal precision. The accuracy is highly problem-dependent: for some functions, fp almost always gives 14-15 correct digits; for others, results can be accurate to only 2-3 digits *or even completely wrong*. The recommended use for fp is therefore to speed up large-scale computations where accuracy can be verified in advance on a subset of the input set, or where results can be verified afterwards.
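
To get a feel for the cancellation the documentation is talking about, you can sum the defining series of the 1F2 function by hand and compare the largest term against the final sum. The sketch below is only an illustration of the cancellation inherent in this series (not a claim about what fp.hyper does internally); the parameters mirror the question with H = 0.2 and k = 2, and t = 5 is an arbitrary point with a large argument. Whenever the largest term exceeds the final result by more than roughly 16 orders of magnitude, no plain double-precision summation of that series can retain a single correct digit.

from mpmath import mp, pi

mp.dps = 50                    # high working precision so this hand summation stays accurate

H, k, t = 0.2, 2, 5.0
C = H / 2 + 3.0 / 4
b1, b2 = C, C + 0.5            # the parameters of hyper([1], [C, C + 1/2], z)
z = -(k * pi * t) ** 2

term = mp.mpf(1)               # n-th term of 1F2(1; b1, b2; z); note (1)_n / n! = 1
total = mp.mpf(0)
biggest = mp.mpf(0)
for n in range(500):
    total += term
    biggest = max(biggest, abs(term))
    term *= z / ((b1 + n) * (b2 + n))   # ratio of consecutive series terms

print(total)                   # the partial sums converge to hyper([1], [b1, b2], z)
print(biggest / abs(total))    # cancellation factor; digits lost ~ log10 of this number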

If fp were just a drop-in replacement for mp with faster computation and no downside, there would be no reason for mp to exist. In this case, it appears that fp is not suited for your task, so you'll have to use mp.
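
If you still need the speed, the pattern the documentation recommends might look something like the sketch below (the helper name, tolerance, and sample grid are made up for illustration): spot-check fp against mp on a few inputs first, and only use the fp version if they agree.

def fp_agrees_with_mp(f_mp, f_fp, samples, rel_tol=1e-6):
    # Spot-check the fast double-precision version against the mp version.
    for t in samples:
        ref = f_mp(t)
        if abs(f_fp(t) - ref) > rel_tol * (abs(ref) + 1e-30):
            return False
    return True

# Hypothetical usage with the f_high/f_low from the question:
# f = f_low if fp_agrees_with_mp(f_high, f_low, [0.1, 0.25, 0.5, 0.75, 1.0]) else f_high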

BrenBarn
  • Of course it is clear that there is a tradeoff between precision and computation time, but the fp implementation shows completely different behaviour? – Ben May 27 '16 at 19:17
  • I don't know what to tell you except what the documentation already says. Note where it says "or even completely wrong". Presumably there is a loss of precision at some crucial point in your computation that results in "completely wrong" results. – BrenBarn May 27 '16 at 19:19
    @sbm: There is a tradeoff. You have traded away essentially *all* of your precision. You were warned that this could happen. – user2357112 May 27 '16 at 19:22