
I am working on a project related to gravitational lensing, for which I need to evaluate the confluent hypergeometric function 1F1(a, b, z) on an array z of ~10^8 complex points, with a = 1+0.48j and b = 1. I am looking for an efficient way to evaluate this on large arrays. The SciPy implementation is fast but does not accept complex arguments for a and b.

mpmath seems to be the best way to calculate 1F1 with complex parameters, but mpmath.hyp1f1 does not accept array arguments. The best workaround I found was to use np.vectorize or np.frompyfunc so that a NumPy array can be passed in. However, this is extremely slow and would take days to execute (even with gmpy2 installed). I assume this is because each element still goes through mpmath's arbitrary-precision arithmetic, one call at a time.
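For reference, this is roughly what the np.vectorize workaround looks like (a minimal sketch; the small z array is only illustrative, not my real data):

```python
import numpy as np
import mpmath

a, b = 1 + 0.48j, 1

# Wrap mpmath.hyp1f1 so it can be applied elementwise to a NumPy array;
# each element is still evaluated one at a time in arbitrary precision.
hyp1f1_vec = np.vectorize(lambda z: complex(mpmath.hyp1f1(a, b, complex(z))),
                          otypes=[np.complex128])

z = np.linspace(0, 5, 1000) * (1 + 1j)   # illustrative input
F = hyp1f1_vec(z)                        # complex128 array of 1F1(a, b, z)
```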

A non-Python implementation would be fine as well, as long as I can somehow save the result to disk and read it into my Python code. I have seen some implementations (for example https://www.math.ucla.edu/~mason/research/pearson_final.pdf) which could possibly work, but I'm not sure.

Another possible way would be to interpolate the function (consecutive points in my input array are extremely close), but I'm not sure what the best way to do that would be.
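To make the interpolation idea concrete, here is a minimal sketch of one way it could work, assuming the values vary smoothly along the array; the stride, the helper names, and the coarse/fine split are all illustrative choices, not an existing routine:

```python
import numpy as np
import mpmath
from scipy.interpolate import CubicSpline

a, b = 1 + 0.48j, 1

def hyp1f1_exact(zs):
    # exact (slow) evaluation with mpmath, one point at a time
    return np.array([complex(mpmath.hyp1f1(a, b, complex(zz))) for zz in zs])

def hyp1f1_interp(z, stride=1000):
    # evaluate exactly on a coarse subset of indices (including the last point),
    # then interpolate real and imaginary parts separately along the array index
    idx = np.unique(np.r_[np.arange(0, len(z), stride), len(z) - 1])
    coarse = hyp1f1_exact(z[idx])
    t = np.arange(len(z))
    return CubicSpline(idx, coarse.real)(t) + 1j * CubicSpline(idx, coarse.imag)(t)
```

Whether this is accurate enough would need to be checked against exact values on a test subset, since it relies entirely on how smoothly 1F1 varies along the array.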

Thanks!

AstroNerd
    Naive question, but could you just break the "SciPy implementation" into real and imaginary parts and evaluate it as a function of the form `a+ib`? This way the inputs are all real numbers, while the output can be recombined into a complex array. – t.o. Jul 26 '22 at 19:04
    What is the maximum magnitude of the 10^8 complex points? – Warren Weckesser Jul 26 '22 at 19:17

1 Answer


I was having a very similar problem to yours.

I figured out that the mpmath package has a "hidden" set of functions with (only) float precision, which one can access through the fp. prefix. This does not exist for hyp1f1, but it does for the more general hyper: fp.hyper([a], [b], z) in mpmath is equivalent to hyp1f1(a, b, z), but a lot faster. If you wrap this with np.vectorize, it should make your calculation substantially faster.

Disclaimer: I got an error message saying that some complex values were converted to real by dropping the imaginary part when evaluating this, but so far the results I have gotten seem sensible and consistent with the hyp1f1(a, b, z) values.

Added: It seems that fp.hyper does not like NumPy datatypes even when they are scalars: if a, b, or z is a NumPy scalar (for example one element of a NumPy array), it simply returns 1 without raising an error, regardless of the actual input. If you use np.vectorize, however, everything should be fine.
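Putting that together, a minimal sketch of what I mean (the wrapper converts each element to a plain Python complex before calling fp.hyper, which avoids the NumPy-scalar issue above; the z array here is just illustrative):

```python
import numpy as np
from mpmath import fp

a, b = 1 + 0.48j, 1

def hyp1f1_fp(z):
    # fp.hyper([a], [b], z) is 1F1(a, b, z) in (fast) double precision;
    # convert z to a plain Python complex so fp.hyper does not misbehave
    return fp.hyper([a], [b], complex(z))

hyp1f1_fp_vec = np.vectorize(hyp1f1_fp, otypes=[np.complex128])

z = np.linspace(0, 5, 1000) * (1 + 1j)   # illustrative input
F = hyp1f1_fp_vec(z)
```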

Either way: use at your own risk.

Frederix96