While trying to solve a problem, I came across the following limitation: numpy
doesn't seem to have arbitrary integer precision!
Here is what I did:
```python
import numpy as np

data = list(range(1, 21))

# ------- Regular Python --------
prod = 1
for el in data:
    prod *= el
print(prod)

# ------- NumPy --------
parr = np.prod(np.array(data, np.int64))
print(parr)
```
The results obtained are as follows:
2432902008176640000
2432902008176640000
So far, so good. Now, if we increase the upper limit of the range from 21 to just 22, we can observe the limitation; the results are now:
51090942171709440000
-4249290049419214848
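The negative number is not random garbage: `int64` arithmetic is fixed-width, so the product wraps around modulo 2**64 and the result is then read back as a signed 64-bit value. A quick sanity check (the variable names are just for illustration) confirms that the wrapped value matches NumPy's output exactly:

```python
true_prod = 51090942171709440000  # the exact product of 1..21, from pure Python

# Reduce modulo 2**64, then reinterpret as a signed 64-bit integer --
# this mimics what fixed-width int64 multiplication effectively does.
wrapped = true_prod % 2**64
if wrapped >= 2**63:
    wrapped -= 2**64

print(wrapped)  # -4249290049419214848, the same value NumPy printed
```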
My questions are
- why doesn't NumPy inherit the arbitrary integer precision from Python?
- why do we have this limitation? (or, am I making some mistake?)
- how to overcome it?
Update
The possible duplicate question suggests using `dtype=object`, but that takes away all of NumPy's performance benefits for numeric data!
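For completeness, here is what the `dtype=object` workaround looks like: it does produce the exact result, because each element is then a boxed Python `int` and the multiplication falls back to Python's arbitrary-precision arithmetic (which is also why the speed advantage disappears):

```python
import numpy as np

data = list(range(1, 22))

# Each element is a Python int object, so np.prod multiplies
# with Python's arbitrary-precision arithmetic -- exact but slow.
parr = np.prod(np.array(data, dtype=object))
print(parr)  # 51090942171709440000
```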