I need to perform computations in the highest possible precision, regardless of whether the arguments passed in are integers, floats, or other numeric types. One way I can think of doing this is:
import numpy as np

def foo(x, y, z):
    a = np.float64(0)  # superfluous "declaration"
    a = x + y * z
    return a
I can see a couple of problems with this: 1) I think I need to convert the inputs, not the result, for this to work; 2) it looks ugly (the first assignment is a superfluous C-style declaration).
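For instance, what I mean by converting the inputs is something like the sketch below; foo is just a made-up example, and I'm not sure casting each argument with np.float64 is the idiomatic way:

import numpy as np

def foo(x, y, z):
    # Promote every input to float64 up front so that all
    # intermediate arithmetic happens at that precision.
    x, y, z = np.float64(x), np.float64(y), np.float64(z)
    return x + y * z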
How can I pythonically perform all calculations in the highest available precision, and store the results in that same precision (which I believe is numpy.float64)?