I noticed something peculiar: for non-trivial exponents (i.e. not 1, 2, or 0.5), numpy's power
function seems to be slower than computing the same thing through logarithms.
Consider the following timing code:
import timeit
import numpy as np

two_thirds = 2.0 / 3.0

def simple(x):
    return x**two_thirds

def numpy_power(x):
    return np.power(x, two_thirds)

def numpy_exp_log(x):
    return np.exp(np.log(x) * two_thirds)

arr = np.random.rand(100)

min(timeit.Timer("simple(arr)", setup="from __main__ import arr, simple, two_thirds").repeat(10, 10000))
min(timeit.Timer("numpy_power(arr)", setup="from __main__ import arr, numpy_power, two_thirds").repeat(10, 10000))
min(timeit.Timer("numpy_exp_log(arr)", setup="from __main__ import arr, numpy_exp_log, two_thirds").repeat(10, 10000))
According to these timings, my numpy_exp_log
function takes only about 65% of the time of the other two. All three return the same values (modulo floating-point rounding errors, which don't matter much to me). This seems really peculiar to me.
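As a quick sanity check of the "same values modulo rounding" claim, the two approaches can be compared with np.allclose (a minimal sketch; it assumes strictly positive inputs, since np.log is not defined for x <= 0):

```python
import numpy as np

two_thirds = 2.0 / 3.0
# Keep inputs strictly positive so np.log is well-defined
arr = np.random.rand(100) + 1e-12

# Compare np.power against the identity x**a == exp(a * log(x))
direct = np.power(arr, two_thirds)
via_log = np.exp(np.log(arr) * two_thirds)

print(np.allclose(direct, via_log))  # agrees within default tolerances
```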
Is it possible that my function is faster than numpy's power
only on my hardware, and that other hardware would not show such a difference? How much of the computation is handed off to hardware-specific instructions? Can I expect this difference to occur on pretty much any machine running the same versions of Python/Numpy? (Python 3.6.5, Numpy 1.16.1)