I have been struggling with Numba: whenever I write a function with it, the first call has a very long warm-up. Is there a way to pre-warm the JIT-compiled function?
For example, if I write the function y = 1/(log(x+0.1))^2 as a Numba function:
@jit(parallel=True, error_model='numpy')
def f_numba(x_vec):
    N = len(x_vec)
    res = np.empty(N)
    for i in prange(N):
        x = x_vec[i]
        x = np.log(x + 0.1)
        res[i] = 1 / (x * x)
    return res
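For reference, the plain vectorized NumPy equivalent of the same formula (without Numba) would be roughly:

# same formula, y = 1/(log(x + 0.1))^2, as a single NumPy expression
y = 1.0 / np.log(x_vec + 0.1)**2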
I used this array to test the speed of the function:
N=150000
x_vect=np.random.rand(N)
To measure the execution time of the function, I used this:
for i in range(5):
    start = timer()
    f_numba(x_vect)
    print('#', timer() - start)
It takes 0.8 seconds for the first run and about 0.001 seconds for every subsequent run. It would be great if I could somehow pre-warm the JIT-compiled function to avoid this latency. I tried warming it up by first calling it on a small dummy array, x_warm = np.random.rand(10), followed by f_numba(x_warm), but the warm-up time didn't change at all. Any suggestions?
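For clarity, my attempt looked roughly like this (the timing loop is the same one shown above):

# warm-up attempt: call the function once on a small dummy array first
x_warm = np.random.rand(10)
f_numba(x_warm)

# then time the real calls exactly as before
for i in range(5):
    start = timer()
    f_numba(x_vect)
    print('#', timer() - start)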
For completeness, here are the imports used:
import numpy as np
from timeit import default_timer as timer
from numba import jit, prange
I'm running this in a Jupyter Notebook with Python 3.7.