I have been trying to work with Cython and I ran into a peculiar scenario: a sum function over an array takes 3 times as long as a function that computes the average of the same array.

Here are my three functions:
cpdef FLOAT_t cython_sum(cnp.ndarray[FLOAT_t, ndim=1] A):
    cdef double [:] x = A
    cdef double sum = 0
    cdef unsigned int N = A.shape[0]
    for i in xrange(N):
        sum += x[i]
    return sum

cpdef FLOAT_t cython_avg(cnp.ndarray[FLOAT_t, ndim=1] A):
    cdef double [:] x = A
    cdef double sum = 0
    cdef unsigned int N = A.shape[0]
    for i in xrange(N):
        sum += x[i]
    return sum/N

cpdef FLOAT_t cython_silly_avg(cnp.ndarray[FLOAT_t, ndim=1] A):
    cdef unsigned int N = A.shape[0]
    return cython_avg(A)*N
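Ignoring the Cython layer, the arithmetic relationship between the three functions can be sketched in plain Python/NumPy (an illustration only, not the compiled build; the `py_*` names are mine):

```python
import numpy as np

def py_sum(A):
    # plain-Python analogue of cython_sum: accumulate element by element
    s = 0.0
    for v in A:
        s += v
    return s

def py_avg(A):
    # analogue of cython_avg: same loop, divided by the length at the end
    return py_sum(A) / A.shape[0]

def py_silly_avg(A):
    # analogue of cython_silly_avg: average multiplied back by the length
    return py_avg(A) * A.shape[0]

A = np.random.random(10_000)
# All three agree with the NumPy reductions up to rounding:
assert abs(py_sum(A) - float(np.sum(A))) < 1e-6
assert abs(py_avg(A) - float(np.mean(A))) < 1e-10
assert abs(py_silly_avg(A) - py_sum(A)) < 1e-6
```

So the three bodies are arithmetically equivalent; any timing gap has to come from how they compile, not from what they compute.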
Here are the run times in IPython:
In [7]: A = np.random.random(1000000)
In [8]: %timeit np.sum(A)
1000 loops, best of 3: 906 us per loop
In [9]: %timeit np.mean(A)
1000 loops, best of 3: 919 us per loop
In [10]: %timeit cython_avg(A)
1000 loops, best of 3: 896 us per loop
In [11]: %timeit cython_sum(A)
100 loops, best of 3: 2.72 ms per loop
In [12]: %timeit cython_silly_avg(A)
1000 loops, best of 3: 862 us per loop
I am unable to account for the runtime jump in the simple cython_sum. Is it because of some memory allocation? Since these are random numbers from 0 to 1, the sum is around 500K.
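The 500K figure follows from the distribution: uniform samples on [0, 1) have mean roughly 0.5, so a million of them sum to about 500,000. A quick check (seeded here for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random(1_000_000)  # uniform on [0, 1), mean ~ 0.5
total = A.sum()
# With N = 1e6 and mean 0.5, the sum lands close to 500,000
# (the standard deviation of the sum is only ~289).
print(total)
```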
Since line_profiler doesn't work with Cython, I was unable to profile my code.
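line_profiler can't see into compiled extension code, but Cython itself can emit hooks for Python's standard profiler via a directive. A sketch of the idea (the function and array names refer to the code above):

```cython
# cython: profile=True
#
# With this directive at the top of the .pyx file, Cython generates
# cProfile hooks for def/cpdef functions, so after rebuilding the
# extension you can run, from Python:
#
#   import cProfile
#   cProfile.run("cython_sum(A)")
#
# The hooks add per-call overhead, so absolute timings shift, but the
# relative cost of calls inside the module becomes visible.
```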