I have the following code, which computes a normalized cross-correlation to look for similarities between two signals in Python:
import numpy as np

def normcorr(template, srchspace):
    template = (template - np.mean(template)) / (np.std(template) * len(template))  # normalize template
    CCnorm = srchspace.copy()
    CCnorm = CCnorm[np.shape(template)[0]:]  # trim CC matrix to the valid lags
    for a in range(len(CCnorm)):
        s = srchspace[a:a + np.shape(template)[0]]  # current window of the search space
        sp = (s - np.mean(s)) / np.std(s)  # normalize the window
        CCnorm[a] = np.sum(np.multiply(template, sp))
    return CCnorm
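As a point of comparison, I believe the same computation can be vectorized in plain NumPy without the explicit loop. This is only a rough sketch under my own assumptions: it needs NumPy 1.20+ for np.lib.stride_tricks.sliding_window_view, and normcorr_vec is just a name I made up:

import numpy as np

def normcorr_vec(template, srchspace):
    n = len(template)
    t = (template - np.mean(template)) / (np.std(template) * n)
    # one row per lag; same windows as srchspace[a:a+n] in the loop version
    windows = np.lib.stride_tricks.sliding_window_view(srchspace, n)[:len(srchspace) - n]
    means = windows.mean(axis=1, keepdims=True)
    stds = windows.std(axis=1, keepdims=True)
    return ((windows - means) / stds * t).sum(axis=1)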
but as you can imagine it is far too slow. The Cython documentation promises large increases in speed for loops that would otherwise run in raw Python, so I attempted to write some Cython code with data typing of the variables, which looks like this:
from __future__ import division
import numpy as np
import math as m
cimport numpy as np
cimport cython

def normcorr(np.ndarray[np.float32_t, ndim=1] template, np.ndarray[np.float32_t, ndim=1] srchspace):
    cdef int a
    cdef np.ndarray[np.float32_t, ndim=1] s
    cdef np.ndarray[np.float32_t, ndim=1] sp
    cdef np.ndarray[np.float32_t, ndim=1] CCnorm
    template = (template - np.mean(template)) / (np.std(template) * len(template))
    CCnorm = srchspace.copy()
    CCnorm = CCnorm[len(template):]
    for a in range(len(CCnorm)):
        s = srchspace[a:a + len(template)]
        sp = (s - np.mean(s)) / np.std(s)
        CCnorm[a] = np.sum(np.multiply(template, sp))
    return CCnorm
but once I compile it, the code actually runs slower than the pure Python code. I found here (How to call numpy/scipy C functions from Cython directly, without Python call overhead?) that calling NumPy from Cython can significantly slow down the code. Is this the issue with my code, in which case I would have to define inline functions to replace all the calls to np? Or is there something else I am doing wrong that I am missing?
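For concreteness, here is a rough sketch of what I imagine replacing the per-window NumPy calls would look like, with the mean, standard deviation, and dot product done in plain C loops. This is only my own guess at the intended fix, and normcorr_loop is a made-up name:

# cython: boundscheck=False, wraparound=False
from __future__ import division
import numpy as np
cimport numpy as np
from libc.math cimport sqrt

def normcorr_loop(np.ndarray[np.float32_t, ndim=1] template,
                  np.ndarray[np.float32_t, ndim=1] srchspace):
    cdef int n = template.shape[0]
    cdef int nlags = srchspace.shape[0] - n
    cdef int a, i
    cdef double mean, var, std, acc
    # normalize the template once, outside the loop
    cdef np.ndarray[np.float32_t, ndim=1] t = \
        ((template - np.mean(template)) / (np.std(template) * n)).astype(np.float32)
    cdef np.ndarray[np.float32_t, ndim=1] CCnorm = np.empty(nlags, dtype=np.float32)
    for a in range(nlags):
        # mean of the current window
        mean = 0.0
        for i in range(n):
            mean += srchspace[a + i]
        mean /= n
        # standard deviation of the current window
        var = 0.0
        for i in range(n):
            var += (srchspace[a + i] - mean) ** 2
        std = sqrt(var / n)
        # dot product of the normalized template and normalized window
        acc = 0.0
        for i in range(n):
            acc += t[i] * (srchspace[a + i] - mean) / std
        CCnorm[a] = acc
    return CCnorm

The idea, as I understand it, is that the inner loops then compile down to pure C with no Python-object traffic per iteration; whether that is actually the right fix here is exactly my question.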