I am working on a voice pitch identification problem on iOS. For analysis purposes I used Python, and it gave me appropriate results. But when I tried to recreate the same thing on iOS using the Accelerate framework, it gave incorrect or weird results. Can somebody please help me with this?
I want to perform autocorrelation using FFT convolution, which works very well in Python with scipy.signal.fftconvolve. But when I try to do the same thing using vDSP_conv, it gives incorrect results.
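For context, the Python side looks roughly like this. This is a minimal sketch with a synthetic test signal (my real input is a recorded voice buffer), just to show the fftconvolve-based autocorrelation I am trying to reproduce:

```python
import numpy as np
from scipy import signal

# Synthetic test signal: a 220 Hz sine at 44.1 kHz (illustrative only;
# the real input is a voice buffer).
fs = 44100
n = 2048
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 220.0 * t)

# Autocorrelation via FFT convolution: convolve the signal with its
# time-reversed copy. mode='full' gives 2*n - 1 lags; the slice from
# index n - 1 onwards is the autocorrelation for lags 0, 1, 2, ...
acf = signal.fftconvolve(x, x[::-1], mode='full')[n - 1:]

# Sanity check against the direct (non-FFT) autocorrelation.
direct = np.correlate(x, x, mode='full')[n - 1:]
print(np.allclose(acf, direct))  # True
```

It is this `acf` that I am trying to reproduce on iOS with vDSP_conv (or an FFT-based equivalent), and that is where the results diverge.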
It would be a great help if somebody with experience or knowledge of this could guide me or explain how fftconvolve works. Thanks in advance.