I have a CSV file that I read in (using Python 3 in a Jupyter notebook, with the same results from the terminal). I compute the FFT via the numpy.fft.fft module and get the strange result that the FFT of the data returns the original data: a complex vector whose real part is exactly equal to the (real) input data and whose imaginary part is identically zero. The code is shown below:
import csv
import numpy as np

with open('/Users/amacrae/Documents/PMDi/MCT/Jan10/msin287.csv', 'r') as f:
    c = csv.reader(f)
    y = np.array(list(c), dtype=float)

YF = np.fft.fft(y)
print(np.sum(YF.real - y))
print(np.sum(YF.imag))
> 0.0
> 0.0
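One way to see what np.fft.fft is actually transforming is to check the array's shape, since np.fft.fft operates along the last axis by default. A minimal sketch (using an in-memory CSV rather than my file, so the values here are made up):

```python
import csv
import numpy as np

# Small stand-in for the real CSV file: one number per line.
lines = ["1.0", "2.0", "3.0", "4.0"]

# csv.reader yields one list per row, so list(c) is a list of lists.
y = np.array(list(csv.reader(lines)), dtype=float)
print(y.shape)  # one row per CSV line, one column per field

# fft transforms along the LAST axis by default; for rows of length 1
# the transform of each row is just that row, i.e. the input comes back.
YF = np.fft.fft(y)
print(np.sum(YF.real - y), np.sum(YF.imag))
```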
To ensure that it's not just the data, I plotted the identical data in MATLAB and got the correct results (the data are designed so that the magnitude of the FFT is constant over a window in frequency space and the signal has a real ifft). The corresponding MATLAB code is:
y = csvread('/Users/amacrae/Documents/PMDi/MCT/Jan10/msin287.csv');
plot(abs(fft(y)))
As far as I can tell the results should be the same in either language. The real parts of the imported data match in both cases (same length and values), but the FFTs do not. The data are quite long (100,000 samples), but if I create a random 100,000-sample array in Python, I get an FFT with both real and imaginary parts. Does anyone have an idea what might be causing this?
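For reference, the random-array check I mentioned can be sketched as follows (the seed and generator choice are mine; any 1-D real array behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)  # 1-D real array of 100,000 samples
YF = np.fft.fft(y)

# The FFT of a generic 1-D real signal is genuinely complex:
print(np.abs(YF.imag).max() > 0)
```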