As described on FFTW's Benchmark Methodology page:
To report FFT performance, we plot the "mflops" of each FFT, which is a scaled version of the speed, defined by:
mflops = 5 N log2(N) / (time for one FFT in microseconds) for complex transforms, and
mflops = 2.5 N log2(N) / (time for one FFT in microseconds) for real transforms
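The metric is easy to reproduce in code; here is a minimal Python sketch (the function name and parameters are illustrative, not part of benchFFT):

```python
import math

def mflops(n: int, time_us: float, is_complex: bool = True) -> float:
    """FFTW benchmark speed metric: 5 N log2(N) / time for complex
    transforms, 2.5 N log2(N) / time for real ones (time in microseconds)."""
    factor = 5.0 if is_complex else 2.5
    return factor * n * math.log2(n) / time_us

# A complex size-1024 transform measured at 10 microseconds:
print(mflops(1024, 10.0))  # 5 * 1024 * 10 / 10 = 5120.0
```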
For example, if we look at the raw data file for the "1.06 GHz PowerPC 7447A, MacOSX" case, the first entry is
arprec dcif 4 27.09 1.4765625e-06 9.5e-05
which is for a double-precision complex transform (looking at the first two letters of the dcif identifier) with N=4 and mflops=27.09.
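The remaining fields can be read off the same way; below is a hypothetical parser for one raw-data line, assuming the whitespace-separated layout of the entry above (the meanings of the fields beyond size, mflops, and execution time are assumptions, flagged in the comments):

```python
def parse_entry(line: str) -> dict:
    # Hypothetical helper: splits one whitespace-separated raw-data line.
    name, problem, n, mflops, time_s, setup_s = line.split()
    return {
        "name": name,             # benchmarked program, e.g. "arprec"
        "problem": problem,       # "dcif": d = double, c = complex; the trailing
                                  #   letters presumably encode in-place/forward
        "n": int(n),              # transform size
        "mflops": float(mflops),  # the speed metric defined above
        "time": float(time_s),    # execution time in seconds
        "setup": float(setup_s),  # assumed to be the setup/planning time
    }

entry = parse_entry("arprec dcif 4 27.09 1.4765625e-06 9.5e-05")
print(entry["n"], entry["mflops"])  # 4 27.09
```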
The minimum average execution time that was measured was then:
5 * 4 * log2(4) / 27.09 = 40 / 27.09 ≈ 1.4766 microseconds
Note that this is consistent with the 1.4765625e-06 second execution time also shown in that entry.
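That consistency check is easy to script; a small sketch reusing the numbers above:

```python
import math

n, reported_mflops, reported_time_s = 4, 27.09, 1.4765625e-06

# Invert the complex-transform formula: time in microseconds = 5 N log2(N) / mflops.
time_us = 5 * n * math.log2(n) / reported_mflops
print(time_us)  # ~1.4766

# Agrees with the stored time to within rounding of the mflops value.
print(math.isclose(time_us * 1e-6, reported_time_s, rel_tol=1e-4))  # True
```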