I'm using Matt Gallagher's AudioStreamer to play an MP3 audio stream. Now I want to do an FFT in real time and visualize the frequencies using OpenGL ES on the iPhone.
I'm wondering where to catch the audio data and pass it to my "Super-Fancy-FFT-Computing-3D-Visualization-Method". Matt is using the AudioQueue framework, and there is a callback function that is set with:
err = AudioQueueNewOutput(&asbd, ASAudioQueueOutputCallback, self, NULL, NULL, 0, &audioQueue);
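For context, this is the declaration of AudioQueueNewOutput from AudioToolbox; as far as I understand it, the asbd passed in describes the format of the data that ends up in the queue's buffers:

// From <AudioToolbox/AudioQueue.h>: inFormat (the asbd above) defines the
// format of the audio data that will be enqueued into the queue's buffers.
OSStatus AudioQueueNewOutput(const AudioStreamBasicDescription *inFormat,
                             AudioQueueOutputCallback          inCallbackProc,
                             void                              *inUserData,
                             CFRunLoopRef                      inCallbackRunLoop,
                             CFStringRef                       inRunLoopCommonModes,
                             UInt32                            inFlags,
                             AudioQueueRef                     *outAQ);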
The callback looks like this:
static void ASAudioQueueOutputCallback(void *inClientData,
                                       AudioQueueRef inAQ,
                                       AudioQueueBufferRef inBuffer) { ... }
At the moment I'm passing the data from the AudioQueueBufferRef, and the result looks very weird. But with FFT and visualization there are so many points where you can screw up that I wanted to be sure I'm at least passing the right data to the FFT. I'm reading the data from the buffer like this, ignoring every second value because I only want to analyze one channel:
SInt32 *buffPointer = (SInt32 *)inBuffer->mAudioData;
int count = 0;
for (int i = 0; i < inBuffer->mAudioDataByteSize / 2; i++) {
    myBuffer[i] = buffPointer[count];
    count += 2;
}
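For comparison, if the buffer held 16-bit interleaved stereo PCM (which I'm not sure about, and which depends on the asbd above), I would expect the extraction to look roughly like this instead:

// Assumes inBuffer->mAudioData holds interleaved 16-bit stereo PCM frames
// (left, right, left, right, ...). This is an assumption, not something I
// have verified against the asbd that AudioStreamer actually uses.
SInt16 *samples   = (SInt16 *)inBuffer->mAudioData;
UInt32  numFrames = inBuffer->mAudioDataByteSize / (2 * sizeof(SInt16));

for (UInt32 frame = 0; frame < numFrames && frame < 512; frame++) {
    myBuffer[frame] = samples[frame * 2];   // left channel only
}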
Then the FFT is computed on myBuffer, which contains 512 values.
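For reference, a 512-point magnitude spectrum could be computed with the Accelerate framework's vDSP routines roughly like this (a simplified sketch, not my actual code; it assumes 512 SInt16 mono samples as input and skips windowing and scaling):

#include <CoreFoundation/CoreFoundation.h>
#include <Accelerate/Accelerate.h>

#define kFrameSize     512
#define kLog2FrameSize 9      // log2(512)

static void computeMagnitudeSpectrum(const SInt16 *samples,
                                     float *magnitudes /* kFrameSize/2 values */)
{
    // Create the FFT setup once and reuse it for every frame.
    static FFTSetup fftSetup = NULL;
    if (fftSetup == NULL) {
        fftSetup = vDSP_create_fftsetup(kLog2FrameSize, kFFTRadix2);
    }

    // Convert the integer samples to floats.
    float input[kFrameSize];
    vDSP_vflt16(samples, 1, input, 1, kFrameSize);

    // Pack the real signal into split-complex form (even indices -> real, odd -> imag).
    float real[kFrameSize / 2];
    float imag[kFrameSize / 2];
    DSPSplitComplex split = { real, imag };
    vDSP_ctoz((const DSPComplex *)input, 2, &split, 1, kFrameSize / 2);

    // In-place forward real FFT.
    vDSP_fft_zrip(fftSetup, &split, 1, kLog2FrameSize, FFT_FORWARD);

    // Squared magnitude of each of the 256 frequency bins.
    vDSP_zvmags(&split, 1, magnitudes, 1, kFrameSize / 2);
}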