I've found lots of examples online for working with audio in iOS, but most of them are pretty outdated and don't apply to what I'm trying to accomplish. Here's my project:
I need to capture audio samples from two sources: microphone input and stored audio files. I need to perform an FFT on these samples to produce a "fingerprint" for the entire clip, as well as apply some additional filters. The ultimate goal is to build song-recognition software along the lines of Shazam.
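To make the stored-file half concrete, here's the kind of thing I'm picturing for pulling raw samples out of a file. This is just a sketch using AVFoundation's AVAudioFile and AVAudioPCMBuffer (written with current Swift spellings); reading only the first channel is my own simplification:

```swift
import AVFoundation

// Sketch: pull the raw Float32 samples out of a stored audio file.
// Error handling is minimal on purpose.
func samplesFromFile(at url: URL) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat            // deinterleaved Float32
    let frameCount = AVAudioFrameCount(file.length)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: frameCount) else {
        return []
    }
    try file.read(into: buffer)
    guard let channelData = buffer.floatChannelData else { return [] }
    // Just the first channel for now.
    return Array(UnsafeBufferPointer(start: channelData[0],
                                     count: Int(buffer.frameLength)))
}
```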
What is the best way to capture the individual audio samples in iOS 8 for performing a Fast Fourier Transform? I imagine ending up with a large array of them, but I suspect it might not work quite like that. Secondly, how can I use the Accelerate framework to process the audio? It seems to be the most efficient way to perform complex analysis on audio in iOS.
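For the microphone half, AVAudioEngine (which I gather is new in iOS 8) seems like a candidate. Here's a sketch of what I'm imagining; the 4096 buffer size and the plain global array are placeholders, and I realize the append from the tap's callback would need to be made thread-safe in real code:

```swift
import AVFoundation

// Sketch: tap the mic input with AVAudioEngine and accumulate
// raw Float32 samples for later FFT analysis.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

var capturedSamples: [Float] = []

input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // floatChannelData is deinterleaved; take channel 0.
    guard let channelData = buffer.floatChannelData else { return }
    let frames = Int(buffer.frameLength)
    // Not thread-safe as written; the tap runs on a background queue.
    capturedSamples.append(contentsOf:
        UnsafeBufferPointer(start: channelData[0], count: frames))
}

do {
    try engine.start()
} catch {
    print("Couldn't start the audio engine: \(error)")
}
```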
All the examples I've seen online use older versions of iOS and Objective-C, and I haven't been able to successfully translate them into Swift. Does iOS 8 provide any new frameworks for this sort of thing?
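For reference, here's my rough attempt at translating the usual vDSP real-to-complex FFT pattern into Swift. I'm treating it as a sketch only: it assumes a power-of-two sample count, skips windowing entirely, and returns the squared magnitudes straight from vDSP_zvmags:

```swift
import Accelerate
import Foundation

// Sketch: forward FFT of a real signal via vDSP, returning one squared
// magnitude per frequency bin. Assumes signal.count is a power of two.
func magnitudeSpectrum(of signal: [Float]) -> [Float] {
    let n = signal.count
    guard n > 0, n & (n - 1) == 0 else { return [] }  // power-of-two check
    let log2n = vDSP_Length(log2(Float(n)))
    guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else {
        return []
    }
    defer { vDSP_destroy_fftsetup(setup) }

    var real = [Float](repeating: 0, count: n / 2)
    var imag = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    real.withUnsafeMutableBufferPointer { realPtr in
        imag.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                        imagp: imagPtr.baseAddress!)
            // Pack the real input into split-complex form...
            signal.withUnsafeBufferPointer { signalPtr in
                signalPtr.baseAddress!.withMemoryRebound(to: DSPComplex.self,
                                                         capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            // ...run the in-place forward FFT (output is scaled by 2)...
            vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
            // ...and take squared magnitudes per frequency bin.
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    return magnitudes
}
```

Is this roughly the right direction, or is there a cleaner iOS 8 way to do all of this?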