I want to process audio offline on iOS, but have a query regarding memory usage. If I use AVAssetReader to decompress an MP3 to raw PCM data, the memory footprint would be huge. So how would I go about processing (offline FFT) an mp3 file if decompression would lead to the app using too much memory? I assume I have to stream it somehow, but I don't know how to do this in iOS.
1 Answer
0
AVAssetReader can be paired with AVAssetWriter to write the decoded output to a file.
To get PCM, write the decoded output in WAV format, then skip the RIFF header(s) when reading it back. You then only need to pull as much data from the WAV file into memory at any one time as your FFT length requires. That should only become a memory-footprint problem if each FFT is well over a million samples long.
You can use standard C stdio calls (fopen, fread, fgetc, etc.) to read a file stream under iOS, or read from an NSInputStream into an NSData.
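The streaming idea can be sketched in plain C. This is a minimal sketch, not the poster's actual code: the file path, the fixed 44-byte header (which assumes a canonical PCM WAV with no extra chunks), and the 1024-point window length are all illustrative assumptions, and the helper that writes a test WAV exists only to make the example self-contained (it also assumes a little-endian host).

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define WAV_HEADER_BYTES 44   /* canonical PCM WAV; real files may carry extra chunks */
#define FFT_LEN 1024          /* illustrative window length */

/* Write a minimal 44-byte-header mono 16-bit PCM WAV (silence), so the
   example is self-contained. Assumes a little-endian host. */
static void write_test_wav(const char *path, uint32_t nsamples) {
    FILE *f = fopen(path, "wb");
    int16_t *buf = calloc(nsamples, sizeof(int16_t));
    uint32_t data_bytes = nsamples * 2, riff_size = 36 + data_bytes;
    uint32_t rate = 44100, byte_rate = rate * 2, fmt_size = 16;
    uint16_t pcm = 1, channels = 1, block_align = 2, bits = 16;
    fwrite("RIFF", 1, 4, f); fwrite(&riff_size, 4, 1, f); fwrite("WAVE", 1, 4, f);
    fwrite("fmt ", 1, 4, f); fwrite(&fmt_size, 4, 1, f);
    fwrite(&pcm, 2, 1, f);   fwrite(&channels, 2, 1, f);
    fwrite(&rate, 4, 1, f);  fwrite(&byte_rate, 4, 1, f);
    fwrite(&block_align, 2, 1, f); fwrite(&bits, 2, 1, f);
    fwrite("data", 1, 4, f); fwrite(&data_bytes, 4, 1, f);
    fwrite(buf, sizeof(int16_t), nsamples, f);
    free(buf); fclose(f);
}

/* Stream the file one FFT window at a time: only FFT_LEN samples (2 KB here)
   are resident in memory, regardless of how long the song is. A trailing
   partial window is dropped for simplicity. */
static long count_windows(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    fseek(f, WAV_HEADER_BYTES, SEEK_SET);   /* skip the RIFF header */
    int16_t window[FFT_LEN];
    long windows = 0;
    while (fread(window, sizeof(int16_t), FFT_LEN, f) == FFT_LEN) {
        /* ...hand `window` to the FFT here... */
        windows++;
    }
    fclose(f);
    return windows;
}
```

On iOS the same loop works against a file in the app's temporary directory; the key point is that the loop's working set is one window, not the whole decoded song.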

hotpaw2
- So, to clarify, I should first perform the conversion and write it to a temporary file on disk. If I then read in a windowed number of samples per loop (e.g. 1024), where do I store all the FFT data without it getting overwritten? I assume I need an external buffer, since the Accelerate FFT works in place, but how do I calculate how big this external buffer will be, since I don't know its size in advance? Would it simply be 44100 * the duration of the song in seconds? Thanks for the reply. – Skoder Sep 25 '11 at 18:34
- This probably depends on what you want to do with the FFT results. – hotpaw2 Sep 25 '11 at 18:45
- I want to create a spectrum visualizer, but not in real time, i.e. I want to pre-process the entire thing. – Skoder Sep 25 '11 at 19:04