I'm a total noob when it comes to Core Audio, so bear with me. Basically, what I want to do is record audio from the machine's default mic, keep recording until the user decides to stop, and then do some analysis on the entire recording. I've been learning from the book "Learning Core Audio" by Chris Adamson and Kevin Avila (which is an awesome book, found it here: http://www.amazon.com/Learning-Core-Audio-Hands-On-Programming/dp/0321636848/ref=sr_1_1?ie=UTF8&qid=1388956621&sr=8-1&keywords=learning+core+audio ). I see how an audio queue works, but I'm not sure how to grab the data as it comes in from the buffers and store it in a global array.
The biggest problem is that I can't allocate the array a priori, because there's no way to know how long the user will record for. My guess is that a pointer to a global (growable) array would have to be handed to the audio queue's callback through its user-data parameter, and the callback would append each incoming buffer's data to it. However, I'm not exactly sure how to do that, or whether the callback is even the right place to be doing it.
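Here's a rough sketch of what I have in mind, just to make the question concrete. The `Recording` struct and `MyInputCallback` are names I made up, not anything from the book, and I haven't tested this:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>
#include <string.h>

// A growable byte buffer holding everything recorded so far.
typedef struct {
    void   *data;      // recorded samples
    size_t  size;      // bytes used
    size_t  capacity;  // bytes allocated
} Recording;

static void MyInputCallback(void *inUserData,
                            AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer,
                            const AudioTimeStamp *inStartTime,
                            UInt32 inNumPackets,
                            const AudioQueuePacketDescription *inPacketDescs)
{
    Recording *rec = (Recording *)inUserData;
    size_t incoming = inBuffer->mAudioDataByteSize;

    // Grow the backing store as needed (doubling keeps reallocs rare).
    if (rec->size + incoming > rec->capacity) {
        size_t newCap = rec->capacity ? rec->capacity * 2 : 64 * 1024;
        while (newCap < rec->size + incoming) newCap *= 2;
        void *grown = realloc(rec->data, newCap);
        if (grown == NULL) return; // out of memory; drop this buffer
        rec->data = grown;
        rec->capacity = newCap;
    }

    // Append this buffer's audio, then hand the buffer back to the queue.
    memcpy((char *)rec->data + rec->size, inBuffer->mAudioData, incoming);
    rec->size += incoming;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
```

I'd pass `&rec` as the `inUserData` argument to `AudioQueueNewInput()` so it shows up in the callback. Is that roughly right, or is growing memory inside the callback a bad idea?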
If I used audio units instead, I'm guessing I would need two of them: a remote IO unit to get the microphone data (although from what I can tell, RemoteIO is iOS-only and the Mac equivalent is the AUHAL unit), and a generic output unit, with the appending happening wherever AudioUnitRender() gets called (I'm guessing here, really not sure).
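This is how I picture the input side, assuming the input-callback pattern I've seen mentioned for AUHAL. The `MyRecorder` struct is made up, and the buffer list would have to be allocated somewhere else:

```c
#include <AudioToolbox/AudioToolbox.h>

// Made-up state struct, passed in via inRefCon.
typedef struct {
    AudioUnit        inputUnit;   // the AUHAL unit capturing the mic
    AudioBufferList *bufferList;  // pre-allocated to hold inNumberFrames frames
} MyRecorder;

static OSStatus MyInputProc(void *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp *inTimeStamp,
                            UInt32 inBusNumber,
                            UInt32 inNumberFrames,
                            AudioBufferList *ioData)
{
    MyRecorder *recorder = (MyRecorder *)inRefCon;

    // Pull the freshly captured samples out of the AUHAL unit
    // (input is bus 1 on AUHAL; ioData is NULL in an input callback).
    OSStatus err = AudioUnitRender(recorder->inputUnit,
                                   ioActionFlags,
                                   inTimeStamp,
                                   inBusNumber,
                                   inNumberFrames,
                                   recorder->bufferList);
    if (err != noErr) return err;

    // Append recorder->bufferList->mBuffers[0].mData
    // (mDataByteSize bytes) to the growing array, like in the
    // queue callback above. This runs on a real-time thread, though,
    // so maybe copying into a pre-allocated ring buffer is safer
    // than calling realloc here?
    return noErr;
}
```

I believe the callback gets registered with AudioUnitSetProperty() using kAudioOutputUnitProperty_SetInputCallback, but I'm not sure whether the appending belongs there or in the second unit's render callback.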
If you know where I should be doing these things, or know of any resources that explain how this works, that would be awesome.
I eventually want to learn how to do this on both the iOS and Mac OS platforms. For the time being I'm just working on the Mac.