I'd like to learn how to make use of multiple CPU cores when rendering audio from a single input parameter array in OSX.
In AudioToolbox, one rendering callback normally lives on a single thread, which seemingly gets processed by a single CPU core.
How can one deal with input data overflowing that core while the other 3, 5 or 7 cores stay practically idle?
It is not possible to know in advance how many cores will be available on a particular machine, of course. Is there a way of (statically or dynamically) allocating rendering callbacks to different threads or "threadbare blocks"? Is there a way of precisely synchronising the moment at which several rendering callbacks, each on its own (highest-priority) thread, produce their audio buffers in parallel? Could the GCD API perhaps be of any use here?
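To make the question more concrete, here is a rough sketch of the kind of thing I have in mind: fanning the per-buffer work out with dispatch_apply and relying on its blocking behaviour as the synchronisation point. The names MyRenderState and renderSubBlock are just placeholders, the chunk count is a guess, and I am not at all sure that blocking inside a real-time render callback like this is actually acceptable:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <dispatch/dispatch.h>

typedef struct {
    /* whatever state the renderer needs (placeholder) */
    double phase;
} MyRenderState;

/* Hypothetical helper: fills out[startFrame .. startFrame+numFrames)
   independently of the other chunks. */
static void renderSubBlock(MyRenderState *state, float *out,
                           UInt32 startFrame, UInt32 numFrames)
{
    for (UInt32 i = 0; i < numFrames; ++i)
        out[startFrame + i] = 0.0f; /* real synthesis would go here */
}

static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    MyRenderState *state = (MyRenderState *)inRefCon;
    float *out = (float *)ioData->mBuffers[0].mData;

    /* Split the buffer into one chunk per (assumed) core and fan the work
       out with dispatch_apply, which blocks until every chunk is done --
       that blocking is my stand-in for "synchronising the moment" at which
       the partial buffers are ready. */
    const size_t chunks = 4; /* guess; could be derived from the core count */
    const UInt32 framesPerChunk = inNumberFrames / chunks;

    dispatch_apply(chunks,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0),
                   ^(size_t i) {
        UInt32 start = (UInt32)i * framesPerChunk;
        UInt32 count = (i == chunks - 1) ? inNumberFrames - start
                                         : framesPerChunk;
        renderSubBlock(state, out, start, count);
    });

    return noErr;
}
```

Is something along these lines workable, or is there a more appropriate mechanism for keeping the worker threads at audio priority and in lock-step with the render callback?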
Thanks in advance!
PS. This question is related to another question I posted a while ago: OSX AudioUnit SMP, with the difference that I now seem to better understand the scope of the problem.