
I'd like to learn how to deal with the possibility of using multiple CPU cores for audio rendering of a single input parameter array on OS X.

In AudioToolbox, a rendering callback normally lives on a single thread, which is seemingly processed by a single CPU core.

How can one deal with input data overflowing that core while the other 3, 5 or 7 cores stay practically idle?

It is, of course, not possible to know in advance how many cores will be available on a particular machine. Is there a way of (statically or dynamically) allocating rendering callbacks to different threads or "thread blocks"? Is there a way of precisely synchronising the moment at which several rendering callbacks, each on its own (highest-priority) thread, produce their audio buffers in parallel? Could the GCD API perhaps be of any use?

Thanks in advance!

PS. This question is related to another question I posted a while ago: OSX AudioUnit SMP, with the difference that I now seem to better understand the scope of the problem.

user3078414
    This is a good question but I don't think there is an answer. I'm not aware of any machinery on OS X to allow a thread to run on multiple cores, or of a way to affect the number of rendering threads. – sbooth Jan 11 '15 at 02:53

2 Answers


No matter how you set up your audio processing on macOS – be it just a single render callback or a whole application suite – CoreAudio will always provide you with exactly one real-time audio thread. This thread runs at the highest priority there is, and is therefore the only context in which the system can give you at least some guarantees about processing time.

If you really need to distribute the load over multiple CPU cores, you have to create your own threads manually and share sample and timing data across them. However, you will not be able to create a thread with the same priority as the system's audio thread, so your additional threads should be considered much "slower" than the audio thread. That means you may end up waiting on one of them for longer than the time you have available in the render callback, which results in an audible glitch.
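If it helps to make that concrete, one common way to structure such a split is to have a worker thread pre-render audio into a lock-free single-producer/single-consumer ring buffer, while the real-time render callback does nothing but copy samples out. The following is only a minimal sketch under my own assumptions (buffer size, mono float samples, exactly one producer and one consumer); it is not code from the answer above.

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

#define RING_FRAMES 8192u                    /* power of two, in sample frames */

typedef struct {
    float            samples[RING_FRAMES];
    _Atomic uint32_t writePos;               /* advanced only by the worker    */
    _Atomic uint32_t readPos;                /* advanced only by the callback  */
} RingBuffer;

/* Worker thread side: push as many frames as currently fit. */
static uint32_t ring_write(RingBuffer *rb, const float *src, uint32_t frames)
{
    uint32_t w = atomic_load_explicit(&rb->writePos, memory_order_relaxed);
    uint32_t r = atomic_load_explicit(&rb->readPos,  memory_order_acquire);
    uint32_t space = RING_FRAMES - (w - r);
    if (frames > space) frames = space;
    for (uint32_t i = 0; i < frames; ++i)
        rb->samples[(w + i) & (RING_FRAMES - 1)] = src[i];
    atomic_store_explicit(&rb->writePos, w + frames, memory_order_release);
    return frames;
}

/* Real-time callback side: pop frames, zero-fill on underrun, never block. */
static void ring_read(RingBuffer *rb, float *dst, uint32_t frames)
{
    uint32_t r = atomic_load_explicit(&rb->readPos,  memory_order_relaxed);
    uint32_t w = atomic_load_explicit(&rb->writePos, memory_order_acquire);
    uint32_t avail = w - r;
    uint32_t n = frames < avail ? frames : avail;
    for (uint32_t i = 0; i < n; ++i)
        dst[i] = rb->samples[(r + i) & (RING_FRAMES - 1)];
    memset(dst + n, 0, (frames - n) * sizeof(float));   /* audible gap on underrun */
    atomic_store_explicit(&rb->readPos, r + n, memory_order_release);
}
```

The render callback only ever calls ring_read, so it never locks or allocates; whether the worker keeps the buffer full enough in time is exactly the "how long can each task take" question discussed below.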

Long story short, the most crucial part is to design the actual processing algorithm carefully, because in all of these scenarios you really need to know how long each task can take.


EDIT: My previous answer here was quite different and uneducated. I have updated the parts above so that anybody coming across this answer in the future is not guided in the wrong direction.
You can find the previous version in the history of this answer.

max
  • Thanks for your reply. I'm porting an old SGI/IRIX application of mine, and research data _sonification_ is a small but vital part of it. I'm trying my best to use as little proprietary Apple code as really needed. If you claim that splitting the task into two or four parallel "cloned" AudioUnits can do the work any better than a single one, I'll go for it and see what happens, although it complicates things **a lot**. The other strategy of pre-splitting and thread-farming also makes me feel a bit insecure, since I'm not sure how such a "push model" works together with the callback's "pull model". – user3078414 Jan 19 '15 at 19:51
  • I shall accept this answer, having successfully completed my own research, because it was helpful in pointing me towards the most valid research directions. If someone else runs into this problem, there is a decently documented series of my posts and discussions about learning to solve this kind of problem. Thanks! – user3078414 Feb 11 '16 at 12:56
  • Using GCD inside CoreAudio is not advised, especially from the render thread. This answer should not be the accepted answer because of this. When working in a real-time context you have to be in complete control of the code. You can't call anything that may block, and GCD does not give you that guarantee because it can allocate memory. Allocating memory is a potentially blocking operation. – Aran Mulholland Feb 05 '17 at 01:16
  • See this article for an excellent rundown of real-time audio programming: http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing – Aran Mulholland Feb 05 '17 at 01:16

I am not completely sure, but I do not think this is possible. Of course, you can use Apple's Accelerate.framework, which makes use of the available resources. But, as the Apple documentation states:

"A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously." (Apple documentation)

At the user level, you are not able to create such threads yourself.
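To make that concrete, the entire interface you get is a single callback that you register once and that CoreAudio then invokes on its own real-time thread. A minimal sketch follows; the setup of the output audio unit itself is omitted, and MyRenderProc and attachCallback are illustrative names, not anything from this answer.

```c
#include <AudioToolbox/AudioToolbox.h>

/* Invoked by CoreAudio on its real-time audio thread. */
static OSStatus MyRenderProc(void                       *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp       *inTimeStamp,
                             UInt32                      inBusNumber,
                             UInt32                      inNumberFrames,
                             AudioBufferList            *ioData)
{
    /* Fill ioData->mBuffers[...] with inNumberFrames frames here,
       without blocking, locking or allocating. */
    return noErr;
}

/* Attach the callback to an already configured output audio unit. */
static OSStatus attachCallback(AudioUnit outputUnit)
{
    AURenderCallbackStruct cb = { .inputProc = MyRenderProc,
                                  .inputProcRefCon = NULL };
    return AudioUnitSetProperty(outputUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input, 0,
                                &cb, sizeof(cb));
}
```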

By the way, these slides by Godfrey van der Linden may be interesting to you.

Michael Dorner
  • Thanks for your comment. I'm afraid vectorization and matrix calculus cannot apply to the problem I'm dealing with. Thanks for the _van der Linden_ session link. – user3078414 Jan 19 '15 at 19:30
  • As a user you can create a real-time priority thread using pthreads. You can create a thread that has the same priority as the CoreAudio render thread. – Aran Mulholland Feb 05 '17 at 01:13
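
For anyone following up on that last comment: on macOS this is usually done by promoting a pthread to Mach's time-constraint (real-time) scheduling class. The sketch below is only a rough illustration; the period/computation/constraint figures are placeholder assumptions and would have to be derived from the actual buffer duration.

```c
#include <pthread.h>
#include <stdint.h>
#include <mach/mach.h>
#include <mach/mach_time.h>

static void *realtime_worker(void *arg)
{
    (void)arg;

    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    const double nsToAbs = (double)tb.denom / (double)tb.numer;  /* ns -> mach ticks */

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(2.9e6  * nsToAbs);  /* ~128 frames @ 44.1 kHz      */
    policy.computation = (uint32_t)(0.75e6 * nsToAbs);  /* expected CPU time per cycle */
    policy.constraint  = (uint32_t)(2.9e6  * nsToAbs);  /* deadline within the period  */
    policy.preemptible = 1;

    thread_policy_set(pthread_mach_thread_np(pthread_self()),
                      THREAD_TIME_CONSTRAINT_POLICY,
                      (thread_policy_t)&policy,
                      THREAD_TIME_CONSTRAINT_POLICY_COUNT);

    /* ... real-time-safe DSP loop feeding the render callback goes here ... */
    return NULL;
}
```

As noted earlier in the thread, everything that runs inside such a thread still has to avoid locks, memory allocation and other potentially blocking calls.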