
I'm doing some development on the custom sampler and audio engine for my (iPhone 4+) app, particularly adding recording and send effect features. I'm stuck trying to decide whether to go down the route of having everything handled in one big RemoteIO render callback or breaking it up into separate AU nodes.

Might anyone know whether a more complex AUGraph with multiple render callbacks and a mixer AU to sum it all imposes significant overhead compared to doing it all in a single, well-tuned render callback? Is there any other reason why one would want to go one way or the other (such as perhaps the AU boundaries clipping/truncating the audio)?

Performance is a big issue, and I'd probably just go with the single render callback, but I don't want to miss out on the ever-growing list of effects AUs available.
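For reference, here's roughly what I mean by the single-callback route: one RemoteIO unit whose render callback produces everything itself. This is only a minimal sketch in C with error checking omitted, and the callback body is a placeholder for the actual sampler/effects mixing:

    #include <AudioToolbox/AudioToolbox.h>

    // Single render callback: all voices, sends and effects are summed
    // straight into ioData here, with no other AUs in the chain.
    static OSStatus RenderAll(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
    {
        // ... render inNumberFrames frames into ioData->mBuffers ...
        return noErr;
    }

    static AudioUnit CreateRemoteIO(void)
    {
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioUnit ioUnit;
        AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &ioUnit);

        AURenderCallbackStruct cb = { .inputProc = RenderAll };
        AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));
        AudioUnitInitialize(ioUnit);
        return ioUnit;   // start later with AudioOutputUnitStart()
    }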

Hari Honor

1 Answer


Generally speaking, I'd prefer the one render callback, but if you plan on reordering the effect chain, it might be easier to go with an AUGraph.
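To make the AUGraph option concrete, the usual shape is something like the sketch below (error checking omitted): each source feeds its own render callback into a bus of a multichannel mixer, and the mixer drives a single RemoteIO node. Reordering effects then means reconnecting nodes rather than rewriting your callback.

    #include <AudioToolbox/AudioToolbox.h>

    static AUGraph BuildGraph(UInt32 numInputs, AURenderCallbackStruct *callbacks)
    {
        AUGraph graph;
        NewAUGraph(&graph);

        AudioComponentDescription io = {
            .componentType         = kAudioUnitType_Output,
            .componentSubType      = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };
        AudioComponentDescription mixer = {
            .componentType         = kAudioUnitType_Mixer,
            .componentSubType      = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple,
        };

        AUNode ioNode, mixerNode;
        AUGraphAddNode(graph, &io, &ioNode);
        AUGraphAddNode(graph, &mixer, &mixerNode);
        AUGraphOpen(graph);

        // Give the mixer one input bus per source, each fed by a callback.
        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &numInputs, sizeof(numInputs));
        for (UInt32 bus = 0; bus < numInputs; bus++)
            AUGraphSetNodeInputCallback(graph, mixerNode, bus, &callbacks[bus]);

        // Mixer output -> RemoteIO input; effect AUs would be inserted
        // between these two nodes (or per-bus before the mixer).
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        AUGraphInitialize(graph);
        return graph;   // call AUGraphStart() when ready
    }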

Also, rather than dealing with AudioUnits directly, you should check out Novocaine, which does all of the nasty AU interfacing for you and gives you a clean block-based callback instead.

Nik Reiman
  • What are your reasons for preferring one render callback? Thx for the Novocaine heads up, though I'm in far too deep to use it in my current project! – Hari Honor Jul 12 '12 at 10:59
  • My reason for the single callback is that AU programming is a pain in the ass, and doing a single callback allows you to take the routing into your own hands. – Nik Reiman Jul 12 '12 at 15:31
  • Ha! Great answer for the "any other reason" part. I'm interested as well though in any overhead imposed by the AU graph or interconnections. – Hari Honor Jul 17 '12 at 17:33
  • @HariKaramSingh I can't really speak to that, but it would be pretty easy to test in Instruments by making a simple gain effect, then making like 20x of them and putting them in a manual chain vs. AUGraph. – Nik Reiman Jul 19 '12 at 07:03
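For the manual-chain side of that test, the "effect" can be as trivial as a per-sample gain loop repeated once per stage inside the single callback, with the AUGraph side using the same number of gain AU nodes. A hypothetical sketch (the stage count and function names are arbitrary):

    #include <AudioToolbox/AudioToolbox.h>

    // Manual-chain side of the comparison: kNumStages trivial gain stages
    // applied back to back inside one render callback. Profile this in
    // Instruments against an AUGraph holding the same number of gain AUs
    // to see the per-node overhead.
    enum { kNumStages = 20 };

    static void ApplyGainChain(float *samples, UInt32 numFrames,
                               const float gains[kNumStages])
    {
        for (int stage = 0; stage < kNumStages; stage++)
            for (UInt32 i = 0; i < numFrames; i++)
                samples[i] *= gains[stage];
    }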