I'm doing some development on the custom sampler and audio engine for my (iPhone 4+) app, particularly adding recording and send effect features. I'm stuck trying to decide whether to go down the route of having everything handled in one big RemoteIO render callback or breaking it up into separate AU nodes.
Might anyone know whether a more complex AUGraph with multiple render sources and a mixer AU to sum them all imposes significant overhead compared to doing it all in a single, well-tuned render callback? Is there any other reason to go one way or the other (such as, perhaps, the AU boundaries clipping or truncating the audio)?
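For reference, the multi-node route I'm weighing looks roughly like this: one render callback per mixer input bus, with the multichannel mixer summing into a single RemoteIO unit. This is just a minimal sketch, not working code; error checking, stream format setup, and the actual sampler DSP are omitted, and `samplerRender`/`buildGraph` are placeholder names of mine:

```c
// Sketch: per-voice callbacks -> MultiChannelMixer -> RemoteIO (iOS AudioToolbox).
#include <AudioToolbox/AudioToolbox.h>

static OSStatus samplerRender(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData) {
    // Fill ioData with this voice's samples here.
    return noErr;
}

void buildGraph(void) {
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription ioDesc = {
        .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponentDescription mixDesc = {
        .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AUNode ioNode, mixNode;
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphAddNode(graph, &mixDesc, &mixNode);
    AUGraphOpen(graph);

    // One render callback per mixer input bus, instead of one monolithic
    // callback doing all the summing itself.
    AURenderCallbackStruct cb = { .inputProc = samplerRender,
                                  .inputProcRefCon = NULL };
    AUGraphSetNodeInputCallback(graph, mixNode, 0, &cb);

    // Mixer output feeds the single RemoteIO output element.
    AUGraphConnectNodeInput(graph, mixNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
}
```

The appeal of this layout is that an effect AU could later be spliced between the mixer and the I/O node with another `AUGraphConnectNodeInput` call, without touching the sampler callbacks.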
Performance is a big issue, and I'd probably just go with the single render callback, but I don't want to miss out on the ever-growing list of effect AUs available.