I am starting to get into Core Audio, and yesterday I was thinking about performance at a high level.
Let's say I have one oscillator and one filter. Is there any performance difference between implementing these as two separate audio units and connecting them in the engine, versus combining them into a single unit? On the surface one might think that multiple audio units could run in parallel, giving better performance on multi-core systems, but my understanding is that the realtime audio thread is just that: one thread.
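To make the two-unit setup concrete, here is roughly what I mean, sketched with AVAudioEngine (assuming a sine oscillator in an AVAudioSourceNode feeding an AVAudioUnitEQ configured as a low-pass filter; the specific waveform and cutoff values are just placeholders):

```swift
import AVFoundation

let engine = AVAudioEngine()
let sampleRate = 44_100.0
var phase = 0.0

// Unit 1: a sine oscillator rendered in a source node's realtime callback.
let oscillator = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2.0 * Double.pi * 440.0 / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = Float(sin(phase))
        phase += increment
        if phase >= 2.0 * Double.pi { phase -= 2.0 * Double.pi }
        for buffer in buffers {
            UnsafeMutableBufferPointer<Float>(buffer)[frame] = sample
        }
    }
    return noErr
}

// Unit 2: a low-pass filter, here an AVAudioUnitEQ with a single band.
let filter = AVAudioUnitEQ(numberOfBands: 1)
filter.bands[0].filterType = .lowPass
filter.bands[0].frequency = 1_000
filter.bands[0].bypass = false

// Connect oscillator -> filter -> mixer in the engine's graph.
let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
engine.attach(oscillator)
engine.attach(filter)
engine.connect(oscillator, to: filter, format: format)
engine.connect(filter, to: engine.mainMixerNode, format: format)
```

The alternative would be a single node whose render callback computes the sine and applies the filter itself, in one pass.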
So does this mean that there is one thread for the whole graph, and therefore no inherent performance benefit to splitting things into multiple audio units?