I have an app where I flick the touchscreen and unleash a dot which animates across the screen, reads the pixel color under it, and converts that to audio based on some parameters. This is working great for the most part.
Currently I'm creating one audio channel per dot (iPhone AudioComponent). This works well until I get up to about 15 dots, then the audio starts getting "choppy": dropping in and out, etc.
I think if I were to mix the waveforms of all these channels together and send the result out through just one or two channels, I could get much better performance with a high number of dots. This is where I'm looking for advice.
I am assuming that for any time t, I can take ((f1(t) + f2(t)) / 2.0). Is this a typical approach to mixing audio signals? That way I can never exceed the normalized -1.0 .. 1.0 range, but I'm worried I'll get the opposite problem: quiet audio. Maybe it won't matter so much once there are a lot of dots.
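In case it helps, here's a rough sketch of what I have in mind (not my real render code), assuming normalized float samples and a made-up dot_sample() function for the per-dot signal:

    #include <stddef.h>

    /* Hypothetical per-dot oscillator: returns a normalized sample
       (-1.0 .. 1.0) for dot d at frame n. */
    float dot_sample(size_t d, size_t n);

    /* Mix all dots into a single output buffer by averaging. */
    void mix_dots(float *out, size_t frames, size_t num_dots)
    {
        for (size_t n = 0; n < frames; n++) {
            float sum = 0.0f;
            for (size_t d = 0; d < num_dots; d++) {
                sum += dot_sample(d, n);
            }
            /* Dividing by the dot count keeps the result in range,
               but it also gets quieter as the number of dots grows. */
            out[n] = sum / (float)num_dots;
        }
    }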
If someone can drop the name of any technique for this, I'll go read up on it. Or, any links would be great.