I've just started learning and experimenting with the iOS AudioUnit framework and have built a multichannel mixer with a render callback function on bus 0. In the callback I synthesize a sine tone like this:
OSStatus RenderTone(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData)
{
    const double amplitude = 0.5;

    // Pull the oscillator state out of the view controller passed as refCon.
    iPadAudioViewController *viewController = (__bridge iPadAudioViewController *)inRefCon;
    double phase = viewController->phase;
    double phase_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;

    // Mono output: write the sine wave into the first channel's buffer.
    const int channel = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        buffer[frame] = sin(phase) * amplitude;
        phase += phase_increment;
        if (phase > 2.0 * M_PI) {
            phase -= 2.0 * M_PI;
        }
    }

    // Store the phase back so the next callback continues where this one left off.
    viewController->phase = phase;
    return noErr;
}
What if I want to synthesize more sounds on a variable number of the mixer's other busses, each with its own phase, frequency, and so on? I imagine something like the sketch below, but I'm not sure it's right.
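It's only a guess: ToneState, voices, RenderToneForBus, mixerUnit, and numBusses are names I made up. The idea is to keep one state struct per bus and let inBusNumber pick the right one, since the mixer passes the bus index into the callback:

// Hypothetical per-bus oscillator state (my own struct, not part of the framework).
typedef struct {
    double phase;
    double frequency;
    double sampleRate;
} ToneState;

static ToneState *voices;   // assumed allocated elsewhere, one entry per mixer input bus

// One shared callback; inBusNumber selects the state for that input bus.
static OSStatus RenderToneForBus(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    ToneState *voice = &((ToneState *)inRefCon)[inBusNumber];
    const double amplitude = 0.5;
    double phase = voice->phase;
    double phase_increment = 2.0 * M_PI * voice->frequency / voice->sampleRate;

    Float32 *buffer = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        buffer[frame] = sin(phase) * amplitude;
        phase += phase_increment;
        if (phase > 2.0 * M_PI) {
            phase -= 2.0 * M_PI;
        }
    }
    voice->phase = phase;
    return noErr;
}

And then, when setting up the mixer, attach that same callback to every input bus, something like:

// Assuming mixerUnit is the multichannel mixer and its bus count has
// already been set via kAudioUnitProperty_ElementCount.
for (UInt32 bus = 0; bus < numBusses; bus++) {
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = RenderToneForBus;
    callbackStruct.inputProcRefCon = voices;   // shared array, indexed by bus
    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         bus,
                         &callbackStruct,
                         sizeof(callbackStruct));
}

Is that the right approach, or does each bus need its own callback function?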
I haven't yet found any information on this on Google or in the iOS documentation. I hope someone can point me in the right direction.