The example code for creating a version 3 AudioUnit demonstrates how the implementation must return a block that performs the render processing. The block both pulls samples from the previous AudioUnit in the chain via pullInputBlock and fills the output buffers with the processed samples. It must also supply its own output buffers if the unit further down the chain did not provide any. Here is an excerpt of code from an AudioUnit subclass:
- (AUInternalRenderBlock)internalRenderBlock {
    /*
        Capture in locals to avoid ObjC member lookups.
    */
    // Specify captured objects are mutable.
    __block FilterDSPKernel *state = &_kernel;
    __block BufferedInputBus *input = &_inputBus;

    return Block_copy(^AUAudioUnitStatus(
            AudioUnitRenderActionFlags *actionFlags,
            const AudioTimeStamp       *timestamp,
            AVAudioFrameCount           frameCount,
            NSInteger                   outputBusNumber,
            AudioBufferList            *outputData,
            const AURenderEvent        *realtimeEventListHead,
            AURenderPullInputBlock      pullInputBlock) {
        ...
    });
}
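For reference, the elided body in the linked sample does three things: pull samples from the upstream unit, substitute the unit's own buffers wherever the host passed null output pointers, and hand both buffer lists to the DSP kernel. A rough sketch of that body, using the pullInput, setBuffers, and processWithEvents helpers that the sample defines on its BufferedInputBus and FilterDSPKernel classes (they are not system API):

    // Sketch of the elided render body, following the linked FilterDemo sample.
    AudioUnitRenderActionFlags pullFlags = 0;
    AUAudioUnitStatus err = input->pullInput(&pullFlags, timestamp, frameCount, 0, pullInputBlock);
    if (err != noErr) { return err; }

    AudioBufferList *inAudioBufferList  = input->mutableAudioBufferList;
    AudioBufferList *outAudioBufferList = outputData;

    // If the unit downstream did not supply output buffers, process in place
    // by pointing the output list at the input buffers.
    if (outAudioBufferList->mBuffers[0].mData == nullptr) {
        for (UInt32 i = 0; i < outAudioBufferList->mNumberBuffers; ++i) {
            outAudioBufferList->mBuffers[i].mData = inAudioBufferList->mBuffers[i].mData;
        }
    }

    state->setBuffers(inAudioBufferList, outAudioBufferList);
    state->processWithEvents(timestamp, frameCount, realtimeEventListHead);
    return noErr;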
This is fine if the processing does not require knowing frameCount before the call to this block, but many applications do need frameCount ahead of time in order to allocate memory, prepare processing parameters, etc. One way around this would be to accumulate past buffers of output, emitting only frameCount samples on each call to the block, but that only works if there is a known minimum frameCount: the processing must be initialized with a size greater than this frame count in order to work. Is there a way to specify or obtain a minimum value for frameCount, or to force it to be a specific value?
The example code is taken from: https://github.com/WildDylan/appleSample/blob/master/AudioUnitV3ExampleABasicAudioUnitExtensionandHostImplementation/FilterDemoFramework/FilterDemo.mm
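For what it's worth, the accumulation workaround described above could look something like the following. This is only an illustration under stated assumptions: kProcessSize, Accumulator, and processFixedBlock are hypothetical names (none come from the sample), kProcessSize is assumed to be at least as large as whatever the minimum frameCount turns out to be, and the scheme adds kProcessSize frames of latency because the output FIFO is primed with silence:

    #include <vector>
    #include <algorithm>
    #import <AVFoundation/AVFoundation.h>

    // Hypothetical fixed block size for the size-dependent processing; assumed
    // to be >= the (unknown) minimum frameCount. Not from the sample.
    static const AVAudioFrameCount kProcessSize = 4096;

    // Placeholder for the actual size-dependent DSP.
    static void processFixedBlock(float *samples, AVAudioFrameCount n) { /* DSP here */ }

    struct Accumulator {
        std::vector<float> inputFIFO;
        // Prime the output with one block of silence so every render call can
        // emit exactly frameCount samples; this costs kProcessSize frames of latency.
        std::vector<float> outputFIFO = std::vector<float>(kProcessSize, 0.0f);

        void render(const float *in, float *out, AVAudioFrameCount frameCount) {
            // 1. Stash whatever the host delivered, regardless of its size.
            inputFIFO.insert(inputFIFO.end(), in, in + frameCount);

            // 2. Run the DSP only on complete fixed-size blocks.
            while (inputFIFO.size() >= kProcessSize) {
                processFixedBlock(inputFIFO.data(), kProcessSize);
                outputFIFO.insert(outputFIFO.end(),
                                  inputFIFO.begin(), inputFIFO.begin() + kProcessSize);
                inputFIFO.erase(inputFIFO.begin(), inputFIFO.begin() + kProcessSize);
            }

            // 3. Emit exactly frameCount samples per call.
            std::copy(outputFIFO.begin(), outputFIFO.begin() + frameCount, out);
            outputFIFO.erase(outputFIFO.begin(), outputFIFO.begin() + frameCount);
        }
    };

A real implementation would replace the std::vector shuffling with a lock-free ring buffer, since allocating and erasing on the render thread is not real-time safe.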