
I currently have an Xcode project, written in Swift, that is built around an AVAudioEngine.

At the head of the engine is an AVAudioPlayerNode used to schedule some LPCM audio buffers.

In order for an AUv3 to process the audio, it needs to override the following properties and methods (info from here):

  1. Override the inputBusses getter method to return the app extension’s audio input connection points.

  2. Override the outputBusses getter method to return the app extension’s audio output connection points.

  3. Override the internalRenderBlock getter method to return the block that implements the app extension’s audio rendering loop.

One must also override the allocateRenderResourcesAndReturnError: method, which the host app calls before it starts to render audio, and the deallocateRenderResources method, which the host app calls after it has finished rendering audio.

Within each override, one must call the AUAudioUnit superclass implementation.
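
My understanding of the required shape, as a minimal Objective-C sketch (the class name, sample rate, and single-bus setup below are placeholder assumptions, not my actual project; error handling is elided):

```objc
// MyAudioUnit.m — hypothetical AUAudioUnit subclass sketch
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>

@interface MyAudioUnit : AUAudioUnit
@end

@implementation MyAudioUnit {
    AUAudioUnitBusArray *_inputBusArray;   // wraps the unit's input bus(es)
    AUAudioUnitBusArray *_outputBusArray;  // wraps the unit's output bus(es)
}

- (instancetype)initWithComponentDescription:(AudioComponentDescription)desc
                                     options:(AudioComponentInstantiationOptions)options
                                       error:(NSError **)outError {
    self = [super initWithComponentDescription:desc options:options error:outError];
    if (self == nil) { return nil; }

    AVAudioFormat *format =
        [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100.0 channels:2];
    AUAudioUnitBus *inputBus  = [[AUAudioUnitBus alloc] initWithFormat:format error:nil];
    AUAudioUnitBus *outputBus = [[AUAudioUnitBus alloc] initWithFormat:format error:nil];

    _inputBusArray  = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self
                                                             busType:AUAudioUnitBusTypeInput
                                                              busses:@[inputBus]];
    _outputBusArray = [[AUAudioUnitBusArray alloc] initWithAudioUnit:self
                                                             busType:AUAudioUnitBusTypeOutput
                                                              busses:@[outputBus]];
    return self;
}

// 1. and 2. Return the connection points the host sees.
- (AUAudioUnitBusArray *)inputBusses  { return _inputBusArray; }
- (AUAudioUnitBusArray *)outputBusses { return _outputBusArray; }

// Called by the host before rendering starts; call super first.
- (BOOL)allocateRenderResourcesAndReturnError:(NSError **)outError {
    if (![super allocateRenderResourcesAndReturnError:outError]) { return NO; }
    // Allocate any buffers/DSP state the render block will need here.
    return YES;
}

// Called by the host after rendering stops; call super last.
- (void)deallocateRenderResources {
    // Free whatever allocateRenderResourcesAndReturnError: set up.
    [super deallocateRenderResources];
}

// 3. The real-time render loop the host drives.
- (AUInternalRenderBlock)internalRenderBlock {
    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                              const AudioTimeStamp *timestamp,
                              AUAudioFrameCount frameCount,
                              NSInteger outputBusNumber,
                              AudioBufferList *outputData,
                              const AURenderEvent *realtimeEventListHead,
                              AURenderPullInputBlock pullInputBlock) {
        // Fill outputData with frameCount frames here
        // (no locks, no Objective-C/Swift allocation on this thread).
        return noErr;
    };
}

@end
```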

Given a working AVAudioEngine, where or how does one connect the inputBusses, outputBusses, and buffers to the internalRenderBlock in the AUv3?

I have a working prototype AUv3 that can load in a host like GarageBand.
What I am trying to do is pass the audio buffers from the AVAudioEngine into the internalRenderBlock of the AUv3, in order to complete the audio pipeline from the engine, through the AUv3, to its host.


1 Answer


I have some example code that generates sample data inside an AUv3 internalRenderBlock here:

github.com/hotpaw2/auv3test5

It's written in the C subset of Objective-C, since a 2017 WWDC session on audio units advised against using Swift inside the real-time audio context.
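
The general pattern looks something like the following. This is a hedged sketch of that approach, not the code from the repo; the _phase ivar and the fixed sample rate are assumptions, and the block body is plain C (no locks, no Objective-C messages, no allocation):

```objc
// Hypothetical body for the internalRenderBlock override: a sine-wave generator.
- (AUInternalRenderBlock)internalRenderBlock {
    // Capture a raw pointer to C state rather than capturing self.
    float *phasePtr = &_phase;            // hypothetical float ivar
    const double sampleRate = 44100.0;    // assumed; normally taken from the output bus format
    const double freq = 440.0;            // test tone

    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                              const AudioTimeStamp *timestamp,
                              AUAudioFrameCount frameCount,
                              NSInteger outputBusNumber,
                              AudioBufferList *outputData,
                              const AURenderEvent *realtimeEventListHead,
                              AURenderPullInputBlock pullInputBlock) {
        const float phaseInc = (float)(2.0 * M_PI * freq / sampleRate);
        const float startPhase = *phasePtr;
        float endPhase = startPhase;

        // Write the same sine wave into every output channel buffer.
        for (UInt32 b = 0; b < outputData->mNumberBuffers; b++) {
            float *out = (float *)outputData->mBuffers[b].mData;
            float p = startPhase;
            for (AUAudioFrameCount i = 0; i < frameCount; i++) {
                out[i] = 0.25f * sinf(p);
                p += phaseInc;
                if (p > (float)(2.0 * M_PI)) { p -= (float)(2.0 * M_PI); }
            }
            endPhase = p;
        }
        *phasePtr = endPhase;   // persist oscillator phase across render calls
        return noErr;
    };
}
```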

hotpaw2
  • Thanks @hotpaw2. I ran across your GitHub project recently; it's one of the best, if not the best, AUv3 example projects out there, and I keep it open to reference how it all works. Unlike your internalRenderBlock, though, I am not generating any DSP output. I have a working AVAudioEngine (as you do) but want to 'redirect' the audio it generates, which normally heads from the outputNode to the speaker, into the AUv3 instead (if that makes sense). In other words, I think I may have to put the buffers into some sort of block that the AUv3 can process, but I'm not sure how. I wonder about installTapOnBus()? – cit Sep 17 '19 at 18:36
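
For reference, a minimal sketch of the installTapOnBus: approach the comment mentions (the engine setup is assumed to exist elsewhere; note that a tap delivers buffers asynchronously on a non-real-time thread, so bridging into an AUv3 render block would still require something like a lock-free ring buffer):

```objc
#import <AVFoundation/AVFoundation.h>

// Hedged sketch: capture the buffers flowing through an engine's main mixer.
static void InstallOutputTap(AVAudioEngine *engine) {
    AVAudioMixerNode *mixer = engine.mainMixerNode;
    AVAudioFormat *format = [mixer outputFormatForBus:0];

    [mixer installTapOnBus:0
                bufferSize:4096
                    format:format
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        // Not real-time safe here: copy the buffer into a lock-free FIFO
        // for the AUv3 render block to pull from on the render thread.
        NSLog(@"tap got %u frames", (unsigned)buffer.frameLength);
    }];
}
```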