
I have been exploring positional audio in Scene Kit / ARKit and would like to see if it’s possible to use AudioKit in this context to benefit from its higher level tools for analysis, sound generation, effects, etc.

For some background, a SCNView comes with an AVAudioEnvironmentNode, an AVAudioEngine, and an audioListener (SCNNode). These properties come pre-initialized, with the environment node already connected to the engine. A positional audio source is added to the scene via a SCNAudioPlayer, which can be initialized with an AVAudioNode - the AVAudioNode must be connected to the environment node and have a mono output.

The SCNAudioPlayer is then added to a SCNNode in the scene and automatically takes care of modifying the output of the AVAudioNode according to its position in the scene as well as the position and orientation of the audioListener node.
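The setup described above can be sketched roughly as follows. This is a hedged sketch, not tested code; the helper name `addPositionalSource` is my own, but the SCNView properties (`audioEngine`, `audioEnvironmentNode`, `audioListener`) and the SCNAudioPlayer/SCNNode APIs are the ones SceneKit provides:

```swift
import SceneKit
import AVFoundation

// Sketch: attach a positional source to a node in the scene.
// `sourceNode` should be an AVAudioNode producing mono output.
func addPositionalSource(_ sourceNode: AVAudioNode,
                         to targetNode: SCNNode,
                         in scnView: SCNView) {
    let engine = scnView.audioEngine
    let environment = scnView.audioEnvironmentNode

    engine.attach(sourceNode)

    // A mono format is required for spatialization by the environment node.
    let mono = AVAudioFormat(
        standardFormatWithSampleRate:
            engine.outputNode.outputFormat(forBus: 0).sampleRate,
        channels: 1)
    engine.connect(sourceNode, to: environment, format: mono)

    // SCNAudioPlayer wraps the AVAudioNode; SceneKit then spatializes it
    // according to targetNode's position and the audioListener's transform.
    let player = SCNAudioPlayer(avAudioNode: sourceNode)
    targetNode.addAudioPlayer(player)
}
```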

My hope is that it will be possible to initialize AudioKit with the AVAudioEngine property of a SCNView, configure the SCNView's environment node within the engine's node graph, use the AVAudioNode property of AKNodes to initialize SCNAudioPlayers, and ensure all sources properly connect to the environment node. I've already begun modifying AudioKit source code, but I'm having trouble figuring out which classes I will need to adapt and how to integrate the environment node into the AudioKit pipeline. In particular, I'm having trouble understanding connectionPoints and the outputNode property.

Does anyone believe this might not be possible given how AudioKit is structured, or have any pointers on the approach?

I will of course be happy to share any of my findings.

cgmaier
    This sounds like an admirable goal. At one point, there was a 3D audio example included with AK. What you are proposing sounds like it would be a valuable addition to AK. Would you like an invitation to the AK Slack group? If so, my email is matthew@audiokitpro.com – analogcode Nov 02 '17 at 20:33
  • Awesome, thanks! I'll shoot you an email. – cgmaier Nov 03 '17 at 13:37

1 Answer


AudioKit creates its own instance of AVAudioEngine at line 38 of AudioKit.swift:

https://github.com/AudioKit/AudioKit/blob/master/AudioKit/Common/Internals/AudioKit.swift#L38

but it is declared open, so it should be possible to overwrite it with the SCNView's audio engine. I don't see anything that would prevent it.
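A minimal sketch of what that swap might look like, assuming an AudioKit 4-era API where `AudioKit.engine` is an open static var (as at the linked line); this is untested and the ordering caveat is my own reasoning, not something the source confirms:

```swift
import AudioKit
import SceneKit

// Sketch: point AudioKit at the SCNView's engine instead of letting it
// create its own. This should happen before any AKNodes are instantiated,
// since nodes attach to whichever engine is current at creation time.
func useSceneKitEngine(for scnView: SCNView) {
    AudioKit.engine = scnView.audioEngine
    // AKNodes created after this point would then live in the same graph
    // as scnView.audioEnvironmentNode; each one's avAudioNode still needs
    // a mono connection to the environment node before being wrapped in
    // an SCNAudioPlayer.
}
```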

Aurelius Prochazka
  • I have a bit of an update - I've concluded that the audio engine that comes with a scene view may not be suitable for AudioKit at the moment, unfortunately. The scene view audioEngine's inputNode property (AVAudioInputNode) often comes with a sample rate of 0, requiring a second engine to facilitate recording from the mic. This, as well as other idiosyncrasies I've found regarding audio session / engine setup, leads me to believe the full feature set of AVAudioEngine may not be reliable in the context of Scene Kit / ARKit quite yet. – cgmaier Dec 01 '17 at 15:55
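The zero-sample-rate problem in that update can be checked for defensively before relying on the scene view's engine for mic input. A hedged sketch (the fallback strategy is my suggestion, not from the source):

```swift
import SceneKit
import AVFoundation

// Sketch: verify the scene view engine's input node is usable before
// routing microphone input through it.
func micInputIsUsable(in scnView: SCNView) -> Bool {
    let input = scnView.audioEngine.inputNode
    let format = input.inputFormat(forBus: 0)
    // A sample rate of 0 indicates the input hardware isn't configured;
    // in that case, fall back to a separate AVAudioEngine for recording.
    return format.sampleRate > 0 && format.channelCount > 0
}
```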