I have written a sample project in Swift to try out the relatively new Core Audio V3 API. Everything seems to work when it comes to creating a custom Audio Unit and loading it in process, but the actual audio rendering isn't going so well. I've often read that rendering code needs to be written in C or C++, but I've also heard that Swift is fast and thought I could write some minimal audio rendering code in it.
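For context, the setup side looks roughly like the sketch below. The component codes and the MyAudioUnit class name are placeholders for my project's actual values, so take this as an illustration rather than the exact code:

import AVFoundation
import AudioToolbox

// Placeholder component description -- the real project uses its own codes.
let description = AudioComponentDescription(componentType: kAudioUnitType_Generator,
                                            componentSubType: 0x73696e65,      // 'sine'
                                            componentManufacturer: 0x64656d6f, // 'demo'
                                            componentFlags: 0,
                                            componentFlagsMask: 0)

// MyAudioUnit is my AUAudioUnit subclass.
AUAudioUnit.registerSubclass(MyAudioUnit.self,
                             as: description,
                             name: "Demo: Sine",
                             version: 1)

// Instantiate the registered unit in-process and hook it into an engine.
AVAudioUnit.instantiate(with: description, options: .loadInProcess) { avUnit, error in
    // attach avUnit to an AVAudioEngine and connect it to the output here
}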
The rendering code
override var internalRenderBlock: AUInternalRenderBlock {
    get {
        return {
            (_ actionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
             _ timeStamp: UnsafePointer<AudioTimeStamp>,
             _ frameCount: AUAudioFrameCount,
             _ outputBusNumber: Int,
             _ bufferList: UnsafeMutablePointer<AudioBufferList>,
             _ renderEvent: UnsafePointer<AURenderEvent>?,
             _ pull: AudioToolbox.AURenderPullInputBlock?) -> AUAudioUnitStatus in

            let bufferList = bufferList.pointee
            let theBuffers = bufferList.mBuffers // only one (AudioBuffer) ??

            guard let theBufferData = theBuffers.mData?.assumingMemoryBound(to: Float.self) else {
                return 1 // come up with better error?
            }

            // write sin(phase) into successive samples, bumping the phase each time
            let amountFrames = Int(frameCount)
            for frame in 0...amountFrames / 2 {
                let frame = theBufferData.advanced(by: frame)
                frame.pointee = sin(self.phase)
                self.phase += 0.0001
            }

            return noErr
        }
    }
}
Sounds Bad
The resulting sound is not what I'd expect. My initial thought is that Swift is the wrong choice. Yet, interestingly, AudioToolbox does provide a typealias for this AUAudioUnit rendering property, which looks like:
public typealias AUInternalRenderBlock = (UnsafeMutablePointer<AudioUnitRenderActionFlags>, UnsafePointer<AudioTimeStamp>, AUAudioFrameCount, Int, UnsafeMutablePointer<AudioBufferList>, UnsafePointer<AURenderEvent>?, AudioToolbox.AURenderPullInputBlock?) -> AUAudioUnitStatus
This would lead me to believe that it is perhaps possible to write rendering code in Swift.
Observed problems
But still, there are a few things going wrong here (aside from my obvious lack of competence with Swift memory management).
A) Despite bufferList.mNumberBuffers reporting 2, theBuffers (i.e. bufferList.mBuffers) winds up not being an array but rather a value of type (AudioBuffer). I don't understand the need for the parentheses, and I can't find a second AudioBuffer.
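For what it's worth, I expected to be able to walk the channels by wrapping the pointer with UnsafeMutableAudioBufferListPointer from the CoreAudio overlay, roughly like this sketch (where bufferList is the raw UnsafeMutablePointer<AudioBufferList> parameter, before taking .pointee). I'm not sure whether that's actually the right approach here:

// Wrap the AudioBufferList pointer so each AudioBuffer can be indexed.
let ablPointer = UnsafeMutableAudioBufferListPointer(bufferList)
for buffer in ablPointer { // one AudioBuffer per channel, I'd assume
    guard let samples = buffer.mData?.assumingMemoryBound(to: Float.self) else { continue }
    for i in 0..<Int(frameCount) {
        samples[i] = 0 // fill this channel's samples here
    }
}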
B) More importantly, when I write a basic sine wave to the one AudioBuffer I can access, the resulting sound is distorted and inconsistent. Could this be Swift's fault? Is it just impossible to write any Audio Unit rendering code in Swift? Or have I made some assumptions here that are breaking my rendering somehow?
Finally
If it is simply the case that writing this part in Swift is infeasible, then I would like some resources on interoperating Swift and C for Audio Unit render blocks. Could the property that returns the closure be written in Swift, with the closure's implementation calling down into C? Or does the property have to return a C function whose prototype matches the closure's type?
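Concretely, the kind of layout I have in mind is something like the sketch below, where MyKernelRender is an imaginary C function exposed through my bridging header (not an existing API) whose prototype mirrors AUInternalRenderBlock, and the Swift property just forwards to it:

override var internalRenderBlock: AUInternalRenderBlock {
    // Ideally the block would capture only a pointer to C/C++ state rather
    // than self, to keep Swift/ObjC runtime work off the render thread.
    return { actionFlags, timeStamp, frameCount, outputBusNumber, bufferList, renderEvent, pullInputBlock in
        // MyKernelRender is hypothetical; it would do the actual DSP in C.
        return MyKernelRender(actionFlags,
                              timeStamp,
                              frameCount,
                              outputBusNumber,
                              bufferList,
                              renderEvent,
                              pullInputBlock)
    }
}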
Thanks in advance.
The rest of this project can be seen here for context.