I've been trying to grasp the idea behind AUAudioUnits and typed the sample code from Apple's WWDC 2016 session introducing the topic into Xcode. It turns out that this code was written for Swift 2, and Swift 3 introduced a new way of working with pointers (as seen here and here). I'm fairly new to programming in Swift and not familiar with some of its concepts, and I could not figure out how to perform the conversion from Swift 2 to Swift 3 manually. Even with the build setting
Use Legacy Swift Language Version = yes
I was not able to get it running.
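To illustrate what I mean by the new way of doing pointers: from what I gathered from those resources, Swift 2 let you construct a typed pointer directly from a raw one, whereas Swift 3 seems to want an explicit binding step first. This is just my (possibly incorrect) reading, tried on a plain Float array rather than on an audio buffer:

import Foundation

var samples = [Float](repeating: 0, count: 4)
samples.withUnsafeMutableBytes { rawBuffer in
    guard let base = rawBuffer.baseAddress else { return }
    // Swift 2 style (no longer compiles in Swift 3):
    // let ptr = UnsafeMutablePointer<Float>(base)
    // Swift 3 style: view the raw memory as Float before 'pointee' is available
    let ptr = base.assumingMemoryBound(to: Float.self)
    ptr.pointee = 0.1
    ptr.advanced(by: 1).pointee = -0.1
}

This toy example compiles for me, but I don't know how (or whether) it translates to the audio render code below.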
Here is the code for Swift 2, which is exactly the code from the video:
import Foundation
import AVFoundation

class SquareWaveGenerator {
    let sampleRate: Double
    let frequency: Double
    let amplitude: Float

    var counter: Double = 0.0

    init(sampleRate: Double, frequency: Double, amplitude: Float) {
        self.sampleRate = sampleRate
        self.frequency = frequency
        self.amplitude = amplitude
    }

    func render(buffer: AudioBuffer) {
        let nframes = Int(buffer.mDataByteSize) / sizeof(Float)
        var ptr = UnsafeMutablePointer<Float>(buffer.mData)

        var j = self.counter
        let cycleLength = self.sampleRate / self.frequency
        let halfCycleLength = cycleLength / 2
        let amp = self.amplitude, minusAmp = -amp

        for _ in 0..<nframes {
            if j < halfCycleLength {
                ptr.pointee = amp
            } else {
                ptr.pointee = minusAmp
            }
            ptr = ptr.successor()

            j += 1.0
            if (j > cycleLength) {
                j -= cycleLength
            }
        }
        self.counter = j
    }
}

func main() {
    // Create an AudioComponentDescription for the input/output unit we want to use.
    #if os(iOS)
        let kOutputUnitSubType = kAudioUnitSubType_RemoteIO
    #else
        let kOutputUnitSubType = kAudioUnitSubType_HALOutput
    #endif

    let ioUnitDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kOutputUnitSubType,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    let ioUnit = try! AUAudioUnit(componentDescription: ioUnitDesc, options: AudioComponentInstantiationOptions())

    /*
     Set things up to render at the same sample rate as the hardware,
     up to 2 channels. Note that the hardware format may not be a standard
     format, so we make a separate render format with the same sample rate
     and the desired channel count.
     */
    let hardwareFormat = ioUnit.outputBusses[0].format
    let renderFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: min(2, hardwareFormat.channelCount))
    try! ioUnit.inputBusses[0].setFormat(renderFormat)

    // Create square wave generators.
    let generatorLeft = SquareWaveGenerator(sampleRate: renderFormat.sampleRate, frequency: 440.0, amplitude: 0.1)
    let generatorRight = SquareWaveGenerator(sampleRate: renderFormat.sampleRate, frequency: 440.0, amplitude: 0.1)

    // Install a block which will be called to render.
    ioUnit.outputProvider = { (actionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>, timestamp: UnsafePointer<AudioTimeStamp>, frameCount: AUAudioFrameCount, busIndex: Int, rawBufferList: UnsafeMutablePointer<AudioBufferList>) -> AUAudioUnitStatus in
        let bufferList = UnsafeMutableAudioBufferListPointer(rawBufferList)
        if bufferList.count > 0 {
            generatorLeft.render(bufferList[0])
            if bufferList.count > 1 {
                generatorRight.render(bufferList[1])
            }
        }
        return noErr
    }

    // Allocate render resources, then start the audio hardware.
    try! ioUnit.allocateRenderResources()
    try! ioUnit.startHardware()

    sleep(3)

    ioUnit.stopHardware()
}

main()
This code:
ptr.pointee = amp
[...]
ptr.pointee = minusAmp
Throws the following error:
Value of type 'UnsafeMutablePointer' has no member 'pointee'
As I was unable to resolve this issue, I tried to convert the code to Swift 3 manually, hoping that would fix it. Here it is:
import Foundation
import AVFoundation

class SquareWaveGenerator {
    let sampleRate: Double
    let frequency: Double
    let amplitude: Float

    var counter: Double = 0.0

    init(sampleRate: Double, frequency: Double, amplitude: Float) {
        self.sampleRate = sampleRate
        self.frequency = frequency
        self.amplitude = amplitude
    }

    func render(buffer: AudioBuffer) {
        let nframes = Int(buffer.mDataByteSize) / MemoryLayout<Float>.size
        var ptr = buffer.mData

        var j = self.counter
        let cycleLength = self.sampleRate / self.frequency
        let halfCycleLength = cycleLength / 2
        let amp = self.amplitude, minusAmp = -amp

        for _ in 0..<nframes {
            if j < halfCycleLength {
                ptr?.pointee = amp
            } else {
                ptr?.pointee = minusAmp
            }
            ptr = ptr?.advanced(by: 1)

            j += 1.0
            if (j > cycleLength) {
                j -= cycleLength
            }
        }
        self.counter = j
    }
}

func main() {
    // Create an AudioComponentDescription for the input/output unit we want to use.
    #if os(iOS)
        let kOutputUnitSubType = kAudioUnitSubType_RemoteIO
    #else
        let kOutputUnitSubType = kAudioUnitSubType_HALOutput
    #endif

    let ioUnitDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kOutputUnitSubType,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    let ioUnit = try! AUAudioUnit(componentDescription: ioUnitDesc, options: AudioComponentInstantiationOptions())

    /*
     Set things up to render at the same sample rate as the hardware,
     up to 2 channels. Note that the hardware format may not be a standard
     format, so we make a separate render format with the same sample rate
     and the desired channel count.
     */
    let hardwareFormat = ioUnit.outputBusses[0].format
    let renderFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareFormat.sampleRate, channels: min(2, hardwareFormat.channelCount))
    try! ioUnit.inputBusses[0].setFormat(renderFormat)

    // Create square wave generators.
    let generatorLeft = SquareWaveGenerator(sampleRate: renderFormat.sampleRate, frequency: 440.0, amplitude: 0.1)
    let generatorRight = SquareWaveGenerator(sampleRate: renderFormat.sampleRate, frequency: 440.0, amplitude: 0.1)

    // Install a block which will be called to render.
    ioUnit.outputProvider = { (actionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>, timestamp: UnsafePointer<AudioTimeStamp>, frameCount: AUAudioFrameCount, busIndex: Int, rawBufferList: UnsafeMutablePointer<AudioBufferList>) -> AUAudioUnitStatus in
        let bufferList = UnsafeMutableAudioBufferListPointer(rawBufferList)
        if bufferList.count > 0 {
            generatorLeft.render(buffer: bufferList[0])
            if bufferList.count > 1 {
                generatorRight.render(buffer: bufferList[1])
            }
        }
        return noErr
    }

    // Allocate render resources, then start the audio hardware.
    try! ioUnit.allocateRenderResources()
    try! ioUnit.startHardware()

    sleep(3)

    ioUnit.stopHardware()
}

main()
Here I again run into the error from above:
Value of type 'UnsafeMutablePointer' has no member 'pointee'
In the end, I figured that something like
ptr?.storeBytes(of: T, as: T.Type)
should be able to replace the "pointee" construction. If I understood it correctly, "T" is the value I'd like to store at the location the pointer points to; in my case that would be "amp", which is of type Float.
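For what it's worth, my understanding of how that call is supposed to look in isolation is something like the following toy example on memory I allocate myself (the names "raw" and "readBack" are just placeholders of mine, and I'm not at all sure this is how it's meant to be used on buffer.mData):

import Foundation

// Allocate raw memory that is properly aligned for a Float.
let raw = UnsafeMutableRawPointer.allocate(bytes: MemoryLayout<Float>.size,
                                           alignedTo: MemoryLayout<Float>.alignment)
let amp: Float = 0.1
// 'of:' takes the value to store, 'as:' takes its metatype (Float.self).
raw.storeBytes(of: amp, as: Float.self)
// Reading the value back with the matching type:
let readBack = raw.load(as: Float.self)
print(readBack)
raw.deallocate(bytes: MemoryLayout<Float>.size,
               alignedTo: MemoryLayout<Float>.alignment)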
But no matter what I did, I could not get the code to run. It just would not accept anything like
ptr?.storeBytes(of: amp, as: Float())
throwing
Cannot convert value of type 'Float' to expected argument type 'T.Type'
or
ptr?.storeBytes(of: amp, as: Float.self)
which no longer throws a compile-time error and builds properly, but crashes at runtime with the lldb error message
fatal error: storeBytes to misaligned raw pointer
In essence, I no longer have any idea what I am doing: I don't understand the concept of 'T.Type' in this context, and I am stuck. So I have two questions:
1) How do I solve this issue and get the code running?
2) Where can I learn more about constructions like 'T.Type', so that I can understand what they are and what they mean?