I've had to work on a low level audio solution for an application that has very specific audio requirements, including some time stretching and pitch shifting algorithms done with a c++ third party code that's beyond the scope of this question. However I'm stuck in the implementation of AudioQueueOutputCallback function. I've previously tried to implement an alternative solution using AVAudioEngine and AudioUnits but found it to be not suitable to my needs due to the dependency on the third party c++ code, which was designed to work with audio queue services.
The audio engine is written in Swift, but I've created a helper class in Obj-C because I couldn't get my head around managing the pointers in swift. At the moment the swift solution calls the Obj-C helper to do all the audio processing involved, like so:
///Core Audio Engine for the app.
final class AudioEngine {
//Properties go here...
//MARK: - realtime audio processing
private let audioQueueCallback: AudioQueueOutputCallback = { data, queue, buffer in
guard let data = data else {
print("audio queue no data!")
return
}
let engine = Unmanaged<AudioEngine>.fromOpaque(data).takeUnretainedValue()
guard let audioBuffer = engine.audioBuffer else {
print("no input buffer data, stopping.")
engine.stop()
return
}
guard let description = engine.audioDescription else {
print("no audio description, stopping.")
engine.stop()
return
}
engine.position = AudioQueueCallbackHelper.processAudioQueue(queue,
audioDescription: description,
audioBuffer: audioBuffer,
buffer: buffer,
position: engine.position)
}
//Other methods go here...
}
Inside the audio queue helper, I'm trying to map the audio data from a AVAudioPCMBuffer object to the output AudioQueueBufferRef, while keeping track of the position of the audio frames. But no matter what I try all I get is noise coming out. I'm obviously missing or not understanding something crucial here. If anybody has experience with low level audio code and could assist me with this I would be very grateful.
@implementation AudioQueueCallbackHelper
//MARK: - realtime audio processing
+ (UInt64)processAudioQueue:(AudioQueueRef)audioQueue
audioDescription:(AudioStreamBasicDescription)description
audioBuffer:(AVAudioPCMBuffer *)audioBuffer
buffer:(AudioQueueBufferRef)buffer
position:(UInt64)position {
const int frameCount = buffer->mAudioDataByteSize/description.mBytesPerFrame;
float **data = audioBuffer.floatChannelData;
float *bufferData = (float*)buffer->mAudioData;
for (int i=0; i < frameCount; i++) {
for (int channel=i; channel < description.mChannelsPerFrame; channel++) {
*bufferData = data[channel][i + position];
++bufferData;
}
}
//TODO: use third party library code here to do time stretching / pitch shifting algorithm
AudioQueueEnqueueBuffer(audioQueue,
buffer,
0,
NULL);
return position + frameCount;
}
@end