
I've had to work on a low-level audio solution for an application with very specific audio requirements, including time stretching and pitch shifting done with third-party C++ code that's beyond the scope of this question. However, I'm stuck on the implementation of the AudioQueueOutputCallback function. I previously tried an alternative solution using AVAudioEngine and AudioUnits, but it turned out not to suit my needs because of the dependency on the third-party C++ code, which was designed to work with Audio Queue Services.

The audio engine is written in Swift, but I've created a helper class in Obj-C because I couldn't get my head around managing the pointers in Swift. At the moment the Swift code calls the Obj-C helper to do all the audio processing involved, like so:

import AVFoundation
import AudioToolbox

///Core Audio Engine for the app.
final class AudioEngine {
    
    //Properties go here...

    //MARK: - realtime audio processing
    private let audioQueueCallback: AudioQueueOutputCallback = { data, queue, buffer in
        guard let data = data else {
            print("audio queue no data!")
            return
        }
        
        let engine = Unmanaged<AudioEngine>.fromOpaque(data).takeUnretainedValue()
        guard let audioBuffer = engine.audioBuffer else {
            print("no input buffer data, stopping.")
            engine.stop()
            return
        }
        
        guard let description = engine.audioDescription else {
            print("no audio description, stopping.")
            engine.stop()
            return
        }
        
        engine.position = AudioQueueCallbackHelper.processAudioQueue(queue,
                                                                     audioDescription: description,
                                                                     audioBuffer: audioBuffer,
                                                                     buffer: buffer,
                                                                     position: engine.position)
    }

    //Other methods go here...

}
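
For context, this is roughly how such a queue gets created and tied to the callback above. It's a minimal sketch only: the startQueue() name, buffer size and buffer count are illustrative, not from my actual code. The important detail is that self is handed to AudioQueueNewOutput as an unretained opaque pointer, which is what Unmanaged<AudioEngine>.fromOpaque unwraps in the callback.

import AudioToolbox

///Sketch of the queue setup, in the same file as AudioEngine.
extension AudioEngine {
    func startQueue() {
        guard var description = audioDescription else {
            print("no audio description, cannot start.")
            return
        }
        
        //Pass `self` unretained; the callback recovers it with Unmanaged.fromOpaque.
        let context = Unmanaged.passUnretained(self).toOpaque()
        var queue: AudioQueueRef?
        guard AudioQueueNewOutput(&description, audioQueueCallback, context,
                                  nil, nil, 0, &queue) == noErr,
              let queue = queue else { return }
        
        //Prime a few buffers by calling the callback directly, then start the queue.
        let bytesPerBuffer: UInt32 = 4096 * description.mBytesPerFrame
        for _ in 0..<3 {
            var buffer: AudioQueueBufferRef?
            if AudioQueueAllocateBuffer(queue, bytesPerBuffer, &buffer) == noErr,
               let buffer = buffer {
                buffer.pointee.mAudioDataByteSize = bytesPerBuffer
                audioQueueCallback(context, queue, buffer)
            }
        }
        AudioQueueStart(queue, nil)
    }
}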

Inside the audio queue helper, I'm trying to map the audio data from an AVAudioPCMBuffer object to the output AudioQueueBufferRef while keeping track of the position of the audio frames. But no matter what I try, all I get is noise. I'm obviously missing or not understanding something crucial here. If anybody has experience with low-level audio code and could assist me with this, I would be very grateful.

#import "AudioQueueCallbackHelper.h"
@import AVFoundation;
@import AudioToolbox;

@implementation AudioQueueCallbackHelper

//MARK: - realtime audio processing
+ (UInt64)processAudioQueue:(AudioQueueRef)audioQueue
           audioDescription:(AudioStreamBasicDescription)description
                audioBuffer:(AVAudioPCMBuffer *)audioBuffer
                     buffer:(AudioQueueBufferRef)buffer
                   position:(UInt64)position {
    const int frameCount = buffer->mAudioDataByteSize / description.mBytesPerFrame;
    float **data = audioBuffer.floatChannelData;
    
    //Interleave the per-channel source data into the queue buffer,
    //assuming the queue was created for interleaved 32-bit float samples.
    float *bufferData = (float *)buffer->mAudioData;
    for (int i = 0; i < frameCount; i++) {
        for (int channel = 0; channel < (int)description.mChannelsPerFrame; channel++) {
            *bufferData = data[channel][i + position];
            ++bufferData;
        }
    }

    //TODO: use third party library code here to do time stretching / pitch shifting algorithm

    AudioQueueEnqueueBuffer(audioQueue,
                            buffer,
                            0,
                            NULL);
    
    return position + frameCount;
}

@end
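
The copy loop assumes the queue buffer holds interleaved 32-bit float samples; to sanity-check that, the queue's description and the source buffer's format can be printed side by side. A small hypothetical Swift helper along those lines (the logFormats name is just for illustration):

import AVFoundation
import AudioToolbox

///Hypothetical debugging helper: prints the fields that have to agree between
///the queue's stream description and the AVAudioPCMBuffer being copied from.
func logFormats(queueDescription d: AudioStreamBasicDescription, source: AVAudioPCMBuffer) {
    let format = source.format
    print("queue:  \(d.mSampleRate) Hz, \(d.mChannelsPerFrame) ch, \(d.mBitsPerChannel)-bit, \(d.mBytesPerFrame) bytes/frame, flags \(d.mFormatFlags)")
    print("source: \(format.sampleRate) Hz, \(format.channelCount) ch, interleaved \(format.isInterleaved), float32 \(format.commonFormat == .pcmFormatFloat32)")
}
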
Danny Bravo
  • You're treating the `AudioQueueBuffer`s as interleaved float. Are you sure that's what their format is? What happens if the incoming `AVAudioPCMBuffer` has fewer samples than requested by the `AudioQueueBuffer`? It looks like you'll be reading off the end of the audio data memory in that case. Also I assume you've tried running the code with the third party stuff turned off. – Rhythmic Fistman Mar 15 '21 at 14:47 (a sketch addressing both points follows these comments)
  • Both valid points, thank you. I've been running the code without the third party stuff on as I wanted to get it to process audio. As you pointed out, the issue was that the description for the audio buffer had a different configuration to what I was reading from the AudioQueueBuffer. I've not yet added code to handle the end of file, but it's next on my list to do. – Danny Bravo Mar 19 '21 at 11:42
  • Should I turn the comment into an answer? – Rhythmic Fistman Mar 19 '21 at 17:03
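
To illustrate the two points raised in the comments above, here is a sketch in Swift of the copy step with both fixes applied: clamp to the frames actually left in the source buffer, and interleave the per-channel data explicitly. The fill(_:from:description:position:) helper is hypothetical, not from the question's code, and assumes the queue buffer really is interleaved 32-bit float; the caller would still enqueue the buffer afterwards, exactly as in the helper above.

import AVFoundation
import AudioToolbox

///Hypothetical Swift counterpart of processAudioQueue, assuming an
///interleaved 32-bit float queue format.
func fill(_ buffer: AudioQueueBufferRef,
          from source: AVAudioPCMBuffer,
          description: AudioStreamBasicDescription,
          position: UInt64) -> UInt64 {
    guard let channels = source.floatChannelData else { return position }
    let channelCount = Int(description.mChannelsPerFrame)
    let requested = Int(buffer.pointee.mAudioDataByteSize) / Int(description.mBytesPerFrame)
    
    //Never read past the end of the source buffer.
    let remaining = Int(source.frameLength) - Int(position)
    let frames = max(0, min(requested, remaining))
    
    //Interleave: floatChannelData holds one pointer per channel.
    let out = buffer.pointee.mAudioData.assumingMemoryBound(to: Float.self)
    for frame in 0..<frames {
        for channel in 0..<channelCount {
            out[frame * channelCount + channel] = channels[channel][Int(position) + frame]
        }
    }
    
    //Report how many valid bytes were written (the final buffer may be short).
    buffer.pointee.mAudioDataByteSize = UInt32(frames) * description.mBytesPerFrame
    return position + UInt64(frames)
}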

0 Answers