
I am trying to record and play back audio simultaneously with Swift. I need to play to the left channel and the right channel independently. I was successful recording and playing through one channel using AudioUnit, but after trying to use two buffers to control the two channels, both of them are mute. Here is how I set the format:

    var audioFormat = AudioStreamBasicDescription()
    audioFormat.mSampleRate = Double(sampleRate)
    audioFormat.mFormatID = kAudioFormatLinearPCM
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked
    audioFormat.mChannelsPerFrame = 2
    audioFormat.mFramesPerPacket = 1
    audioFormat.mBitsPerChannel = 32
    audioFormat.mBytesPerPacket = 8
    audioFormat.mReserved = 0
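
For reference, the same description with every field filled in for interleaved stereo (a sketch, assuming Float32 samples, since the callbacks below bind the data as Float):

    // Sketch only: an interleaved stereo Float32 ASBD. This assumes a
    // Float32 pipeline, matching the Float bindings used in the callbacks.
    var floatFormat = AudioStreamBasicDescription()
    floatFormat.mSampleRate       = Double(sampleRate)
    floatFormat.mFormatID         = kAudioFormatLinearPCM
    floatFormat.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked
    floatFormat.mChannelsPerFrame = 2                 // interleaved: L and R share one buffer
    floatFormat.mBitsPerChannel   = 32
    floatFormat.mBytesPerFrame    = 2 * 4             // channels * bytes per Float32 sample
    floatFormat.mFramesPerPacket  = 1
    floatFormat.mBytesPerPacket   = 2 * 4             // mBytesPerFrame * mFramesPerPacket
    floatFormat.mReserved         = 0

Note that the original description above omits mBytesPerFrame and declares signed integers, while the callbacks read Float values, so the two halves disagree about the sample layout.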

And here is my input callback:

    private let inputCallback: AURenderCallback = {(
    inRefCon,
    ioActionFlags,
    inTimeStamp,
    inBusNumber,
    inNumberFrames,
    ioData) -> OSStatus in
    let audioRAP: AudioUnitSample = Unmanaged<AudioUnitSample>.fromOpaque(inRefCon).takeUnretainedValue()
    var status = noErr
    let buf = UnsafeMutableRawPointer.allocate(bytes: Int(inNumberFrames * 4),
                                               alignedTo: MemoryLayout<Int8>.alignment)
    let bindptr = buf.bindMemory(to: Float.self,
                                 capacity: Int(inNumberFrames * 4))
    bindptr.initialize(to: 0)
    var buffer: AudioBuffer = AudioBuffer(mNumberChannels: 2,
                                          mDataByteSize: inNumberFrames * 4,
                                          mData: buf)

    memset(buffer.mData, 0, Int(buffer.mDataByteSize))
    var bufferList: AudioBufferList = AudioBufferList(mNumberBuffers: 1,
                                                      mBuffers: buffer)

    status = AudioUnitRender(audioRAP.newAudioUnit!,
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList)
    audioRAP.audioBuffers.append((bufferList.mBuffers, Int(inNumberFrames * 4)))

    return status
}

Here is my output callback:

    private let outputCallback:AURenderCallback = {
    (inRefCon,
    ioActionFlags,
    inTimeStamp,
    inBusNumber,
    inNumberFrames,
    ioData) -> OSStatus in
    let audioRAP: AudioUnitSample = Unmanaged<AudioUnitSample>.fromOpaque(inRefCon).takeUnretainedValue()
    if ioData == nil {
        return noErr
    }
    ioData!.pointee.mNumberBuffers = 2
    var bufferCount = ioData!.pointee.mNumberBuffers

    var tempBuffer = audioRAP.audioBuffers[0]

    var monoSamples = [Float]()
    let ptr1 = tempBuffer.0.mData?.assumingMemoryBound(to: Float.self)
    monoSamples.removeAll()
    monoSamples.append(contentsOf: UnsafeBufferPointer(start: ptr1, count: Int(inNumberFrames)))

    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    let bufferLeft = abl![0]
    let bufferRight = abl![1]
    let pointerLeft: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(bufferLeft)
    let pointerRight: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(bufferRight)

    for frame in 0..<inNumberFrames {
        let pointerIndex = pointerLeft.startIndex.advanced(by: Int(frame))
        pointerLeft[pointerIndex] = monoSamples[Int(frame)]
    }
    for frame in 0..<inNumberFrames {
        let pointerIndex = pointerRight.startIndex.advanced(by: Int(frame))
        pointerRight[pointerIndex] = monoSamples[Int(frame)]
    }

    tempBuffer.0.mData?.deallocate(bytes: tempBuffer.1, alignedTo: MemoryLayout<Int8>.alignment)
    audioRAP.audioBuffers.removeFirst()
    return noErr
}

Here is the declaration for the audio buffer array:

    private var audioBuffers = [(AudioBuffer, Int)]()

Did I miss something in the output or input part? Any help would be really appreciated!

Kaya Zhou
  • In the 2017 WWDC session on What's New in Core Audio (video available), an Apple engineer recommended not using Swift or Objective-C code in the audio context. Straight C code (or possibly Swift code that translates to straight C code with no memory management) is likely OK inside Audio Unit callbacks. – hotpaw2 Nov 26 '17 at 00:51
  • Yep, it seems like Core Audio does not translate to Swift very well. But I still need to find some way to do it, for various reasons. – Kaya Zhou Nov 26 '17 at 01:55

1 Answer


The first big problem is that your code is doing memory allocation inside the audio callback. Apple's documentation explicitly says that no memory management, synchronization, or even object messaging is supposed to be done inside the audio context. Inside the audio callback, you should stick to only copying audio samples to/from pre-allocated buffers. Everything else (especially buffer creation and deallocation) should be done outside the audio callback.
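
As an illustration (a minimal sketch with made-up names, not a drop-in fix): allocate the storage once when the audio unit is set up, and let the callbacks do nothing but copy samples through it.

    // Sketch: storage pre-allocated at setup time; the render callbacks only
    // copy into and out of it. maxFrames is an assumed upper bound on the
    // frames-per-slice the audio unit will ever ask for.
    final class PreallocatedBuffers {
        let maxFrames = 4096
        let channels = 2
        let samples: UnsafeMutablePointer<Float>

        init() {
            // One-time allocation, outside the audio context.
            samples = UnsafeMutablePointer<Float>.allocate(capacity: maxFrames * channels)
            samples.initialize(repeating: 0, count: maxFrames * channels)
        }

        deinit {
            samples.deinitialize(count: maxFrames * channels)
            samples.deallocate()
        }
    }

    // Inside the input callback there is then no allocation; the AudioBufferList
    // is simply aimed at the pre-allocated storage before AudioUnitRender:
    //   var buffer = AudioBuffer(mNumberChannels: 2,
    //                            mDataByteSize: inNumberFrames * 2 * 4,
    //                            mData: UnsafeMutableRawPointer(preallocated.samples))

Appending to and removing from a Swift Array, as the audioBuffers property does, can also allocate under the hood, so a fixed-size ring buffer with atomic read/write indices is the usual lock-free pattern for handing samples from the input callback to the output callback.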

hotpaw2
  • Thanks for your answer, and yes, you are right: I should not manage memory inside the audio callback. What I am trying to do: after I perform AudioUnitRender in the input callback, I can get data from bufferList. When I send it back to ioData in the output callback, I do hear the sound from my speaker. I guess ioData should have two buffers which control the left channel and the right channel, so I changed ioData!.pointee.mNumberBuffers = 2, and I found that the second buffer is not allocated (that is why I tried to manage the memory myself). I believe ioData is not delivering data to the left and right channels through two buffers. – Kaya Zhou Nov 26 '17 at 01:44
  • I still don't know how to mute the left channel or the right channel. Any hint? @hotpaw2 – Kaya Zhou Nov 27 '17 at 05:31
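
Following up on that last comment, a minimal sketch (assuming Float32 samples; ioData and inNumberFrames are the output-callback parameters above): with an interleaved stream, ioData carries one buffer whose samples alternate left, right, left, right, so muting a channel means zeroing every other sample; with a non-interleaved stream (mNumberBuffers == 2), it means zeroing that channel's whole buffer.

    // Sketch: muting one channel inside the output callback (assumes Float32).
    let abl = UnsafeMutableAudioBufferListPointer(ioData!)
    if abl.count == 1 {
        // Interleaved: one buffer, samples alternate left, right, left, right.
        let samples = abl[0].mData!.assumingMemoryBound(to: Float.self)
        for frame in 0..<Int(inNumberFrames) {
            samples[frame * 2 + 1] = 0   // zero the right channel; frame * 2 is the left
        }
    } else {
        // Non-interleaved: one buffer per channel; silence the second (right) one.
        memset(abl[1].mData, 0, Int(abl[1].mDataByteSize))
    }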