
I'm building an AUGraph, and trying to get audio from the input device via an AVCaptureAudioDataOutput delegate method.

The use of an AVCaptureSession is a consequence of the problem explained here. I successfully managed to build an audio play-through with this method via a CARingBuffer, as explained in the book Learning Core Audio. But fetching data from the CARingBuffer requires a valid sample time, and when I stop the AVCaptureSession the sample times from the AVCaptureOutput and the unit's input callback are no longer in sync. So I'm now trying to use Michael Tyson's TPCircularBuffer, which by all accounts is excellent. But even with the examples I've found, I can't get any audio out of it (or only crackling).

My graph looks like this:

AVCaptureSession -> callback -> AUConverter -> ... -> HALOutput

And here is my AVCaptureOutput delegate method:

- (void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{

CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *sampleBufferASBD = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);

if (kAudioFormatLinearPCM != sampleBufferASBD->mFormatID) {

    NSLog(@"Bad format or bogus ASBD!");
    return;

}

if ((sampleBufferASBD->mChannelsPerFrame != _audioStreamDescription.mChannelsPerFrame) || (sampleBufferASBD->mSampleRate != _audioStreamDescription.mSampleRate)) {

    _audioStreamDescription = *sampleBufferASBD;
    NSLog(@"sample input format changed");

}




CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                        NULL,
                                                        _currentInputAudioBufferList,
                                                        CAAudioBufferList::CalculateByteSize(_audioStreamDescription.mChannelsPerFrame),
                                                        kCFAllocatorSystemDefault,
                                                        kCFAllocatorSystemDefault,
                                                        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                        &_blockBufferOut);


TPCircularBufferProduceBytes(&_circularBuffer, _currentInputAudioBufferList->mBuffers[0].mData, _currentInputAudioBufferList->mBuffers[0].mDataByteSize);

}

And the render callback:

OSStatus PushCurrentInputBufferIntoAudioUnit(void *                        inRefCon,
                                             AudioUnitRenderActionFlags *   ioActionFlags,
                                             const AudioTimeStamp *         inTimeStamp,
                                             UInt32                         inBusNumber,
                                             UInt32                         inNumberFrames,
                                             AudioBufferList *              ioData)
{

ozAVHardwareInput *hardWareInput = (ozAVHardwareInput *)inRefCon;
TPCircularBuffer circularBuffer = [hardWareInput circularBuffer];

Float32 *targetBuffer = (Float32 *)ioData->mBuffers[0].mData;

int32_t availableBytes;
TPCircularBufferTail(&circularBuffer, &availableBytes);
UInt32 dataSize = ioData->mBuffers[0].mDataByteSize;

if (availableBytes > ozAudioDataSizeForSeconds(3.)) {

    // There is too much audio data to play -> clear buffer & mute output
    TPCircularBufferClear(&circularBuffer);

    for(UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

} else if (availableBytes > ozAudioDataSizeForSeconds(0.5)) {

    // SHOULD PLAY
    Float32 *cbuffer = (Float32 *)TPCircularBufferTail(&circularBuffer, &availableBytes);
    int32_t min = MIN(dataSize, availableBytes);

    memcpy(targetBuffer, cbuffer, min);
    TPCircularBufferConsume(&circularBuffer, min);
    ioData->mBuffers[0].mDataByteSize = min;

} else {

    // No data to play -> mute output
    for(UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
}

return noErr;

}

The TPCircularBuffer is fed with the AudioBufferList, but nothing comes out, or sometimes only crackling.

What am I doing wrong?

Benoît Lahoz
    If you're not wed to `TPCircularBuffer` another alternative is my `RingBuffer` class: https://github.com/sbooth/SFBAudioEngine/blob/master/RingBuffer.h – sbooth Jun 01 '14 at 21:48
  • Thank you so much @sbooth it works like a charm and is so easy to use ! – Benoît Lahoz Jun 03 '14 at 10:20

1 Answer


An audio unit render callback should always return inNumberFrames of samples. Check to see how much data your callback is returning.

hotpaw2
  • I finally used the @sbooth ring buffer and it's working well, but I'll try to change the value I pass as "number frames" to the TPCircularBuffer for testing. – Benoît Lahoz Jun 11 '14 at 08:09