
I am writing an iPhone app where I need to capture audio from the mic and stream it to a streaming server in AAC format. So I first capture the audio and then use the

AudioConverterFillComplexBuffer

method for converting the audio to AAC.
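
(For context, _converter and _aacASBD are created elsewhere; a PCM-to-AAC converter of this kind is set up roughly as in the sketch below, assuming 44.1 kHz 16-bit mono PCM input rather than my exact capture format.)

#import <AudioToolbox/AudioToolbox.h>

// Rough converter setup sketch: sample rate and channel count are assumptions.
AudioStreamBasicDescription pcmASBD = {0};
pcmASBD.mSampleRate       = 44100.0;
pcmASBD.mFormatID         = kAudioFormatLinearPCM;
pcmASBD.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
pcmASBD.mChannelsPerFrame = 1;
pcmASBD.mBitsPerChannel   = 16;
pcmASBD.mFramesPerPacket  = 1;
pcmASBD.mBytesPerFrame    = 2;    // 16-bit mono
pcmASBD.mBytesPerPacket   = 2;

AudioStreamBasicDescription aacASBD = {0};
aacASBD.mSampleRate       = 44100.0;
aacASBD.mFormatID         = kAudioFormatMPEG4AAC;
aacASBD.mChannelsPerFrame = 1;
aacASBD.mFramesPerPacket  = 1024; // AAC always encodes 1024 frames per packet

AudioConverterRef converter = NULL;
OSStatus status = AudioConverterNew(&pcmASBD, &aacASBD, &converter);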

Below is the code

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    NSArray *audioChannels = connection.audioChannels;
    if (audioChannels == nil || [audioChannels count] == 0) {
        // NSLog(@"We have Video Frame");
        [_encoder encodeFrame:sampleBuffer];
    } else {
        // NSLog(@"We have Audio Frame");
        if (hasAudio) {
            CMTime prestime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            double dPTS = (double)(prestime.value) / prestime.timescale;

            [self getAudioBufferDataFromCMSampleBufferRef:sampleBuffer];

            // Describe the output data buffer into which we receive the converted AAC data.
            AudioBufferList outputBufferList;
            outputBufferList.mNumberBuffers = 1;
            outputBufferList.mBuffers[0].mNumberChannels = _aacASBD.mChannelsPerFrame;
            outputBufferList.mBuffers[0].mDataByteSize = _aacBufferSize;
            outputBufferList.mBuffers[0].mData = _aacBuffer;

            OSStatus st = AudioConverterFillComplexBuffer(_converter, &putPcmSamplesInBufferList, (__bridge void *)self, &_numOutputPackets, &outputBufferList, NULL);

            if (0 == st) {
                [_rtsp onAudioData:_aacBuffer :outputBufferList.mBuffers[0].mDataByteSize :dPTS];
            } else {
                NSLog(@"Error converting Buffer");
                NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:st userInfo:nil];
                NSLog(@"%@", [self OSStatusToStr:st]);
                char str[20]; // large enough for a quoted 4-char code plus NUL
                FormatError(str, st);
            }

            if (_blockBuffer) // Double check that what you are releasing actually exists!
            {
                CFRelease(_blockBuffer);
            }
        }
    }
}
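
(putPcmSamplesInBufferList, the input callback passed to AudioConverterFillComplexBuffer above, is an AudioConverterComplexInputDataProc that hands the converter the PCM captured for the current sample buffer. A simplified sketch follows; MyAudioCapture stands in for the actual capture class, and 16-bit mono PCM is assumed.)

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Stand-in for the capture class: it exposes the PCM buffer that
// getAudioBufferDataFromCMSampleBufferRef fills for each sample buffer.
@interface MyAudioCapture : NSObject
@property (nonatomic) AudioBuffer inputBuffer;
@end

static OSStatus putPcmSamplesInBufferList(AudioConverterRef inConverter,
                                          UInt32 *ioNumberDataPackets,
                                          AudioBufferList *ioData,
                                          AudioStreamPacketDescription **outDataPacketDescription,
                                          void *inUserData)
{
    MyAudioCapture *capture = (__bridge MyAudioCapture *)inUserData;
    AudioBuffer pcm = capture.inputBuffer;

    if (pcm.mData == NULL || pcm.mDataByteSize == 0) {
        *ioNumberDataPackets = 0;
        return -1;                                // no PCM left for this conversion call
    }

    // Hand the captured PCM to the converter; for LPCM, one packet == one frame.
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = pcm;
    *ioNumberDataPackets = pcm.mDataByteSize / 2; // 2 bytes per frame for 16-bit mono

    // Mark the buffer consumed so the converter is not fed the same PCM twice.
    pcm.mDataByteSize = 0;
    capture.inputBuffer = pcm;

    return noErr;
}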

The code for getAudioBufferDataFromCMSampleBufferRef is below:

- (AudioBuffer)getAudioBufferDataFromCMSampleBufferRef:(CMSampleBufferRef)audioSampleBuffer
{
    AudioBufferList audioBufferList;
    AudioBuffer audioBuffer = {0};

    OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioSampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &_blockBuffer);

    if (!err && _blockBuffer && audioBufferList.mBuffers[0].mData && (audioBufferList.mBuffers[0].mDataByteSize > 0))
    {
        // Take the first (and here only) buffer from the list.
        audioBuffer = audioBufferList.mBuffers[0];
    }

    inputBuffer.mData = audioBuffer.mData;
    inputBuffer.mDataByteSize = audioBuffer.mDataByteSize;
    inputBuffer.mNumberChannels = 1;

    return audioBuffer;
}

In the above version of the code I get an EXC_BAD_ACCESS error. If I instead remove the code that releases the blockBuffer, there is a memory leak and the app eventually terminates because of memory pressure.

If I don't retain the blockBuffer and write the code of

 getAudioBufferDataFromCMSampleBufferRef

differently as given below

- (AudioBuffer)getAudioBufferDataFromCMSampleBufferRef:(CMSampleBufferRef)audioSampleBuffer
{
    _blockBuffer = CMSampleBufferGetDataBuffer(audioSampleBuffer);
    size_t audioBufferByteSize = CMSampleBufferGetTotalSampleSize(audioSampleBuffer);

    CMBlockBufferCopyDataBytes(_blockBuffer, 0, audioBufferByteSize, inputBuffer.mData);
    inputBuffer.mDataByteSize = (UInt32)audioBufferByteSize;
    inputBuffer.mNumberChannels = 1;

    return inputBuffer;
}

In this version the blockBuffer is not retained, so there is no need to release it. However, now I get terrible static in the audio.

Anybody have an idea on how to solve this issue?

Thanks, Ozgur


1 Answer


One way this is commonly done is to copy all the audio sample data from the unretained buffer into your own (pre-allocated and retained, lock-free) circular FIFO buffer. Then ignore the unretained buffer, as it will be released at some point. Use the audio data in your own circular buffer.
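
A minimal single-producer/single-consumer ring of PCM bytes is enough for this. A rough sketch is below (capacity, names, and the use of C11 atomics are illustrative; on iOS a ready-made lock-free implementation such as TPCircularBuffer does the same job):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RING_CAPACITY (16 * 1024)   // several AAC packets' worth of 16-bit PCM

typedef struct {
    uint8_t        data[RING_CAPACITY];
    _Atomic size_t head;            // total bytes written (capture callback only)
    _Atomic size_t tail;            // total bytes read (converter callback only)
} PcmRing;

static size_t ring_used(const PcmRing *r) {
    return atomic_load(&r->head) - atomic_load(&r->tail);
}

// Producer side: called from the capture callback to copy PCM
// out of the unretained CoreMedia buffer before it goes away.
static bool ring_write(PcmRing *r, const void *bytes, size_t len) {
    if (RING_CAPACITY - ring_used(r) < len) return false;  // would overwrite unread audio
    size_t head = atomic_load(&r->head);
    for (size_t i = 0; i < len; i++)
        r->data[(head + i) % RING_CAPACITY] = ((const uint8_t *)bytes)[i];
    atomic_store(&r->head, head + len);
    return true;
}

// Consumer side: called from the converter's input proc; reads only
// data that has already been written, or reports that there isn't enough yet.
static bool ring_read(PcmRing *r, void *bytes, size_t len) {
    if (ring_used(r) < len) return false;
    size_t tail = atomic_load(&r->tail);
    for (size_t i = 0; i < len; i++)
        ((uint8_t *)bytes)[i] = r->data[(tail + i) % RING_CAPACITY];
    atomic_store(&r->tail, tail + len);
    return true;
}

The capture callback would then ring_write() the bytes from each sample buffer instead of holding a pointer into it, and the converter's input proc would ring_read() from the ring.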

hotpaw2
  • In the second (alternative) implementation of getAudioBufferDataFromCMSampleBufferRef, I have used CMBlockBufferCopyDataBytes(_blockBuffer, 0, audioBufferByteSize, inputBuffer.mData); to copy the blockBuffer into my own buffer inputBuffer.mData. However, in this case there is strong static in the audio. inputBuffer is of type AudioBuffer. Do you have any idea how I can get rid of the static? – ozguronur Jul 16 '15 at 19:01
  • Is your own buffer a lock-free circular buffer multiple times larger than a single AudioBuffer? If not, you might be overwriting data before you finish using it. – hotpaw2 Jul 16 '15 at 20:42
  • This is the way to go, especially if the data gets encoded. The codec will want a certain number of frames for each packet. If there aren't enough samples left to complete a packet, AudioConverterFillComplexBuffer should fail, and then roll the remaining samples into the next packet when there are more samples in your buffer (a rough sketch of that is below). – dave234 Jul 17 '15 at 16:24
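
(Tying the two comments above together: an input proc that feeds the converter from a ring buffer like the one sketched in the answer might look roughly like this. PcmRing and ring_read refer to that illustrative sketch, 16-bit mono PCM is assumed, and the ring pointer is passed as the user-data argument instead of self.)

// Sketch: converter input proc driven by the ring buffer. When a full request
// cannot be satisfied it supplies nothing, leaving the remaining samples in the
// ring to be rolled into the next packet once more PCM has been captured.
static OSStatus pullPcmFromRing(AudioConverterRef inConverter,
                                UInt32 *ioNumberDataPackets,
                                AudioBufferList *ioData,
                                AudioStreamPacketDescription **outDataPacketDescription,
                                void *inUserData)
{
    PcmRing *ring = (PcmRing *)inUserData;
    static uint8_t scratch[4096];        // scratch PCM handed to the converter (static so it stays valid while the converter reads it)
    const size_t bytesPerFrame = 2;      // 16-bit mono assumption

    size_t requestedBytes = (size_t)(*ioNumberDataPackets) * bytesPerFrame;
    if (requestedBytes > sizeof(scratch)) requestedBytes = sizeof(scratch);

    if (requestedBytes == 0 || !ring_read(ring, scratch, requestedBytes)) {
        *ioNumberDataPackets = 0;
        return -1;                       // not enough samples yet; try again later
    }

    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0].mNumberChannels = 1;
    ioData->mBuffers[0].mData = scratch;
    ioData->mBuffers[0].mDataByteSize = (UInt32)requestedBytes;
    *ioNumberDataPackets = (UInt32)(requestedBytes / bytesPerFrame);
    return noErr;
}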