I am writing an iPhone app where I need to capture audio from the mic and stream it to a streaming server in AAC format. I first capture the audio and then use AudioConverterFillComplexBuffer to convert it to AAC.
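For context, the converter referenced as `_converter` below is created elsewhere with `AudioConverterNew`. A minimal sketch of that setup, assuming 44.1 kHz mono 16-bit PCM in and AAC-LC out (the exact ASBD values are assumptions, not my actual configuration):

```objc
// Sketch only: creating the PCM -> AAC converter used below.
// Sample rate and channel count are assumptions.
AudioStreamBasicDescription pcmASBD = {0};
pcmASBD.mSampleRate       = 44100.0;
pcmASBD.mFormatID         = kAudioFormatLinearPCM;
pcmASBD.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
pcmASBD.mBytesPerPacket   = 2;
pcmASBD.mFramesPerPacket  = 1;
pcmASBD.mBytesPerFrame    = 2;
pcmASBD.mChannelsPerFrame = 1;
pcmASBD.mBitsPerChannel   = 16;

AudioStreamBasicDescription aacASBD = {0};
aacASBD.mSampleRate       = 44100.0;
aacASBD.mFormatID         = kAudioFormatMPEG4AAC;
aacASBD.mChannelsPerFrame = 1;
aacASBD.mFramesPerPacket  = 1024;   // AAC packs 1024 frames per packet

OSStatus st = AudioConverterNew(&pcmASBD, &aacASBD, &_converter);
```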
Below is the code
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    NSArray *audioChannels = connection.audioChannels;
    if (audioChannels == nil || [audioChannels count] == 0) {
        // NSLog(@"We have Video Frame");
        [_encoder encodeFrame:sampleBuffer];
    } else {
        // NSLog(@"We have Audio Frame");
        if (hasAudio) {
            CMTime prestime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            double dPTS = (double)(prestime.value) / prestime.timescale;
            [self getAudioBufferDataFromCMSampleBufferRef:sampleBuffer];
            // Describe the output buffer into which the converter writes AAC data.
            AudioBufferList outputBufferList;
            outputBufferList.mNumberBuffers = 1;
            outputBufferList.mBuffers[0].mNumberChannels = _aacASBD.mChannelsPerFrame;
            outputBufferList.mBuffers[0].mDataByteSize = _aacBufferSize;
            outputBufferList.mBuffers[0].mData = _aacBuffer;
            OSStatus st = AudioConverterFillComplexBuffer(_converter, &putPcmSamplesInBufferList, (__bridge void *)self, &_numOutputPackets, &outputBufferList, NULL);
            if (st == 0) {
                [_rtsp onAudioData:_aacBuffer :outputBufferList.mBuffers[0].mDataByteSize :dPTS];
            } else {
                NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:st userInfo:nil];
                NSLog(@"Error converting buffer: %@", error);
                NSLog(@"%@", [self OSStatusToStr:st]);
                char str[5] = {0}; // a four-character code plus NUL needs five bytes
                FormatError(str, st);
            }
            if (_blockBuffer) { // double-check that what you are releasing actually exists
                CFRelease(_blockBuffer);
            }
        }
    }
}
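The input callback putPcmSamplesInBufferList is not shown above. For reference, a minimal sketch of a typical input proc for this pattern, which hands the converter the single captured PCM buffer and then signals end-of-data (the `inputBufferConsumed` flag, `MyAudioStreamer` class name, and `kNoMoreDataErr` sentinel are assumptions, not my actual code):

```objc
// Sketch only: AudioConverterComplexInputDataProc that supplies one
// in-flight PCM buffer, then reports no more data. Assumes the ivars
// are reachable from this static function (e.g. declared @public).
static OSStatus putPcmSamplesInBufferList(AudioConverterRef converter,
                                          UInt32 *ioNumberDataPackets,
                                          AudioBufferList *ioData,
                                          AudioStreamPacketDescription **outDataPacketDescription,
                                          void *inUserData)
{
    MyAudioStreamer *streamer = (__bridge MyAudioStreamer *)inUserData;
    if (streamer->inputBufferConsumed) {   // hypothetical bookkeeping flag
        *ioNumberDataPackets = 0;          // no more input: converter flushes what it has
        return kNoMoreDataErr;             // hypothetical sentinel error code
    }
    ioData->mNumberBuffers = 1;
    ioData->mBuffers[0] = streamer->inputBuffer;
    // For 16-bit interleaved PCM, one packet == one frame.
    *ioNumberDataPackets = streamer->inputBuffer.mDataByteSize / streamer->pcmASBD.mBytesPerPacket;
    streamer->inputBufferConsumed = YES;
    return noErr;
}
```

Note the converter consumes `inputBuffer.mData` lazily through this callback, so the memory it points at (the retained block buffer) must stay alive until AudioConverterFillComplexBuffer returns.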
The code for getAudioBufferDataFromCMSampleBufferRef is as below:
- (AudioBuffer)getAudioBufferDataFromCMSampleBufferRef:(CMSampleBufferRef)audioSampleBuffer
{
    AudioBufferList audioBufferList;
    OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioSampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &_blockBuffer);
    AudioBuffer audioBuffer = {0}; // zero-init so a failed call doesn't leave garbage
    if (!err && _blockBuffer && audioBufferList.mBuffers[0].mData && (audioBufferList.mBuffers[0].mDataByteSize > 0)) {
        // Only the first buffer is used.
        audioBuffer = audioBufferList.mBuffers[0];
    }
    inputBuffer.mData = audioBuffer.mData;
    inputBuffer.mDataByteSize = audioBuffer.mDataByteSize;
    inputBuffer.mNumberChannels = 1;
    return audioBuffer;
}
In the above version of the code I get a BAD_ACCESS error. If instead I remove the code that releases the blockBuffer, there is a memory leak and the app eventually terminates under memory pressure.
If I don't retain the blockBuffer and instead write getAudioBufferDataFromCMSampleBufferRef as given below:
- (void)getAudioBufferDataFromCMSampleBufferRef:(CMSampleBufferRef)audioSampleBuffer
{
    _blockBuffer = CMSampleBufferGetDataBuffer(audioSampleBuffer);
    size_t audioBufferByteSize = CMSampleBufferGetTotalSampleSize(audioSampleBuffer);
    CMBlockBufferCopyDataBytes(_blockBuffer, 0, audioBufferByteSize, inputBuffer.mData);
    inputBuffer.mDataByteSize = (UInt32)audioBufferByteSize;
    inputBuffer.mNumberChannels = 1;
}
In this version the blockBuffer is not retained (CMSampleBufferGetDataBuffer follows the Get rule, so there is no need to release it). However, I now get terrible static in the audio.
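For completeness, in this second variant the copy destination `inputBuffer.mData` has to be allocated before CMBlockBufferCopyDataBytes writes into it. A hedged sketch of that bookkeeping, with an assumed upper bound `kMaxAudioBufferSize` (not a real constant from my code):

```objc
// Sketch only: allocate the PCM staging buffer once, then copy into it.
// kMaxAudioBufferSize is an assumed upper bound on one sample buffer's data.
static const size_t kMaxAudioBufferSize = 32768;

if (inputBuffer.mData == NULL) {
    inputBuffer.mData = malloc(kMaxAudioBufferSize);
}
size_t byteCount = CMSampleBufferGetTotalSampleSize(audioSampleBuffer);
if (byteCount <= kMaxAudioBufferSize) {
    CMBlockBufferCopyDataBytes(CMSampleBufferGetDataBuffer(audioSampleBuffer),
                               0, byteCount, inputBuffer.mData);
    inputBuffer.mDataByteSize = (UInt32)byteCount;
}
```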
Anybody have an idea on how to solve this issue?
Thanks, Ozgur