
With EZAudio I want to create as lightweight a mono AudioBufferList as possible. In the past I achieved 46 bytes per AudioBuffer, but with a relatively small bufferDuration. First things first: if I use the AudioStreamBasicDescription below for both input and output

AudioStreamBasicDescription audioFormat;
audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mSampleRate       = 44100;

and use TPCircularBuffer as the transport, then I get two buffers in the bufferList, each with an mDataByteSize of 4096 (presumably 1024 frames × 4 bytes per AudioUnitSampleType sample), which is definitely too much. So I tried my previous ASBD:

audioFormat.mSampleRate         = 8000.00;
audioFormat.mFormatID           = kAudioFormatLinearPCM;
audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket    = 1;
audioFormat.mChannelsPerFrame   = 1;
audioFormat.mBitsPerChannel     = 8;
audioFormat.mBytesPerPacket     = 1;
audioFormat.mBytesPerFrame      = 1;

Now mDataByteSize is 128 and I have only one buffer, but TPCircularBuffer can't handle it properly. I figure that is because I use only one channel. So for now I have rejected TPCircularBuffer and instead try to encode and decode the bytes through NSData, or, just as a test, pass the AudioBufferList straight through, but even with the first AudioStreamBasicDescription the sound is heavily distorted.
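
For reference, this is how I expected the mono bytes to travel through the circular buffer, using the stock TPCircularBuffer calls (just a sketch; targetBuffer stands for whatever AudioBuffer the output side is filling):

// Producer (microphone callback): push the single mono buffer's bytes.
TPCircularBufferProduceBytes(&_cBuffer,
                             bufferList->mBuffers[0].mData,
                             bufferList->mBuffers[0].mDataByteSize);

// Consumer (output callback): pull out as many bytes as are available.
int32_t availableBytes;
void *tail = TPCircularBufferTail(&_cBuffer, &availableBytes);
int32_t bytesToCopy = MIN(availableBytes, (int32_t)targetBuffer->mDataByteSize);
memcpy(targetBuffer->mData, tail, bytesToCopy);
TPCircularBufferConsume(&_cBuffer, bytesToCopy);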

My current code:

- (void)initMicrophone {
    AudioStreamBasicDescription audioFormat;

    // The //* ... /*/ ... //*/ markers toggle between the two formats:
    // the stereo 44.1 kHz block is currently active, the mono 8 kHz one commented out.
    //*
    audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mSampleRate       = 44100;
    /*/
    audioFormat.mSampleRate         = 8000.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 8;
    audioFormat.mBytesPerPacket     = 1;
    audioFormat.mBytesPerFrame      = 1;
    //*/

    _microphone = [EZMicrophone microphoneWithDelegate:self withAudioStreamBasicDescription:audioFormat];
    _output     = [EZOutput outputWithDataSource:self withAudioStreamBasicDescription:audioFormat];
    [EZAudio circularBuffer:&_cBuffer withSize:128];
}

- (void)startSending {
    [_microphone startFetchingAudio];
    [_output startPlayback];
}

- (void)stopSending {
    [_microphone stopFetchingAudio];
    [_output stopPlayback];
}

- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    // Nothing to do here yet; any UI work would be dispatched to the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}

- (void)microphone:(EZMicrophone *)microphone
     hasBufferList:(AudioBufferList *)bufferList
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    //*
    // Active branch: keep a pointer to the incoming buffer list.
    abufferlist = bufferList;
    /*/
    // Alternative branch: copy the first buffer's bytes into NSData.
    audioBufferData = [NSData dataWithBytes:bufferList->mBuffers[0].mData
                                     length:bufferList->mBuffers[0].mDataByteSize];
    //*/
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}
- (AudioBufferList *)output:(EZOutput *)output
  needsBufferListWithFrames:(UInt32)frames
             withBufferSize:(UInt32 *)bufferSize {
    //*
    // Active branch: hand back the buffer list stored by the microphone callback.
    return abufferlist;
    /*/
    // Alternative branch: rebuild a mono buffer from the NSData copy.
    int bSize = 128;
    AudioBuffer audioBuffer;
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize   = bSize;
    audioBuffer.mData           = malloc(bSize);
    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);

    AudioBufferList *bufferList = [EZAudio audioBufferList];
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0]    = audioBuffer;

    return bufferList;
    //*/
}

I know that the value of bSize in output:needsBufferListWithFrames:withBufferSize: may need to change.
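
If I read the callback right, that size should be derived from the frames argument rather than hard-coded (a sketch, assuming the mono format is active and audioFormat is kept around in an ivar):

// With the mono 8-bit format mBytesPerFrame == 1, so the byte size
// EZOutput expects is simply the frame count.
UInt32 bSize = frames * audioFormat.mBytesPerFrame;
*bufferSize = bSize;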

My main goal is to create mono sound that is as lightweight as possible, encode it to NSData, and decode it back for output. Could you suggest what I'm doing wrong?
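
For completeness, the encode/decode round trip I'm aiming for looks roughly like this (a sketch; the point is that the decode side sizes the buffer from the data instead of a hard-coded 128):

// Encode (microphone side): snapshot the mono buffer into NSData.
NSData *encoded = [NSData dataWithBytes:bufferList->mBuffers[0].mData
                                 length:bufferList->mBuffers[0].mDataByteSize];

// Decode (output side): copy the bytes into a freshly allocated AudioBuffer.
AudioBuffer decoded;
decoded.mNumberChannels = 1;
decoded.mDataByteSize   = (UInt32)encoded.length;
decoded.mData           = malloc(encoded.length);
memcpy(decoded.mData, encoded.bytes, encoded.length);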

Błażej

1 Answer


I had the same issue and moved to AVAudioRecorder, setting the parameters I needed. I kept EZAudio (EZMicrophone) for audio visualisation. Here is a link describing how to achieve this:

iOS: Audio Recording File Format
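
Roughly what that looks like (a sketch; adjust the settings and the file URL to your needs):

#import <AVFoundation/AVFoundation.h>

// Lightweight mono PCM: 8 kHz, 16-bit, one channel.
NSDictionary *settings = @{ AVFormatIDKey          : @(kAudioFormatLinearPCM),
                            AVSampleRateKey        : @8000.0f,
                            AVNumberOfChannelsKey  : @1,
                            AVLinearPCMBitDepthKey : @16 };

NSURL *url = [NSURL fileURLWithPath:
              [NSTemporaryDirectory() stringByAppendingPathComponent:@"record.caf"]];

NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url
                                                        settings:settings
                                                           error:&error];
[recorder prepareToRecord];
[recorder record];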

rony_y