1

I have been struggling with this since yesterday and would really appreciate some help.

I have a multichannel mixer audio unit, and the callback assigned to each channel fills the required audio buffer when called. I am trying to record within the same callback by writing the data to a file.

At the moment the audio records as noise if I don't call AudioUnitRender, and if I do call it I get two errors: error 10877 and error 50.

The recording code in the callback looks like this:

if (recordingOn) 
{
    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));

    SInt16 samples[inNumberFrames]; 
    memset (&samples, 0, sizeof (samples));

    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = samples;
    bufferList->mBuffers[0].mNumberChannels = 2;
    bufferList->mBuffers[0].mDataByteSize = inNumberFrames*sizeof(SInt16);

    OSStatus status;
    status = AudioUnitRender(audioObject.mixerUnit,     
                             ioActionFlags, 
                             inTimeStamp, 
                             inBusNumber, 
                             inNumberFrames, 
                             bufferList);

    if (noErr != status) {
        printf("AudioUnitRender error: %ld", status); 
        return noErr;
    }

    ExtAudioFileWriteAsync(audioObject.recordingFile, inNumberFrames, bufferList);
}

Is it correct to write the data in each channel's callback, or should I connect it to the remote I/O unit?

I am using LPCM, and the ASBD for the recording file (CAF) is:

recordingFormat.mFormatID = kAudioFormatLinearPCM;
recordingFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger |
                               kAudioFormatFlagIsBigEndian |
                               kAudioFormatFlagIsPacked;
recordingFormat.mSampleRate = 44100;
recordingFormat.mChannelsPerFrame = 2;
recordingFormat.mFramesPerPacket = 1;
recordingFormat.mBytesPerPacket = recordingFormat.mChannelsPerFrame * sizeof (SInt16);
recordingFormat.mBytesPerFrame = recordingFormat.mChannelsPerFrame * sizeof (SInt16);
recordingFormat.mBitsPerChannel = 16;

I am not really sure what I am doing wrong.

How does stereo affect the way the recorded data must be handled before writing it to the file?

some_id
  • 29,466
  • 62
  • 182
  • 304

2 Answers

2

There are a couple of issues here. If you are trying to record your final "mix", you can add a callback on the I/O unit with AudioUnitAddRenderNotify(iounit, callback, file). The callback then simply takes the ioData it receives and passes it to ExtAudioFileWriteAsync(...), so you don't need to create any buffers at all.

A side note: allocating memory in the render thread is bad. You should avoid all system calls in the render callback; there is no guarantee those calls will execute within the very tight deadline the audio thread has. That is why ExtAudioFileWriteAsync exists: it takes this into consideration and writes to disk on another thread.
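For reference, a minimal sketch of what such a render-notify tap could look like (this is an assumption-laden example, not the asker's code: `recordingFile` and `ioUnit` are placeholder names, and the file is assumed to have been opened elsewhere with ExtAudioFileCreateWithURL and given a client format):

```c
// Sketch: recording the final mix via a render notification on the I/O unit.
#include <AudioToolbox/AudioToolbox.h>

static OSStatus renderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    ExtAudioFileRef recordingFile = (ExtAudioFileRef)inRefCon;

    // Only write in the post-render phase, when ioData holds the mixed audio.
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // ExtAudioFileWriteAsync copies the data and performs the disk I/O
        // on its own thread, so it is safe to call from the render thread.
        ExtAudioFileWriteAsync(recordingFile, inNumberFrames, ioData);
    }
    return noErr;
}

// During setup, after the graph is initialized (placeholder variable names):
// AudioUnitAddRenderNotify(ioUnit, renderNotify, recordingFile);
```

Note that Apple's headers recommend priming the async writer by calling ExtAudioFileWriteAsync once with zero frames and a NULL buffer list from the setup thread, so that its internal buffers are allocated before the first call from the render thread.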

soh-la
  • 360
  • 1
  • 4
  • I really appreciate the help. These audio questions get fewer and fewer views. OK, I will add that and see how it goes. So in this added recording callback I can just pass &ioData to the WriteAsync call? Regarding allocation in the callback, yeah, it's really bad; my head is a mess at the moment, I need to break it down step by step. I also do this in my callback: [[NSNotificationCenter defaultCenter] postNotificationName:MixerAudioObjectSampleFinishedPlayingNotification object:nil]; in order to set the finished channel to silent with kMultiChannelMixerParam_Enable set to false. This is bad, right? – some_id Jul 18 '12 at 12:39
  • Yes, that's bad. A common approach is to poll from your main thread and watch your data, and that data should be a C struct or C++ object. And yes, the ioData has the rendered audio in the buffer list. – soh-la Jul 19 '12 at 23:15
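To illustrate the polling approach from the comment above, here is a minimal, hypothetical sketch (the bus count, flag array, and function names are all made up for illustration): the render callback only flips a lock-free atomic flag, and the main thread polls the flags and does the non-realtime work.

```c
// Sketch: signalling "sample finished playing" from the render callback to
// the main thread without posting notifications from the audio thread.
#include <stdatomic.h>
#include <stdbool.h>

#define NUM_BUSES 8
static atomic_bool channelFinished[NUM_BUSES]; // one flag per mixer input bus

// Called from the render callback: just set a flag, no system calls.
static void markChannelFinished(int bus) {
    atomic_store_explicit(&channelFinished[bus], true, memory_order_release);
}

// Called periodically from the main thread (e.g. from a timer): consume the
// flags and do the non-realtime work, such as disabling the mixer input.
// Returns how many finished channels were handled this pass.
static int pollFinishedChannels(void) {
    int handled = 0;
    for (int bus = 0; bus < NUM_BUSES; bus++) {
        bool expected = true;
        if (atomic_compare_exchange_strong(&channelFinished[bus],
                                           &expected, false)) {
            // Here one would call AudioUnitSetParameter with
            // kMultiChannelMixerParam_Enable = 0 for this bus.
            handled++;
        }
    }
    return handled;
}
```

The compare-and-exchange consumes each flag exactly once, so a bus flagged by the audio thread is handled on one polling pass and then ignored until it is flagged again.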
0

I found some demo code that may be useful:

Demo: https://github.com/JNYJdev/AudioUnit

or

Blog: http://atastypixel.com/blog/using-remoteio-audio-unit/

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way our audio format (set up below) is chosen:
    // we only need 1 buffer, since it is mono.
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample.

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc(inNumberFrames * 2);

    // Put the buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([iosAudio audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             &bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList.
    // Process the new data
    [iosAudio processAudio:&bufferList];

    // Release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}
JNYJ
  • 515
  • 6
  • 14