
In my app, I'm doing audio processing in the render callback (input only, no output). Here is how I initialize the audio:

-(void) initAudio {

OSStatus status;

NewAUGraph(&graph);

AudioComponentDescription desc;
desc.componentType          = kAudioUnitType_Output;
desc.componentSubType       = kAudioUnitSubType_RemoteIO;
desc.componentFlags         = 0;
desc.componentFlagsMask     = 0;
desc.componentManufacturer  = kAudioUnitManufacturer_Apple;

AUNode ioNode;

status = AUGraphAddNode(graph, &desc, &ioNode);
checkStatus(status, "At adding node");

AUGraphOpen(graph);

AUGraphNodeInfo(graph, ioNode, NULL, &audioUnit);

//Enable IO for recording (kInputBus = 1, defined elsewhere)
UInt32 enableInput = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &enableInput,
                              sizeof(enableInput));
checkStatus(status, "At setting property for input");

//Disable playback (kOutputBus = 0, defined elsewhere)
UInt32 enableOutput = 0;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &enableOutput,
                              sizeof(enableOutput));
checkStatus(status, "At setting property for output");

// ASBD
AudioStreamBasicDescription audioFormatIn;
audioFormatIn.mSampleRate         = SampleRate;
audioFormatIn.mFormatID           = kAudioFormatLinearPCM;
audioFormatIn.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormatIn.mFramesPerPacket    = 1;
audioFormatIn.mChannelsPerFrame   = 1;
audioFormatIn.mBitsPerChannel     = 16; // sizeof(AudioSampleType) * 8
audioFormatIn.mBytesPerPacket     = 2 * audioFormatIn.mChannelsPerFrame;
audioFormatIn.mBytesPerFrame      = 2 * audioFormatIn.mChannelsPerFrame;

//Apply format to the mic's output (output scope of the input bus)
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioFormatIn,
                              sizeof(audioFormatIn));
checkStatus(status,"At setting property for AudioStreamBasicDescription for input");


//Set up input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = (__bridge void *)self;

status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Global,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
checkStatus(status,"At setting property for recording callback");

// Disable buffer allocation for the recorder
UInt32  flag = 0;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));
checkStatus(status, "At set property should allocate buffer");

// Allocate own buffers
tempBuffer.mNumberChannels  = 1;
tempBuffer.mDataByteSize    = 1024 * 2;
tempBuffer.mData            = malloc( 1024 * 2 );

status = AUGraphInitialize(graph);
checkStatus(status,"At AUGraph Initalize");
}

Now I want to add a high pass or band pass filter to the input audio before processing it in the render callback. So I think I should add something like this:

desc.componentType          = kAudioUnitType_Effect;
desc.componentSubType       = kAudioUnitSubType_BandPassFilter;
desc.componentFlags         = 0;
desc.componentFlagsMask     = 0;
desc.componentManufacturer  = kAudioUnitManufacturer_Apple;

But I didn't manage to create/connect the nodes properly to make this work... Thanks for your help!

jcr
  • Is your render chain working without the band pass? – dave234 Nov 24 '15 at 21:52
  • @Dave I used to utilize classic audio unit initialization, but I started to switch to AUGraph. My code above seems to work even if I'm not sure I did it properly. – jcr Nov 25 '15 at 09:04

1 Answer


When I wanted to do a similar thing, I found your question - unfortunately without any solution :-( Now, a few days later, I managed to do it. So this is my working solution, for everybody struggling with the same problem:

  1. Prepare the Remote IO's input as a STANDALONE audio unit for recording audio.

  2. Add a callback to the input. I'll call it the "Mic Callback".

    (NOTE: Apple calls this an input callback, whereas in fact this is just a NOTIFICATION callback, telling your app that samples are available; you have to "ask" the input unit explicitly to render them...)

  3. Establish an AU Graph: converter unit -> filter -> generic output

    (NOTE: the generic output can convert formats, but the filter cannot. So if you want to feed anything other than the 8.24 fixed-point format into the chain, you need the converter.)

  4. Add a callback to the input of the converter unit. I'll call this the "Process Callback". (A sketch of all four steps follows below.)
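Here is a minimal sketch of these four steps. The names graph, micUnit, converterUnit, genericOutputUnit, micCallback and processCallback are mine, and error checking is omitted:

// 1. Standalone Remote IO unit for the mic
AudioComponentDescription ioDesc = {
    .componentType          = kAudioUnitType_Output,
    .componentSubType       = kAudioUnitSubType_RemoteIO,
    .componentManufacturer  = kAudioUnitManufacturer_Apple
};
AudioComponentInstanceNew(AudioComponentFindNext(NULL, &ioDesc), &micUnit);

UInt32 one = 1, zero = 0;
AudioUnitSetProperty(micUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));    // enable mic (bus 1)
AudioUnitSetProperty(micUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &zero, sizeof(zero)); // disable speaker (bus 0)

// 2. The Mic Callback (notification only, see below)
AURenderCallbackStruct micCb = { micCallback, (__bridge void *)self };
AudioUnitSetProperty(micUnit, kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global, 1, &micCb, sizeof(micCb));

// 3. Graph: converter -> band pass filter -> generic output
NewAUGraph(&graph);
AudioComponentDescription convDesc = {
    .componentType          = kAudioUnitType_FormatConverter,
    .componentSubType       = kAudioUnitSubType_AUConverter,
    .componentManufacturer  = kAudioUnitManufacturer_Apple
};
AudioComponentDescription filterDesc = {
    .componentType          = kAudioUnitType_Effect,
    .componentSubType       = kAudioUnitSubType_BandPassFilter,
    .componentManufacturer  = kAudioUnitManufacturer_Apple
};
AudioComponentDescription outDesc = {
    .componentType          = kAudioUnitType_Output,
    .componentSubType       = kAudioUnitSubType_GenericOutput,
    .componentManufacturer  = kAudioUnitManufacturer_Apple
};

AUNode convNode, filterNode, outNode;
AUGraphAddNode(graph, &convDesc, &convNode);
AUGraphAddNode(graph, &filterDesc, &filterNode);
AUGraphAddNode(graph, &outDesc, &outNode);
AUGraphConnectNodeInput(graph, convNode, 0, filterNode, 0);
AUGraphConnectNodeInput(graph, filterNode, 0, outNode, 0);

// 4. The Process Callback on the converter's input
AURenderCallbackStruct procCb = { processCallback, (__bridge void *)self };
AUGraphSetNodeInputCallback(graph, convNode, 0, &procCb);

AUGraphOpen(graph);
AUGraphNodeInfo(graph, convNode, NULL, &converterUnit);
AUGraphNodeInfo(graph, outNode, NULL, &genericOutputUnit);
AUGraphInitialize(graph);
AudioUnitInitialize(micUnit);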

Now the key point is that the regular calls by the operating system to the Mic Callback will drive the whole processing chain:

  1. In the Mic Callback, prepare a buffer large enough for the number of input frames the mic has available, and "ask" the GENERIC OUTPUT (!) to render the same number of samples into it (by calling AudioUnitRender). NOTE that you're not asking the input unit to render, but the output unit at the end of the graph!

  2. The Generic Output will forward the render request until it reaches the input callback of the converter unit, i.e. the Process Callback. In it, you get as an input parameter a pointer to a buffer, which you have to fill with the requested number of samples. At this point, ask the INPUT UNIT to render samples directly into this buffer. (Both callbacks are sketched below.)
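For illustration, the two callbacks could look like this. This is only a sketch: MyAudioEngine is a hypothetical class whose ivars (micUnit, genericOutputUnit, tempBuffer) are assumed accessible here, and the format is 16-bit mono as in your question:

static OSStatus micCallback(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData)
{
    MyAudioEngine *engine = (__bridge MyAudioEngine *)inRefCon;

    // Buffer for exactly the number of frames the mic has ready
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
    bufList.mBuffers[0].mData           = engine->tempBuffer.mData; // pre-allocated

    // Ask the GENERIC OUTPUT at the END of the graph to render;
    // this pulls through filter -> converter -> processCallback.
    OSStatus status = AudioUnitRender(engine->genericOutputUnit, ioActionFlags,
                                      inTimeStamp, 0, inNumberFrames, &bufList);
    // bufList now holds the FILTERED samples - process them here.
    return status;
}

static OSStatus processCallback(void                       *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp       *inTimeStamp,
                                UInt32                      inBusNumber,
                                UInt32                      inNumberFrames,
                                AudioBufferList            *ioData)
{
    MyAudioEngine *engine = (__bridge MyAudioEngine *)inRefCon;
    // Render the raw mic samples straight into the buffer handed to us
    // by the converter (bus 1 = the Remote IO input element).
    return AudioUnitRender(engine->micUnit, ioActionFlags, inTimeStamp,
                           1, inNumberFrames, ioData);
}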

And voilà, you're done. You just have to start/stop the mic unit!
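In this pull model only the standalone mic unit is started; the graph itself is rendered on demand from the Mic Callback, so it is not started. A sketch:

AudioOutputUnitStart(micUnit);  // the OS now fires micCallback at the hardware's pace
// ... record and process ...
AudioOutputUnitStop(micUnit);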

The key to the whole process is that an output unit always has to drive the rendering, and since the generic output has no regular "need" for samples, you have to ask it manually to render. This has to be synchronized with the mic's A/D converter, which wants to put out samples at regular time intervals.

Theoretically you could chain the input unit and your graph, but there are two problems:

  1. The mic input unit counts as an output unit, and there cannot be two output units in a graph...

  2. When two units are connected together, there cannot be a callback in between them. If you put one there, it will never be called... So you would have to put the callback at the end of the chain and expect the "samples are available" notification to be propagated from the output of the mic unit to the output of the generic output. However, it will not be. Therefore you have to separate the process into the mic input unit and the processing chain.

Final note: if you want to render the mic's samples directly into the converter unit, then you have to set the same stream format (preferably the canonical output format of the Remote IO, i.e. 16-bit integer) on the input of the converter unit and on the output of the mic unit (i.e. the output scope of the Remote IO, bus 1).
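For illustration, reusing the 16-bit mono audioFormatIn from your question:

// Mic unit's output side (output scope, bus 1) - what your app reads
AudioUnitSetProperty(micUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1,
                     &audioFormatIn, sizeof(audioFormatIn));

// Converter unit's input (input scope, bus 0) - must match the mic's output
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0,
                     &audioFormatIn, sizeof(audioFormatIn));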
