
I'm trying to implement something flute-like: when you blow into the mic, the speaker starts playing a note. For this I was trying to use VoiceProcessingIO to subtract the played-back note (speaker output) from the mic input.

I noticed that VoiceProcessingIO doesn't work together with AUSampler. I modified Apple's "LoadPresetDemo" sample and just changed the line:

cd.componentSubType = kAudioUnitSubType_RemoteIO;

with

cd.componentSubType = kAudioUnitSubType_VoiceProcessingIO;

and after that change you cannot hear anything played.
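For context, here is a minimal sketch of the component description as it would look after that change (field names follow Apple's usual sample-code conventions; the variable name `cd` matches the lines above):

```c
#include <AudioToolbox/AudioToolbox.h>

// Describe the I/O unit. Only componentSubType differs from the
// RemoteIO version of LoadPresetDemo; everything else is unchanged.
AudioComponentDescription cd = {0};
cd.componentType         = kAudioUnitType_Output;
cd.componentSubType      = kAudioUnitSubType_VoiceProcessingIO; // was kAudioUnitSubType_RemoteIO
cd.componentManufacturer = kAudioUnitManufacturer_Apple;
cd.componentFlags        = 0;
cd.componentFlagsMask    = 0;
```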

I had the same effect (no sound) when I used the following scheme of audiounits:

VoiceProcessingIO (input mic) --> processing function (if audio level > thrs 
                                              then feed MusicDeviceMIDIEvent)

AUSampler --> VoiceProcessingIO (output speaker)
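The "processing function" in this scheme can be sketched roughly as below. This is an assumption about the implementation, not code from the sample: `gIOUnit`, `gSamplerUnit`, the threshold value, and the buffer setup are all hypothetical, and error handling is minimal.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

static AudioUnit gIOUnit;              // assumed: the VoiceProcessingIO unit
static AudioUnit gSamplerUnit;         // assumed: the AUSampler unit
static const float kThreshold = 0.05f; // assumed blow-detection level

// Input callback installed on the I/O unit's input element (bus 1).
static OSStatus micInputCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // Pull mic samples from the input element; bufList's mData is
    // assumed to be allocated once elsewhere (mono float format).
    AudioBufferList *bufList = (AudioBufferList *)inRefCon;
    OSStatus err = AudioUnitRender(gIOUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, bufList);
    if (err != noErr) return err;

    // Crude RMS level estimate over the callback's frames.
    float *samples = (float *)bufList->mBuffers[0].mData;
    float sum = 0.0f;
    for (UInt32 i = 0; i < inNumberFrames; i++)
        sum += samples[i] * samples[i];
    float rms = sqrtf(sum / inNumberFrames);

    // Blow detected: send a Note On (channel 0, middle C, velocity 100).
    if (rms > kThreshold)
        MusicDeviceMIDIEvent(gSamplerUnit, 0x90, 60, 100, 0);
    return noErr;
}
```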

However when I used the scheme below:

VoiceProcessingIO (input mic) --> processing function (if audio level > thrs 
                                              then feed MusicDeviceMIDIEvent)

AUSampler --> RemoteIO (output speaker)

the output speaker volume was much lower. It seems the issue is related to this question: Is it safe to use two audio units for simultaneous I/O in iOS?

I've tested also example from here: https://code.google.com/p/ios-coreaudio-example/downloads/detail?name=Aruts.zip&can=2&q=

It works, without the AUSampler, with the following scheme:

VoiceProcessingIO (input mic) --> processing function --> VoiceProcessingIO (output speaker)

The question is:

Is there any way to use VoiceProcessingIO together with AUSampler? Or is the only way to feed data to the VoiceProcessingIO output via a render callback?
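If the render-callback route is the answer, a minimal sketch might look like this: pull the sampler's output manually inside a callback registered on the VoiceProcessingIO output element, so the echo canceller sees exactly what goes to the speaker. This is an untested assumption, not a confirmed fix; the cast of `inRefCon` to the sampler unit is a hypothetical wiring choice.

```c
#include <AudioToolbox/AudioToolbox.h>

// Render callback set on the VoiceProcessingIO output element (bus 0)
// via kAudioUnitProperty_SetRenderCallback. inRefCon is assumed to be
// the AUSampler's AudioUnit, passed in when the callback is installed.
static OSStatus speakerRenderCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
{
    AudioUnit samplerUnit = (AudioUnit)inRefCon;
    // Render the sampler directly into the I/O unit's output buffers,
    // bypassing a direct AUGraph connection between the two units.
    return AudioUnitRender(samplerUnit, ioActionFlags, inTimeStamp,
                           0, inNumberFrames, ioData);
}
```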
