
I am trying to create an app using a combination of AVAudioPlayerNode instances and other AUAudioUnits for EQ, compression, etc. Everything connects up well, and using the V3 version of the API certainly makes configuration easier for connecting nodes together. However, during playback I would like to be able to automate parameter changes, such as the gain on a mixer, so that the changes are ramped (e.g. a fade out or fade in) and to feel confident that the changes are sample accurate.
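For context, here is a minimal sketch of the kind of graph I mean (the EQ unit and band count are just placeholders for my actual chain):

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let eq = AVAudioUnitEQ(numberOfBands: 4)   // stands in for my EQ/compression units

engine.attach(player)
engine.attach(eq)

// player -> EQ -> main mixer -> output
let format = engine.mainMixerNode.outputFormat(forBus: 0)
engine.connect(player, to: eq, format: format)
engine.connect(eq, to: engine.mainMixerNode, format: format)

do {
    try engine.start()
    player.play()   // file/buffer scheduling omitted
} catch {
    print("engine failed to start: \(error)")
}
```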
One solution I have considered is to install a tap on a node (perhaps the engine's mixer node) and within that adjust the gain for a given unit, but since the tap is on the output of a unit this is always going to be too late to have the desired effect (I think) without doing some offset calculations and then delaying my source audio playback to match up with the parameter changes (a rough sketch of this follows below). I have also looked at the scheduleParameterBlock property on AUAudioUnit, but it seems I would need to implement my own custom unit to make use of it rather than use the built-in units, even though it was mentioned in

WWDC session 508: "...So the first argument to do schedule is a sample time, the parameter value can ramp over time if the Audio Unit has advertised it as being rampable. For example, the Apple Mixer does this. And the last two function parameters, of course, are the address of the parameter to be changed and the new parameter value..."

Perhaps this means that internally the Apple Mixer uses it, not that we can tap into any rampable capabilities ourselves. I can't find many docs or examples other than those implementing a custom audio unit, as in Apple's example attached to that talk.
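For reference, the tap approach mentioned above would look roughly like this (the buffer size is arbitrary); the block only ever sees audio that has already been rendered, which is why any gain change made from here lands at least one buffer late:

```swift
// Tap the mixer's output. `when` is where this (already rendered) buffer sits on
// the render timeline, which is what any offset calculation would be based on.
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 512, format: nil) { buffer, when in
    let renderedUpTo = when.sampleTime + AVAudioFramePosition(buffer.frameLength)
    // ...decide here when the fade should start, then delay/offset the source
    // playback to match (the bookkeeping I'm hoping to avoid).
    _ = renderedUpTo
}
```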

Other potential solutions I have seen include using NSTimer, CADisplayLink or dispatchAfter..., but these feel worse and even less sample accurate than offsetting from the installed tap block on the output of a unit.

I feel like I've missed something very obvious since there are other parts of the new AVAudioEngine API that make a lot of sense and the old AUGraph API allowed more access to sample accurate sequencing and parameter changing.

Dallas Johnson
  • Even if you use an AU render callback, you'd only be able to apply control data with accuracy finer than `inNumberFrames` either as an approximation (interpolation) or with an `inNumberFrames/sampleRate` delay, AFAIK. However, you may be able to programmatically set very short buffers, as short as 14 frames (a sketch of this follows the comments). – user3078414 Jun 25 '16 at 11:28
  • Hint: you may have a look at this [research in iOS timing](http://atastypixel.com/blog/experiments-with-precise-timing-in-ios/). – user3078414 Jun 25 '16 at 12:27
  • I see this is a month+ old, did you figure out a solution? – gmcerveny Aug 02 '16 at 22:03
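Regarding the short-buffer suggestion in the comment above, this is roughly what that would look like on iOS; the session treats the value only as a hint, so a buffer this small may not actually be granted:

```swift
import AVFoundation

// Ask for a ~14-frame IO buffer at 44.1 kHz; the system may round it up.
let session = AVAudioSession.sharedInstance()
do {
    try session.setPreferredIOBufferDuration(14.0 / 44_100.0)
} catch {
    print("could not set preferred IO buffer duration: \(error)")
}
print("actual IO buffer ≈ \(session.ioBufferDuration * 44_100.0) frames")
```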

1 Answer


This is not as obvious as you'd hope. Unfortunately, in my tests the ramp parameter on scheduleParameterBlock (or even on the underlying AudioUnitScheduleParameters) simply doesn't do anything, which is very odd for such a mature API.

The bottom line is that you can only set a parameter value within a single buffer, not at the sample level. Setting a parameter value at a given sample time automatically ramps from the current value to the new value by the end of the containing buffer, and there seems to be no way to disable this automatic ramping.
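To make that concrete, here's a rough sketch of a single scheduled change. The parameter address is an assumption (the built-in units don't document their addresses, so you'd have to dig the right one out of the parameterTree), and `engine` is the running AVAudioEngine from the question:

```swift
import AVFoundation

let mixerAU = engine.mainMixerNode.auAudioUnit
// Assumption: take the first parameter in the tree as the gain; in practice,
// inspect parameterTree?.allParameters to find the one you actually want.
let gainAddress = mixerAU.parameterTree?.allParameters.first?.address ?? 0

// Schedule a value of 0.5 a little ahead of "now" on the render timeline. The
// unit ramps from its current value to 0.5 by the end of whichever buffer
// contains that sample time; the second argument (ramp frames) seems to be ignored.
let now = AUEventSampleTime(engine.mainMixerNode.lastRenderTime?.sampleTime ?? 0)
mixerAU.scheduleParameterBlock(now + 4_096, 0, gainAddress, 0.5)
```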

Longer fades have to be done in sections, by setting fractional values across multiple buffers and keeping track of the fade's relative progress. In practice, for normal-duration fades this timing discrepancy is unlikely to be a problem, because sample accuracy would be overkill.
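A rough sketch of that bookkeeping, building on the snippet above and assuming a 512-frame render buffer at 44.1 kHz:

```swift
// Fade the (assumed) gain parameter from 1.0 to 0.0 over one second, scheduling
// one value per render buffer; the unit's within-buffer ramping joins the steps.
let framesPerBuffer: AUAudioFrameCount = 512
let fadeFrames = 44_100                                    // 1 second at 44.1 kHz
let steps = fadeFrames / Int(framesPerBuffer)
let start = AUEventSampleTime(engine.mainMixerNode.lastRenderTime?.sampleTime ?? 0)

for step in 0...steps {
    let progress = AUValue(step) / AUValue(steps)
    let when = start + AUEventSampleTime(step * Int(framesPerBuffer))
    mixerAU.scheduleParameterBlock(when, framesPerBuffer, gainAddress, 1.0 - progress)
}
```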

So to sum up: sample-level parameter changes seem to be impossible, but buffer-level parameter changes are easy. If you need very short fades (within a single buffer, or across a couple of buffers), they can be done at the sample level by manipulating the individual samples yourself, e.g. via an AURenderCallback.
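For completeness, here is a sketch of the per-sample idea. Rather than a full AURenderCallback, it simply applies the ramp to the PCM buffer before the buffer is handed to the player node, which gives the same sample-level control for a short fade:

```swift
import AVFoundation

// Apply a linear fade-out across an entire buffer, touching every sample.
func applyFadeOut(to buffer: AVAudioPCMBuffer) {
    guard let channels = buffer.floatChannelData else { return }   // float PCM only
    let frames = Int(buffer.frameLength)
    for ch in 0..<Int(buffer.format.channelCount) {
        for frame in 0..<frames {
            let gain = 1.0 - Float(frame) / Float(frames)          // ramp 1 -> 0
            channels[ch][frame] *= gain
        }
    }
}

// Usage (hypothetical): fade the tail buffer before scheduling it on the player.
// applyFadeOut(to: lastBuffer)
// player.scheduleBuffer(lastBuffer, completionHandler: nil)
```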

martinjbaker