
I'm working on an app that starts a number of audio loops simultaneously and needs to keep them in sync.

Using previous, naive approaches to the problem (not using AVAudioEngine), I found that programmatically starting a number of audio players in sequence introduced enough delay between calls to render the results useless; the beats were audibly out of sync.

Can I achieve this kind of functionality using AVAudioEngine?

Currently I'm wiring up AVAudioPlayerNodes to a mixer, attaching buffers to them, and controlling playback from there. But can I have them all start simultaneously?

It seems the nodes don't start producing sound until I call play, and that can't be done before the engine is started...
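
For reference, a stripped-down sketch of the setup I'm describing - loopBufferA and loopBufferB stand in for my preloaded AVAudioPCMBuffers, and the player names are just placeholders:

    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    AVAudioPlayerNode *playerA = [[AVAudioPlayerNode alloc] init];
    AVAudioPlayerNode *playerB = [[AVAudioPlayerNode alloc] init];

    [engine attachNode:playerA];
    [engine attachNode:playerB];

    // wire every player into the engine's mixer
    [engine connect:playerA to:engine.mainMixerNode format:loopBufferA.format];
    [engine connect:playerB to:engine.mainMixerNode format:loopBufferB.format];

    // schedule the looping buffers - nothing sounds until play is called
    [playerA scheduleBuffer:loopBufferA atTime:nil options:AVAudioPlayerNodeBufferLoops completionHandler:nil];
    [playerB scheduleBuffer:loopBufferB atTime:nil options:AVAudioPlayerNodeBufferLoops completionHandler:nil];

    NSError *error = nil;
    [engine startAndReturnError:&error];

    // this is the problem: calling play one node at a time lets them drift apart
    [playerA play];
    [playerB play];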

Darren Black

3 Answers


For an even more in-depth explanation, have a look at my answer to

AVAudioEngine multiple AVAudioInputNodes do not play in perfect sync


In short:

If your engine is already running, you have a lastRenderTime property on AVAudioNode - your player's superclass. This is your ticket to 100% sample-frame-accurate sync...

// the player's output format gives us the sample rate to schedule against
AVAudioFormat *outputFormat = [playerA outputFormatForBus:0];

const float kStartDelayTime = 0.0; // seconds - in case you wanna delay the start

// anchor the shared start time to the node's last render time (in sample frames)
AVAudioFramePosition startSampleTime = playerA.lastRenderTime.sampleTime;

AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:(startSampleTime + (kStartDelayTime * outputFormat.sampleRate)) atRate:outputFormat.sampleRate];

// every player handed the same AVAudioTime starts on the same sample frame
[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];

[player...
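
Note that lastRenderTime is only meaningful while the engine is actually rendering. If you want to be defensive about it, something like this sketch should do - engine here stands in for your AVAudioEngine instance:

    AVAudioTime *renderTime = playerA.lastRenderTime;
    if (renderTime == nil || !renderTime.sampleTimeValid) {
        // the engine isn't rendering yet - start it before building the start time
        NSError *error = nil;
        [engine startAndReturnError:&error];
        renderTime = playerA.lastRenderTime;
    }
    AVAudioFramePosition startSampleTime = renderTime.sampleTime;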

By the way - you can achieve the same 100% sample-frame accurate result with the AVAudioPlayer class...

NSTimeInterval startDelayTime = 0.0; // seconds - in case you wanna delay the start
NSTimeInterval now = playerA.deviceCurrentTime;

NSTimeInterval startTime = now + startDelayTime;

[playerA playAtTime: startTime];
[playerB playAtTime: startTime];
[playerC playAtTime: startTime];
[playerD playAtTime: startTime];

[player...

With no startDelayTime, the first 100-200 ms of every player gets clipped, because the start command takes its time to reach the run loop even though the players have already been scheduled 100% in sync at now. With a startDelayTime of 0.25 you are good to go. And never forget to prepareToPlay your players in advance, so that no additional buffering or setup has to be done at start time - just start them ;-)
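
Putting it together, a minimal sketch of the AVAudioPlayer variant - the 0.25 lead and the player names are just example values:

    // prepare the players up front so playAtTime: doesn't have to buffer anything
    [playerA prepareToPlay];
    [playerB prepareToPlay];

    NSTimeInterval startDelayTime = 0.25; // gives the run loop time to issue all start commands
    NSTimeInterval startTime = playerA.deviceCurrentTime + startDelayTime;

    [playerA playAtTime:startTime];
    [playerB playAtTime:startTime];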

mramosch

The right way to do it would be to use a mixer audio unit. Minimally, you would have a graph with a mixer and a RemoteIO unit.

Create a mixer with two inputs. The pull architecture of the iOS audio system will play your two audio buffers simultaneously, as sketched below.
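
If you take this route, here is a rough sketch of such a graph using the AUGraph API. RenderLoopCallback and setUpLoopGraph are just placeholder names, and the callback body is left for you to fill from your own loop buffers:

    #import <AudioToolbox/AudioToolbox.h>

    // placeholder render callback: copy the next inNumberFrames of the loop
    // assigned to inBusNumber into ioData
    static OSStatus RenderLoopCallback(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData) {
        return noErr;
    }

    void setUpLoopGraph(void) {
        AUGraph graph;
        NewAUGraph(&graph);

        AudioComponentDescription mixerDesc = { .componentType = kAudioUnitType_Mixer,
                                                .componentSubType = kAudioUnitSubType_MultiChannelMixer,
                                                .componentManufacturer = kAudioUnitManufacturer_Apple };
        AudioComponentDescription ioDesc = { .componentType = kAudioUnitType_Output,
                                             .componentSubType = kAudioUnitSubType_RemoteIO,
                                             .componentManufacturer = kAudioUnitManufacturer_Apple };

        AUNode mixerNode, ioNode;
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);

        // mixer output bus 0 feeds the RemoteIO unit
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        // two input buses, one per loop - both get pulled in the same render cycle
        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
        UInt32 busCount = 2;
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

        for (UInt32 bus = 0; bus < busCount; bus++) {
            AURenderCallbackStruct cb = { .inputProc = RenderLoopCallback, .inputProcRefCon = NULL };
            AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);
        }

        AUGraphInitialize(graph);
        AUGraphStart(graph);
    }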

jaybers

I achieved the effect I was looking for by looping through the AVAudioPlayerNode instances which needed to be in sync, and for each one:

  1. Calling 'play'
  2. Scheduling the appropriate buffer to start at some time in the future

The code looks something like this:

    for (AVAudioPlayerNode *playerNode in playerNodes) {
        // start the node now, but schedule its loop for a shared time in the near future
        [playerNode play];
        [playerNode scheduleBuffer:loopBufferForThisPlayer atTime:startTime options:AVAudioPlayerNodeBufferLoops completionHandler:nil];
    }

The time specified by startTime is a point in the near future (300 ms or so), to ensure all player nodes are ready to play. startTime must be based on the host time; base it on sample time and things will be out of sync.
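
Something along these lines builds a host-time-based start time roughly 300 ms out (the 0.3 s lead is just an example value):

    #import <mach/mach_time.h>

    // anchor the start time to host time, ~300 ms in the future
    uint64_t hostTimeNow = mach_absolute_time();
    uint64_t hostTimeLead = [AVAudioTime hostTimeForSeconds:0.3];
    AVAudioTime *startTime = [AVAudioTime timeWithHostTime:hostTimeNow + hostTimeLead];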

All these nodes were attached to an AVAudioMixerNode as described in the question.

I'm not keen on using a delay in this way; I'd prefer it if my nodes told me when they were ready to go... but this works.

Darren Black