
I'm writing an application where I need to play parts of audio files. Each audio file contains the audio data for a separate track. These parts are sections with a begin time and an end time, and I'm trying to play those sections in the order I choose.

So for example, imagine I have 4 sections:

A - B - C - D

and I activate B and D, I want to play B, then D, then B again, then D, and so on.

To make smooth "jumps" in playback, I think it's important to fade in/out the buffers at the start/end of each section.
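The fade itself is just multiplying sample values by a gain factor. A simplified sketch of what I do, assuming a plain linear ramp (applyLinearFade is an illustrative name, and my actual fade shape may differ):

import AVFoundation

// Simplified manual fade: multiply each sample by a linear gain ramp
// (0 -> 1 for a fade-in, 1 -> 0 for a fade-out).
func applyLinearFade(to buffer: AVAudioPCMBuffer, fadeIn: Bool) {
    guard let channels = buffer.floatChannelData, buffer.frameLength > 1 else { return }
    let frameCount = Int(buffer.frameLength)
    for channel in 0..<Int(buffer.format.channelCount) {
        let samples = channels[channel]
        for frame in 0..<frameCount {
            let gain = Float(frame) / Float(frameCount - 1)
            samples[frame] *= fadeIn ? gain : 1 - gain
        }
    }
}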

So, I have a basic AVAudioEngine setup with an AVAudioPlayerNode and a mixer. For each audio section, I cache some information (sketched just after this list):

  • a buffer holding the first samples of the section (which I fade in manually)
  • a tuple holding the AVAudioFramePosition and AVAudioFrameCount of the middle segment
  • a buffer holding the last samples of the section (which I fade out manually)
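Concretely, the cache for one section boils down to something like this (the struct and property names are just for illustration):

struct AudioSection {
    let startBuffer: AVAudioPCMBuffer               // first samples, faded in manually
    let middleSegment: (start: AVAudioFramePosition,
                        count: AVAudioFrameCount)   // untouched middle part, played from file
    let endBuffer: AVAudioPCMBuffer                 // last samples, faded out manually
}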

Now, when I schedule a section for playback, I tell the AVAudioPlayerNode to:

  • schedule the start buffer (scheduleBuffer(_:completionHandler:), no options)
  • schedule the middle segment (scheduleSegment(_:startingFrame:frameCount:at:completionHandler:))
  • finally, schedule the end buffer (scheduleBuffer(_:completionHandler:), no options)

all at a nil time (see the sketch below).
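Roughly, the calls look like this (player is my AVAudioPlayerNode, file is the section's AVAudioFile, and section holds the cached values described above):

player.scheduleBuffer(section.startBuffer, completionHandler: nil)
player.scheduleSegment(file,
                       startingFrame: section.middleSegment.start,
                       frameCount: section.middleSegment.count,
                       at: nil,
                       completionHandler: nil)
player.scheduleBuffer(section.endBuffer, completionHandler: nil)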

The problem is that I can hear clicks and other artifacts at section boundaries, and I can't see what I'm doing wrong. My first suspect was the manual fades (basically multiplying sample values by a volume factor), but I get the same result without them. I then thought I wasn't scheduling in time, but scheduling sections in advance (A - B - C beforehand, for example) gives the same result. I also tried different frame position computations based on the audio format's settings; same result.

So I'm out of ideas here; perhaps I haven't understood the scheduling mechanism correctly.

Can anyone confirm that I can mix scheduling buffers and segments on an AVAudioPlayerNode? Or should I schedule only buffers, or only segments? I can confirm that scheduling only segments works; playback is perfectly fine.

A little context on how I cache the information for audio sections. In the code below, file is an AVAudioFile loaded from a URL on disk, and begin and end are TimeInterval values representing the start/end of my audio section.

let format = file.processingFormat

let startBufferFrameCount: AVAudioFrameCount = 4096
let endBufferFrameCount: AVAudioFrameCount = 4096

// Convert the section's begin/end times to frame positions in the file.
let audioSectionStartFrame = framePosition(at: begin, format: format)
let audioSectionEndFrame = framePosition(at: end, format: format)

// The middle segment is everything between the two edge buffers.
let segmentStartFrame = audioSectionStartFrame + AVAudioFramePosition(startBufferFrameCount)
let segmentEndFrame = audioSectionEndFrame - AVAudioFramePosition(endBufferFrameCount)

// AVAudioPCMBuffer's initializer is failable; force-unwrapped here for brevity.
startBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: startBufferFrameCount)!
endBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: endBufferFrameCount)!

// Read the first samples of the section into the start buffer
// (read(into:) fills the buffer up to its frameCapacity)...
file.framePosition = audioSectionStartFrame
try file.read(into: startBuffer)

// ...and the last samples into the end buffer.
file.framePosition = segmentEndFrame
try file.read(into: endBuffer)

middleSegment = (segmentStartFrame, AVAudioFrameCount(segmentEndFrame - segmentStartFrame))
frameCount = AVAudioFrameCount(audioSectionEndFrame - audioSectionStartFrame)

Also, the framePosition(at:format:) function simply multiplies the TimeInterval value by the sample rate of the AVAudioFormat passed in.
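In other words, something like:

func framePosition(at time: TimeInterval, format: AVAudioFormat) -> AVAudioFramePosition {
    // Seconds times sample rate gives the frame index in the file.
    return AVAudioFramePosition(time * format.sampleRate)
}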

I cache this information for every audio section, but I hear clicks at section boundaries no matter whether I schedule the sections in advance or not. I also tried not mixing buffers and segments when scheduling, but it doesn't change anything, so I'm starting to think my frame computations are wrong.

Vince
