
I would like to write a program that "arranges" multiple audio buffers. Similar to a DAW, I want to "layer" audio tracks on top of each other at custom timestamps. Is this even possible, and if so, how would I implement it?

I apologize in advance for this extremely general question, but I couldn't find any good resources on this topic. Thanks for the help!

Kento Nishi

1 Answer


Yes, this is possible.

First, load your audio data into AudioBuffer instances.

Next, you want to create an AudioContext, which is the root of a graph of connected nodes that your audio flows through.
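
Putting those first two steps together, here is a minimal sketch (the file names `drums.wav` and `vocals.wav` are hypothetical; use whatever your app serves). `decodeAudioData()` lives on the context, so the context is created first:

```
const ctx = new AudioContext();

async function loadBuffer(url) {
  const response = await fetch(url);
  const encoded = await response.arrayBuffer();
  // Decodes the encoded file (WAV, MP3, ...) into a raw PCM AudioBuffer.
  return ctx.decodeAudioData(encoded);
}

// Top-level await works in module scripts; otherwise wrap this in an async function.
const [drums, vocals] = await Promise.all([
  loadBuffer('drums.wav'),
  loadBuffer('vocals.wav'),
]);
```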

Now, create an AudioBufferSourceNode for each AudioBuffer and connect it to the audio context's `destination`. This plugs a buffer player directly into the output.
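
Continuing the sketch above, a small helper does exactly that (the helper name is mine, not part of the API):

```
// One AudioBufferSourceNode per AudioBuffer, wired straight to the speakers.
// Source nodes are single-use: create a fresh one for every playback.
function makeSource(buffer) {
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  return source;
}
```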

From there, you can call `.start()` on your AudioBufferSourceNode instances to play them back immediately, or pass a time on the AudioContext's clock to schedule them to start at some point in the future. See the documentation for `.start()`: https://developer.mozilla.org/en-US/docs/Web/API/AudioBufferSourceNode/start
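
That scheduling is what gives you the DAW-style layering from the question. A sketch reusing `makeSource` from above, with arbitrary offsets chosen for illustration:

```
// Drums start immediately; vocals are layered in 2.5 seconds later.
// start(when) takes an absolute time on the context's clock, so offsets
// are added to currentTime.
const t0 = ctx.currentTime;
makeSource(drums).start(t0);
makeSource(vocals).start(t0 + 2.5);
```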

Brad
  • Is it possible to generate the result without "playing" the audio? I would like to export the AudioBuffer instead. – Kento Nishi Jul 11 '19 at 19:29
  • @KentoNishi Yes, an OfflineAudioContext is used for this, and can process your audio faster than realtime (see the sketch after these comments). https://developer.mozilla.org/en-US/docs/Web/API/OfflineAudioContext – Brad Jul 11 '19 at 19:30
  • How would offsetting specific audio tracks work in this implementation? I would like to choose when each track starts in the resulting audio. – Kento Nishi Jul 11 '19 at 19:31
  • @KentoNishi Read the documentation for `.start()` that I linked to... – Brad Jul 11 '19 at 19:33
  • Sorry! I didn't really read :P – Kento Nishi Jul 11 '19 at 20:39
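
Following up on the OfflineAudioContext comment, here is a minimal sketch of rendering the same arrangement to an AudioBuffer instead of the speakers. The channel count, length, and sample rate (30 seconds of stereo at 44.1 kHz) are assumptions; size them to fit your arrangement:

```
// Render offline: nothing is played out loud, and rendering can run
// faster than realtime.
const offline = new OfflineAudioContext(2, 44100 * 30, 44100);

for (const [buffer, when] of [[drums, 0], [vocals, 2.5]]) {
  const source = offline.createBufferSource();
  source.buffer = buffer;
  source.connect(offline.destination);
  source.start(when);  // scheduling works the same as with a live context
}

// startRendering() resolves with the mixed-down AudioBuffer.
const rendered = await offline.startRendering();
```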