
Since the old Web Audio ScriptProcessorNode has been deprecated since 2014 and AudioWorklets arrived in Chrome 64, I decided to give those a try. However, I'm having difficulties porting my application. I'll give two examples from a nice article to illustrate my point.

First, the ScriptProcessor way:

var node = context.createScriptProcessor(1024, 1, 1);
node.onaudioprocess = function (e) {
  var output = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < output.length; i++) {
    output[i] = Math.random();
  }
};
node.connect(context.destination);

Another one that fills a buffer and then plays it:

var node = context.createBufferSource(),
    buffer = context.createBuffer(1, 4096, context.sampleRate),
    data = buffer.getChannelData(0);

for (var i = 0; i < 4096; i++) {
  data[i] = Math.random();
}

node.buffer = buffer;
node.loop = true;
node.connect(context.destination);
node.start(0);

The big difference between the two is that the first one fills the buffer with new data during playback, while the second one generates all the data beforehand.

Since I generate a lot of data, I can't do it beforehand. There are a lot of examples for the AudioWorklet, but they all use other nodes, on which one can just call .start(), connect them, and they'll start generating audio. I can't wrap my head around a way to do this when I don't have such a method.

So my question basically is: how do I do the above in an AudioWorklet, when the data is generated continuously on the main thread in some array and the playback of that data happens in the Web Audio thread?

I've been reading about the MessagePort thing, but I'm not sure that's the way to go either; the examples don't point me in that direction, I'd say. What I might need is the proper way to feed the process function of my AudioWorkletProcessor-derived class with my own data.

My current ScriptProcessor-based code is on GitHub, specifically in vgmplay-js-glue.js.

I've been adding some code to the constructor of the VGMPlay_WebAudio class, moving from the examples towards the actual result, but as I said, I don't know which direction to move in now.

constructor() {
    super();

    this.audioWorkletSupport = false;

    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    this.context = new AudioContext();
    this.destination = this.destination || this.context.destination;
    this.sampleRate = this.context.sampleRate;

    if (this.context.audioWorklet && typeof this.context.audioWorklet.addModule === 'function') {
        this.audioWorkletSupport = true;
        console.log("Audioworklet support detected, don't use the old scriptprocessor...");
        this.context.audioWorklet.addModule('bypass-processor.js').then(() => {
            this.oscillator = new OscillatorNode(this.context);
            this.bypasser = new AudioWorkletNode(this.context, 'bypass-processor');
            this.oscillator.connect(this.bypasser).connect(this.context.destination);
            this.oscillator.start();
        });
    } else {
        this.node = this.context.createScriptProcessor(16384, 2, 2);
    }
}
Niek
  • Can you post examples of what you have attempted? It is great to add the code from the docs, but more important to add the code you have attempted. Please see [How to Ask](https://stackoverflow.com/help/how-to-ask), read the [Tour](https://stackoverflow.com/tour), and especially read how to create a [Minimal, Complete, and Verifiable example](https://stackoverflow.com/help/mcve). – Tyler Feb 19 '18 at 21:25
  • Thank you for your response, Tyler. I've added some information. I can't add any more, since I don't know what to try next... – Niek Feb 19 '18 at 21:57

1 Answer


So my question basically is: how do I do the above in an AudioWorklet,

For your first example, there is already an AudioWorklet version of it: https://github.com/GoogleChromeLabs/web-audio-samples/blob/gh-pages/audio-worklet/basic/js/noise-generator.js

I do not recommend the second example (aka buffer stitching), because it creates lots of source nodes and buffers, which can trigger garbage collection that interferes with the other tasks on the main thread. A discontinuity can also occur at the boundary of two consecutive buffers if the scheduled start time does not fall exactly on a sample. That said, you won't be able to hear the glitch in this specific example because the source material is noise.

when the data is generated continuously on the main thread in some array and the playback of that data happens in the Web Audio thread.

The first thing you should do is separate the audio generator from the main thread. The audio generator must run in the AudioWorkletGlobalScope; that's the whole purpose of the AudioWorklet system - lower latency and better audio rendering performance.

In your code, VGMPlay_WebAudio.generateBuffer() should be called from the AudioWorkletProcessor.process() callback to fill the output buffer of the processor. That roughly matches what your onaudioprocess callback does.
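As a rough sketch of what such a processor could look like (the file name, the registered name, and the noise loop are placeholders standing in for your own generator code, not the actual VGMPlay API):

// noise-processor.js - runs in the AudioWorkletGlobalScope
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    const output = outputs[0];            // first output, one Float32Array per channel
    for (let channel = 0; channel < output.length; channel++) {
      const samples = output[channel];    // usually 128 frames per call
      for (let i = 0; i < samples.length; i++) {
        // this is where something like generateBuffer() would produce your data;
        // noise is used here only to mirror the original example
        samples[i] = Math.random() * 2 - 1;
      }
    }
    return true;                          // keep the processor alive
  }
}
registerProcessor('noise-processor', NoiseProcessor);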

I've been reading about the MessagePort thing, but I'm not sure that's the way to go either; the examples don't point me in that direction, I'd say. What I might need is the proper way to feed the process function of my AudioWorkletProcessor-derived class with my own data.

I don't think your use case requires MessagePort. I've seen other methods in the code, but they really don't do much other than starting and stopping the node. That can be done by connecting and disconnecting the AudioWorkletNode on the main thread; no cross-thread messaging is necessary.
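For example, a minimal main-thread setup could look like this (using the placeholder names from the sketch above):

const context = new AudioContext();
context.audioWorklet.addModule('noise-processor.js').then(() => {
  const node = new AudioWorkletNode(context, 'noise-processor');
  node.connect(context.destination);   // "start": the processor begins rendering
  // node.disconnect();                // "stop": detach it from the graph again
});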

The code example at the end of your question can serve as the setup for the AudioWorklet. I am well aware that separating the setup from the actual audio generation can be tricky, but it will be worth it.

A few questions for you:

  1. How does the game graphics engine send messages to the VGM generator?
  2. Can the VGMPlay class live on the worker thread without any interaction with the main thread? I don't see any interaction in the code except for starting and stopping.
  3. Is XMLHttpRequest essential to the VGMPlay class? Or can that be done somewhere else?
hoch
  • Thanks for the response Hongchan!! The noise generator seems different to me because it uses the start method of the oscillator node in the main thread; part of my problem is that I don't know how to start up the whole worklet thing without having a source node like that. Could you please tell me how I would do that? That's why I asked it the way I did: just create random noise in some array in the main thread without any node; how would I send that to the worklet? I guess the oscillator does it similarly: create the sound in the main thread and then send it to the worklet, right? – Niek Feb 21 '18 at 21:07
  • The callback thing is interesting, but how would I provide the generate function to the process function in the worklet? – Niek Feb 21 '18 at 21:09
  • I guess your point is valid; the whole audio generation might live in the worker thread. The only issue is the VGM file itself, which contains the actual commands to be fed into the emulated sound chips; the result will be music. :) I load those through XMLHttpRequest and put them in the Emscripten file system emulator. Where the C code contains fopen, the transpiled JavaScript will also just open the file, so no modification of the original C code is required. So I'm not sure I can do that in the worklet thread? – Niek Feb 21 '18 at 21:16
  • So, the file system thing is difficult, and the speed of the data generation on the main thread is not an issue (it even works fine in Chrome on one of my not-that-fast Android phones). Let's say I want to keep it that way and just use the worklet instead of the ScriptProcessor: do you think that will give me lower CPU usage, or any other advantage besides not using deprecated technology? – Niek Feb 21 '18 at 21:26
  • Questions 2 & 3 are already answered then; the answer to 1 is that there is no graphics engine. The VGM format is basically just a log of all instructions sent to a sound chip, with a header. The main challenge is to extract those commands from a game; full computer emulators are a nice way to do that, just save the data that gets sent to the emulated chip. – Niek Feb 21 '18 at 21:31
  • You do not need to add a source node to make the noise generator work. The oscillator node in the example is simply there for amplitude modulation. An AudioWorkletNode will run just like a ScriptProcessorNode once you connect its output to the context destination. – hoch Feb 21 '18 at 22:34
  • I am not familiar with the VGM project, so please enlighten me: how do you load and run the VGM code? Is it WASM? The fundamental point is that you have to load the buffer generation code into the AudioWorkletGlobalScope, and that can be done via `context.audioWorklet.addModule()`. For starters, you can inline all the WASM code in the AudioWorkletGlobalScope. Then you can compile/instantiate from the WASM code. In the early days of AudioWorklet experiments, some developers inlined everything in the AWGS because we did not have the MessagePort back then. – hoch Feb 21 '18 at 22:35
  • I have not tried XMLHttpRequest in the AWGS, but fetch() is currently disabled in that scope. Accessing the network stack can be a blocking operation, so it can interfere with the audio rendering task. Personally I think XHR should not be allowed for the same reason. So that needs to be done on the main thread and sent via MessagePort (see the sketch after these comments). – hoch Feb 21 '18 at 22:44
  • I think the benefit of using AudioWorklet would be a "less janky" main thread. ScriptProcessorNode taps the main thread, and that can be a major stress for the other tasks there (UI, DOM, etc.). With this clean separation you get a smooth main thread and glitch-free audio rendering, but it comes with a bit of rearranging of the code. – hoch Feb 21 '18 at 22:48
  • Well, I guess I misunderstood the goal of this project. So this is only to play back the music? So there is no "frequent interaction" between the graphics engine and the audio engine? If all you have to deal with is starting and stopping the music, that simplifies the problem significantly. Why don't you create a PR in your repo so I can take a look at it? – hoch Feb 21 '18 at 22:51
  • The more I understand your use case, the more I realize this is a design problem. Since every component is tied to the main thread, separating them into two scopes/threads will be 90% of the work. – hoch Feb 21 '18 at 22:52
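As a rough illustration of the MessagePort route mentioned in the comments, this sketch fetches the file on the main thread and transfers it to the processor. The file name, the registered processor name, and the message shape are all made up for illustration; they are not taken from VGMPlay.

// main thread (assumes context.audioWorklet.addModule('vgm-processor.js') has already resolved)
fetch('music.vgm')                                  // placeholder file name
  .then((response) => response.arrayBuffer())
  .then((vgmData) => {
    const node = new AudioWorkletNode(context, 'vgm-processor');
    node.port.postMessage(vgmData, [vgmData]);      // transfer the ArrayBuffer instead of copying it
    node.connect(context.destination);
  });

// vgm-processor.js - runs in the AudioWorkletGlobalScope
class VGMProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.vgmData = null;
    this.port.onmessage = (event) => {
      this.vgmData = event.data;                    // the transferred ArrayBuffer
    };
  }
  process(inputs, outputs) {
    // render from this.vgmData here once it has arrived; output silence until then
    return true;
  }
}
registerProcessor('vgm-processor', VGMProcessor);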