Since the old Web Audio ScriptProcessorNode has been deprecated since 2014 and AudioWorklets arrived in Chrome 64, I decided to give those a try. However, I'm having difficulties porting my application. I'll give two examples from a nice article to illustrate my point.
First, the ScriptProcessorNode way:
var node = context.createScriptProcessor(1024, 1, 1);
node.onaudioprocess = function (e) {
    var output = e.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; i++) {
        output[i] = Math.random();
    }
};
node.connect(context.destination);
Another one that fills a buffer and then plays it:
var node = context.createBufferSource(),
    buffer = context.createBuffer(1, 4096, context.sampleRate),
    data = buffer.getChannelData(0);
for (var i = 0; i < 4096; i++) {
    data[i] = Math.random();
}
node.buffer = buffer;
node.loop = true;
node.connect(context.destination);
node.start(0);
The big difference between the two is that the first one fills the buffer with new data during playback, while the second one generates all the data beforehand.
Since I generate a lot of data, I can't do it all beforehand. There are a lot of examples for the AudioWorklet, but they all use other source nodes, on which one can just call .start(), connect them, and they'll start generating audio. I can't wrap my head around a way to do this when I don't have such a method.
So my question basically is: how do I do the above example with an AudioWorklet, when the data is generated continuously on the main thread in some array and the playback of that data happens on the Web Audio thread?
I've been reading about the MessagePort mechanism, but I'm not sure that's the way to go either; the examples don't point me in that direction, I'd say. What I might need is the proper way to provide the process function in the AudioWorkletProcessor-derived class with my own data.
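For what it's worth, this is roughly how I picture the processor side: a processor that queues Float32Array chunks arriving over its port and drains them in process(). The file name, class name and the 'queue-processor' registration name are all made up by me to illustrate the idea; I have no idea whether this is the intended pattern:

// queue-processor.js - just my sketch, not taken from any example
class QueueProcessor extends AudioWorkletProcessor {
    constructor() {
        super();
        this.queue = [];      // chunks posted by the main thread
        this.readOffset = 0;  // read position inside the current chunk
        this.port.onmessage = (event) => {
            this.queue.push(event.data); // event.data is a Float32Array
        };
    }

    process(inputs, outputs) {
        const output = outputs[0][0]; // first output, first channel (128 frames)
        for (let i = 0; i < output.length; i++) {
            if (this.queue.length === 0) {
                output[i] = 0; // underrun: output silence until new data arrives
                continue;
            }
            output[i] = this.queue[0][this.readOffset++];
            if (this.readOffset >= this.queue[0].length) {
                this.queue.shift();
                this.readOffset = 0;
            }
        }
        return true; // keep the processor alive even while the queue is empty
    }
}

registerProcessor('queue-processor', QueueProcessor);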
My current ScriptProcessorNode-based code is on GitHub, specifically in vgmplay-js-glue.js.
I've been adding some code to the constructor of the VGMPlay_WebAudio class, moving from the examples towards the actual result, but as I said, I don't know in which direction to move now.
constructor() {
    super();
    this.audioWorkletSupport = false;
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    this.context = new AudioContext();
    this.destination = this.destination || this.context.destination;
    this.sampleRate = this.context.sampleRate;
    if (this.context.audioWorklet && typeof this.context.audioWorklet.addModule === 'function') {
        this.audioWorkletSupport = true;
        console.log("Audioworklet support detected, don't use the old scriptprocessor...");
        this.context.audioWorklet.addModule('bypass-processor.js').then(() => {
            this.oscillator = new OscillatorNode(this.context);
            this.bypasser = new AudioWorkletNode(this.context, 'bypass-processor');
            this.oscillator.connect(this.bypasser).connect(this.context.destination);
            this.oscillator.start();
        });
    } else {
        this.node = this.context.createScriptProcessor(16384, 2, 2);
    }
}
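If the MessagePort route is indeed the right one, I imagine the AudioWorklet branch of this constructor would then feed the generated chunks through the node's port instead of wiring up an oscillator. Again, just a sketch with my own made-up module and processor names:

this.context.audioWorklet.addModule('queue-processor.js').then(() => {
    this.node = new AudioWorkletNode(this.context, 'queue-processor');
    this.node.connect(this.context.destination);
});

// ... later, whenever a new chunk has been generated on the main thread:
const chunk = new Float32Array(4096);
for (let i = 0; i < chunk.length; i++) {
    chunk[i] = Math.random();
}
// transfer the underlying buffer so it is moved, not copied, to the audio thread
this.node.port.postMessage(chunk, [chunk.buffer]);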