It's possible to dynamically change your graph layout by using .connect() and .disconnect(), even while audio is playing or being sent through a stream (which could even be streamed over WebRTC). I couldn't find an explicit statement about this in the spec, so I assume it's simply taken for granted.
For example, if you have two AudioBufferSourceNodes bufferSource1 and bufferSource2, and a MediaStreamAudioDestinationNode streamDestination:
bufferSource1.connect(streamDestination);
// do some other things here, and after some time, switch to bufferSource2:
// (streamDestination doesn't need to be explicitly specified in the disconnect() call)
bufferSource1.disconnect(streamDestination);
bufferSource2.connect(streamDestination);
Example in action.
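For reference, here's a minimal, self-contained version of that switch. Everything apart from the connect/disconnect calls (the tone buffers, looping sources, and the Audio element playing the stream) is my own filler to make the sketch runnable, not part of the original example:

const audioCtx = new AudioContext();

// Filler helper: a short looping sine-tone buffer, just to have something audible.
function makeToneBuffer(frequency) {
  const length = audioCtx.sampleRate * 2; // 2 seconds
  const buffer = audioCtx.createBuffer(1, length, audioCtx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < length; i++) {
    data[i] = Math.sin(2 * Math.PI * frequency * i / audioCtx.sampleRate);
  }
  return buffer;
}

const bufferSource1 = audioCtx.createBufferSource();
bufferSource1.buffer = makeToneBuffer(440);
bufferSource1.loop = true;

const bufferSource2 = audioCtx.createBufferSource();
bufferSource2.buffer = makeToneBuffer(660);
bufferSource2.loop = true;

const streamDestination = audioCtx.createMediaStreamDestination();

// Play the stream through an Audio element (it could just as well go into a WebRTC connection).
// Note: browsers may require a user gesture before audio actually starts.
const audio = new Audio();
audio.srcObject = streamDestination.stream;
audio.play();

bufferSource1.connect(streamDestination);
bufferSource1.start();
bufferSource2.start();

// After two seconds, rewire the graph while everything keeps playing.
setTimeout(() => {
  bufferSource1.disconnect(streamDestination);
  bufferSource2.connect(streamDestination);
}, 2000);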
Edit 1:
Proper implementation:
According to the Editor's Draft of the Audio Output API, it is planned that you will also be able to choose a custom audio output device for the AudioContext (by means of new AudioContext({ sinkId: requestedSinkId });). I couldn't find any info on the progress, and even found a related discussion which the asker apparently has already read. According to this and (many) other references, it doesn't seem to be an easy task, but it's planned for Web Audio API v1.
Edit:
That section has been removed from the API Draft, but you can still find it in an older version.
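For completeness, here's a sketch of how that constructor option might be used. The enumerateDevices() part is the standard MediaDevices API, but the sinkId option itself is only what the draft proposed, so treat this as an assumption rather than something that works today:

async function routeToFirstOutputDevice() {
  // List available audio output devices (device labels typically require
  // prior permission via getUserMedia in a secure context).
  const devices = await navigator.mediaDevices.enumerateDevices();
  const output = devices.find(d => d.kind === 'audiooutput');

  // As described in the draft: hand the chosen device id to the AudioContext.
  // (Assumption: this only works in browsers that implement the proposal.)
  return new AudioContext({ sinkId: output.deviceId });
}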
Current workaround:
I played around with your workaround (using a MediaStreamAudioDestinationNode and an Audio object), and the issue seems to be related to nothing being connected. I modified my example to toggle a single buffer (similar to your example, but with an AudioBufferSourceNode), and observed a similar frequency drop. However, when using a GainNode in between and setting its gain.value to either 0 or 1, the frequency drops disappeared (this won't be a solution if you want to create and connect new AudioBuffers dynamically).
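A minimal sketch of that GainNode workaround, under the assumption that the buffer is set up elsewhere; setAudible is just an illustrative helper name:

const audioCtx = new AudioContext();
const bufferSource = audioCtx.createBufferSource();
// bufferSource.buffer = ...; // assumed to be set elsewhere
bufferSource.loop = true;

const gainNode = audioCtx.createGain();
const streamDestination = audioCtx.createMediaStreamDestination();

// Keep the chain connected at all times; only the gain value changes.
bufferSource.connect(gainNode);
gainNode.connect(streamDestination);
bufferSource.start();

// Mute/unmute instead of disconnecting/reconnecting, which is what
// avoided the frequency drops in my tests.
function setAudible(on) {
  gainNode.gain.value = on ? 1 : 0;
}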