
I'm using the Web Speech API and wondering if it is possible to have two instances of SpeechSynthesisUtterance() running at the same time such that the voices are layered on top of each other.

Abbreviating my current code: I essentially have two functions, each of which creates a new instance of SpeechSynthesisUtterance(), and then I call them both. However, the resulting speech alternates between the two instances, so if voice 1 is speaking "boom, chicka" and voice 2 is speaking "bow, wow", what I hear is "boom, bow, chicka, wow" rather than "boom + bow, chicka + wow".

function speak(text) {
  // Create a new instance of SpeechSynthesisUtterance with the given text.
  var msg = new SpeechSynthesisUtterance(text);
  // some code here where I define parameters like volume and pitch, which I left out

  window.speechSynthesis.speak(msg);
}

function speak2(text2) {
  // Create another new instance of SpeechSynthesisUtterance.
  var msg2 = new SpeechSynthesisUtterance(text2);
  // some code here where I define parameters like volume and pitch, which I left out

  window.speechSynthesis.speak(msg2);
}

speak(text);
speak2(text2);

1 Answer


The MDN documentation on window.speechSynthesis.speak() says:

The speak() method of the SpeechSynthesis interface adds an utterance to the utterance queue; it will be spoken when any other utterances queued before it have been spoken.

So I guess that's a no.
(If you want to get really into it, there is also the W3C spec, but it says the same thing.)
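You can see the queue for yourself. A minimal sketch, reusing the phrases from the question (the variable names are just for illustration):

// Two utterances, spoken back to back.
const u1 = new SpeechSynthesisUtterance('boom, chicka');
const u2 = new SpeechSynthesisUtterance('bow, wow');

// By the time u1 actually starts playing, u2 is still waiting in the queue.
u1.onstart = () => {
  console.log(window.speechSynthesis.pending); // true: u2 is queued, not speaking
};

window.speechSynthesis.speak(u1);
window.speechSynthesis.speak(u2);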

In the meantime, I am using an external TTS service that produces audio files; ordinary audio playback is not limited to one clip at a time, so the voices can overlap.
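For example, here is a minimal sketch of layering two pre-rendered clips; the file URLs are placeholders for whatever your TTS service returns:

// Two pre-rendered TTS clips (placeholder URLs).
const voice1 = new Audio('/tts/boom-chicka.mp3');
const voice2 = new Audio('/tts/bow-wow.mp3');

// Each Audio element plays independently, so starting both at once
// layers the voices instead of queuing them.
Promise.all([voice1.play(), voice2.play()])
  .catch((err) => console.error('Playback was blocked:', err));

Note that browsers may reject play() unless it is triggered by a user gesture, hence the catch.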
