The `SpeechSynthesisUtterance` interface provides two options for setting which voice gets used: `voice` and `lang`. `lang` takes a language code like `en-US` or `es-ES`. `voice` takes a `SpeechSynthesisVoice` object that you get from `speechSynthesis.getVoices()`.
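For context, here is a minimal sketch of how I'm pairing the two, assuming a browser environment; `pickVoiceForLang` is just a helper I wrote, not part of the Web Speech API:

```javascript
// Pick the first voice whose lang matches the requested language tag,
// falling back to a voice that shares the primary language subtag.
// voices: an array of SpeechSynthesisVoice-like objects with a `lang` property.
function pickVoiceForLang(voices, lang) {
  return voices.find(v => v.lang === lang) ||
         voices.find(v => v.lang.startsWith(lang.split('-')[0])) ||
         null;
}

// Browser usage (sketch):
// const utterance = new SpeechSynthesisUtterance('Hola');
// const voice = pickVoiceForLang(speechSynthesis.getVoices(), 'es-ES');
// if (voice) {
//   utterance.voice = voice;
//   utterance.lang = voice.lang; // keep the two consistent
// }
// speechSynthesis.speak(utterance);
```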
If both are unset, the browser's default voice is used. If only `lang` is unset, the provided `voice` is used as-is. If only `voice` is unset, the browser finds a `SpeechSynthesisVoice` that matches `lang`.
If both are set but they disagree about which voice should play, the `lang` setting seems to be the overriding factor.
Do I need to set both? Will something go wrong if I only set `voice`?