
I've created a custom language (transcription) and acoustic model for Watson, and both are correctly tied to my voice agent. But how can I ensure they are used when the voice agent is reached via a phone call? Right now it seems to only use the base model and never invokes the custom models.

I've successfully created both models, language and acoustic, and both are based on the US narrowband language model. Using curl I can see them in the system, and when I invoke them manually they work. I just don't know how to ensure they are invoked when the voice agent is connected via a SIP phone call.
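For reference, the custom models mentioned above can be verified with the Speech to Text customizations endpoints; a sketch, assuming IAM API-key authentication and the Dallas endpoint (substitute your own region and instance URL):

```shell
# List custom language models registered for this service instance
curl -u "apikey:$APIKEY" \
  "https://api.us-south.speech-to-text.watson.cloud.ibm.com/v1/customizations"

# List custom acoustic models
curl -u "apikey:$APIKEY" \
  "https://api.us-south.speech-to-text.watson.cloud.ibm.com/v1/acoustic_customizations"
```

The `customization_id` and `acoustic_customization_id` values returned here are the IDs referenced in the answer below.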


1 Answer


I solved it. In the first dialog node of the Watson Assistant workspace, you pass config parameters as JSON. You specify the base model, the language customization ID, and the acoustic customization ID; example below.

```json
"vgwAction": {
  "command": "vgwActSetSTTConfig",
  "parameters": {
    "config": {
      "model": "en-US_NarrowbandModel",
      "customization_id": "{enter your unique language model ID}",
      "profanity_filter": true,
      "smart_formatting": true,
      "acoustic_customization_id": "{enter your unique acoustic model ID}",
      "x-watson-learning-opt-out": true
    }
  }
}
```
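For context, in Voice Gateway a `vgwAction` is placed inside the `output` object of the dialog node's JSON response. A minimal sketch of a complete node response (the greeting text is illustrative, not from the original answer):

```json
{
  "output": {
    "text": {
      "values": ["Hello, how can I help you?"]
    },
    "vgwAction": {
      "command": "vgwActSetSTTConfig",
      "parameters": {
        "config": {
          "model": "en-US_NarrowbandModel",
          "customization_id": "{enter your unique language model ID}",
          "acoustic_customization_id": "{enter your unique acoustic model ID}"
        }
      }
    }
  }
}
```

Because the action fires in the first dialog node, the custom models apply to every subsequent turn of the call.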