I have a dialog agent created through DialogFlow. I want to have a conversation with this agent on a Google Home device.
The problem:
The Dialogflow API (e.g., via dialogflow-nodejs-client-v2) gives full access to agents built in Dialogflow. Most importantly, users can interact with the agent through either text input or speech input (as a .wav file or an audio stream). When you send a request to the Dialogflow agent (e.g., detect intent from audio), it returns a response object that crucially includes a "speechRecognitionConfidence" value.
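For example, here is a minimal sketch of a direct detect-intent-from-audio call with the Node.js client; the project ID, session ID, file name, and audio settings are placeholders, but the response does expose the ASR confidence:

```javascript
// Sketch only: assumes a 16 kHz LINEAR16 .wav file and default credentials.
const dialogflow = require('dialogflow');
const fs = require('fs');

const projectId = 'my-project-id';   // placeholder: your GCP project ID
const sessionId = 'my-session-id';   // placeholder: any unique session string

const sessionClient = new dialogflow.SessionsClient();
const sessionPath = sessionClient.sessionPath(projectId, sessionId);

const request = {
  session: sessionPath,
  queryInput: {
    audioConfig: {
      audioEncoding: 'AUDIO_ENCODING_LINEAR_16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
  },
  inputAudio: fs.readFileSync('input.wav'),  // placeholder audio file
};

sessionClient
  .detectIntent(request)
  .then(responses => {
    const result = responses[0].queryResult;
    console.log('Transcript:', result.queryText);
    // This is the field I need:
    console.log('ASR confidence:', result.speechRecognitionConfidence);
  })
  .catch(err => console.error('detectIntent failed:', err));
```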
But! When interacting with the dialogue agent through a Google Assistant app, the request object sent to my webhook is missing the "speechRecognitionConfidence" value (see the webhook sketch after this list). This means that:
- I don't have the input audio
- I don't have the ASR confidence
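A minimal Express webhook illustrating what I see (the route and port are my own placeholders): when the request is routed through the Assistant, the confidence field simply isn't populated, and there is no audio payload to fall back on.

```javascript
// Sketch only: a bare-bones Dialogflow fulfillment webhook.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  const result = req.body.queryResult || {};
  console.log('Transcript:', result.queryText);
  // Comes through undefined for requests arriving via the Google Assistant:
  console.log('ASR confidence:', result.speechRecognitionConfidence);

  res.json({ fulfillmentText: 'Webhook received the request.' });
});

app.listen(8080, () => console.log('Webhook listening on port 8080'));
```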
Questions:
- Is it possible to send the ASR confidence (and any other useful info) to a webhook?
- Is there another way to access the ASR confidence (i.e., by making an API call)?
- Is there a way to run a program built using the Dialogflow API on a Google Home (or through the Google Assistant)?
Thank you in advance for any help. I've been struggling through endless documentation without success.