
I have a Google Assistant recycling app in test whose introductory scene prompts the user for an item to be recycled. The app should tell the user how to dispose of the item. The introductory scene has 11 user intents. Nine of these process the input item and return a response specific to that item. #10 is a catch-all that fires if #1-9 fail to match; it calls a webhook that looks up the input item in a JSON array and returns a result. #11 is a Help intent.
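For context, the catch-all's webhook does little more than look the spoken item up in a static JSON array. A minimal sketch of that kind of handler is below, using the Node.js @assistant/conversation library; the handler name (lookup_item), the intent parameter name (item), and the sample data are placeholders rather than the actual names in my app.

    import { conversation } from '@assistant/conversation';

    // Placeholder disposal table; the real app loads a larger JSON array.
    const disposalTable: { item: string; advice: string }[] = [
      { item: 'wine bottle', advice: 'Rinse it out and put it in the glass bin.' },
      { item: 'battery', advice: 'Take it to a battery drop-off point.' },
    ];

    const app = conversation();

    // Catch-all handler (intent #10): look the spoken item up in the array.
    // 'lookup_item' and the 'item' parameter are placeholder names.
    app.handle('lookup_item', (conv) => {
      const spoken = String(conv.intent.params?.item?.resolved ?? '').toLowerCase();
      const match = disposalTable.find((row) => spoken.includes(row.item));
      if (match) {
        conv.add(match.advice);
      } else {
        conv.add(`Sorry, I don't know how to recycle ${spoken}. What else do you have?`);
      }
    });

    // app is the HTTP request handler; expose it however the project
    // deploys its fulfillment (e.g. as a Cloud Function).
    export { app };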

The nine item intents have between 11 and 51 training phrases each and should respond to prompts such as "What do you have" or "What is the item". The phrases include 10 variations along the lines of "I have ...", "It's a ...", or "A ...". In testing so far, input items are handled as intended with one exception: input that begins with "Some ...".

If "Some xxx" is input by keyboard or voice and the xxx in in a Type associated with one of the nine user intents, the input is processed correctly.

But if xxx is not in a Type associated with one of the nine user intents, the input is not always processed correctly. It should drop through to the webhook, and in some cases it does. In other cases, Test Results shows that the input triggers the third of the nine intents (incorrectly) and immediately returns to ask for another item, without adding the prompt that should be queued when the third intent is entered and without progressing to the webhook call.

Some examples of these failures: "Some video" fails but "video" is handled correctly. "Some acid" fails but "It's acid" works. "Some audio" and "I have some audio" both work. The failures seem to be random by item and occur only when "Some" is the first word of the input.

Could this be mistaking "Some" for "Sum"?

  • Does this behavior occur both via voice and keyboard inputs? That may help check if it's a speech recognition issue. – Nick Felker Jan 27 '21 at 17:23
  • Yes, as I said, it occurs on both voice and keyboard input. If the same phrase is input by keyboard and fails and is immediately input by voice the voice input also fails. If the failing inputs are separated by several correct inputs, they still fail individually. Strange eh? – John Murphy Jan 27 '21 at 22:39
  • It seems to be a general issue with your intents, perhaps they are overlapping far too much. Can you try consolidating the intents into one, to ease the training phrases, then use the webhook to capture the type of item and process it differently in different cases? – Nick Felker Jan 27 '21 at 22:54
  • My aim was to leave as much of the input classification logic to Assistant in the hope that it would learn and improve over time. After all, it is supposed to be Artificially Intelligent. Then the Webhook could be a simple database query for input that Assistant didn't find. Your idea of taking the nine intents and building that logic into nine cases in the webhook could be done in any language in far less time and effort. – John Murphy Jan 29 '21 at 00:19
  • But I wanted a. to learn Actions on Google; b. to stretch it with a real world app; c. have some fun despite Google's determined efforts to thwart my learning with their totally inadequate documentation. It seems as if Google's sole interest is in gamers. OK, rant over... – John Murphy Jan 29 '21 at 00:19
  • Can you try that and see if that works given the state of the platform as it stands today? – Nick Felker Feb 01 '21 at 17:26
  • In scanning the docs, I realized that some of my intent phrases did not meet the requirement that they be a sentence with a verb and a noun. I changed those intent prompts to output "It's a yyy" or "I have a zzz". I also gave up on the webhook because its input now became the input user phrase such as "I have a wine bottle" or "It's a wine bottle" which is too complex for a simple array search. I moved the webhook responses into user intents. So now I have 29 intents. – John Murphy Feb 02 '21 at 02:06
  • This seems to work in test so far. The logic handles "i hazzz xx wine bottle" correctly, and the type is also picked out correctly from "some xxx", "a few yyy". So, so far, so good. More testing needed! – John Murphy Feb 02 '21 at 02:08
  • The tested app works well on a Nest Mini with voice input with one exception: responding to a Scene/On Enter/Send Prompts/speech: with "Some ink" drops one out of the app and into Google News on the first news item for Inc magazine which is then read aloud. – John Murphy Feb 06 '21 at 17:32
  • I have noted before that speaking or typing "Some xxxx" causes unexpected behavior with some xxxx items but not all. It seems as if input phrases beginning with "Some" are not fenced off within the app and escape into the general Assistant world. – John Murphy Feb 06 '21 at 17:35
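Following the consolidation suggested in the comments above, the webhook can strip leading carrier words from the raw phrase before the array lookup, so that "Some xxx", "A few yyy", "I have a ...", and "It's a ..." all reduce to the bare item. The sketch below is only illustrative; the filler list and the normalizeItem name are placeholders, not code from the app.

    // Normalize a raw phrase such as "Some wine bottle" or "I have a wine bottle"
    // down to the bare item before looking it up in the JSON array.
    // The filler list and function name are illustrative placeholders.
    const LEADING_FILLER = [
      "i have a", "i have some", "i have", "it's a", "it's",
      "a few", "some", "a", "an",
    ];

    function normalizeItem(phrase: string): string {
      let text = phrase.trim().toLowerCase();
      for (const filler of LEADING_FILLER) {
        if (text.startsWith(filler + ' ')) {
          text = text.slice(filler.length).trim();
          break;
        }
      }
      return text;
    }

    // All of these reduce to "wine bottle" before the lookup:
    console.log(normalizeItem('Some wine bottle'));
    console.log(normalizeItem("It's a wine bottle"));
    console.log(normalizeItem('I have a wine bottle'));

The lookup in the first sketch would then be called with normalizeItem(phrase) instead of the raw resolved parameter.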

0 Answers