
I have an action defined like this in my actions.json:

{
  "description": "foo description",
  "name": "FooAction",
  "fulfillment": {
    "conversationName": "my-app"
  },
  "intent": {
    "name": "FooIntent",
    "trigger": {
      "queryPatterns": [
        "foo",
      ]
    }
  }
}

When the conversation is triggered via actions.intent.MAIN, my server response looks like this:

{
  "expectUserResponse": true,
  "expectedInputs": [
    {
      "inputPrompt": {
        "richInitialPrompt": {
          "items": [
            {
              "simpleResponse": {
                "textToSpeech": "Welcome to My App! What would you like to do?",
                "displayText": "Welcome to My App! What would you like to do?"
              }
            }
          ],
          "suggestions": []
        }
      },
      "possibleIntents": [
        {
          "intent": "FooIntent"
        }
      ]
    }
  ],
  "conversationToken": "123"
}

The question:

Why do I only get back the actions.intent.TEXT intent when a user says "Talk to My App" and then responds "foo"?

However, when a user says "Ask My App to foo" (without triggering actions.intent.MAIN), I get the FooIntent.

What am I doing wrong? Thanks!

Travis

1 Answer


You're not doing anything wrong - this is exactly how things work when you're using actions.json and the Actions SDK. Custom intents are only used for initial triggering and, to a lesser degree, for speech biasing.

When a custom intent matches the initial triggering phrase, you will get that intent back in the request. But that initial request is the only place you'll ever see a custom intent name.
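
As a rough sketch (abridged to the relevant fields, with illustrative values), a deep-link invocation like "Ask My App to foo" reaches your fulfillment with the custom intent in the request, roughly along these lines:

{
  "conversation": {
    "conversationId": "conversation-id",
    "type": "NEW"
  },
  "inputs": [
    {
      "intent": "FooIntent",
      "rawInputs": [
        {
          "inputType": "VOICE",
          "query": "ask My App to foo"
        }
      ]
    }
  ]
}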

For later turns in the conversation, you will generally get the actions.intent.TEXT intent back (the exceptions being if you use an option list or some other built-in response type). Requesting one of your custom intents in possibleIntents will help shape how the speech-to-text interpreter handles what the user says, but it will still be returned as actions.intent.TEXT.
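
So, once the conversation is underway (for example, after your actions.intent.MAIN welcome above), the user's "foo" comes back to your fulfillment looking roughly like this (again abridged and illustrative - note the intent name, and that the conversationToken you set is echoed back):

{
  "conversation": {
    "conversationId": "conversation-id",
    "type": "ACTIVE",
    "conversationToken": "123"
  },
  "inputs": [
    {
      "intent": "actions.intent.TEXT",
      "rawInputs": [
        {
          "inputType": "VOICE",
          "query": "foo"
        }
      ],
      "arguments": [
        {
          "name": "text",
          "rawText": "foo",
          "textValue": "foo"
        }
      ]
    }
  ]
}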

Typically, this is the desired behavior. The Actions SDK is primarily meant for when you already have a Natural Language Processor and mostly just want the user's text sent to that NLP. Your NLP then determines what action to take based on the text.

If you don't have an NLP, I would suggest using one. Dialogflow is directly supported through the Actions console, but most NLPs these days describe how to use them with Actions on Google if you wish to use a different one.

Prisoner
  • Thanks for answering! I also got an email from a support ticket I filed saying this feature isn't available "yet" -- I hope that means it will be soon! – Travis Jan 19 '18 at 16:34
  • I can't comment on what Google will do (since I don't know), but it doesn't seem likely. Is there a reason you want to handle it through the actions.json and not through another tool that is better equipped to do so? – Prisoner Jan 19 '18 at 18:00