
Whenever the user invokes my agent, it shows a list of options to select from as well as a simple response, but the agent first speaks the simple response and then shows the list.

Actual

user: Ok google talk to my test app.
bot: Welcome to my test app. Here's the list of options to select. (WELCOME MESSAGE)
     Please select your preference. (RESPONSE)
     <list appears> (LIST)

Expected

user: Ok google talk to my test app.
bot: Welcome to my test app. Here's the list of options to select. (WELCOME MESSAGE)
     <list appears> (LIST)
     Please select your preference. (RESPONSE)

Is it possible that the assistant first speaks the welcome message, shows the list, and then speaks the response after a certain delay?


2 Answers


No, showing the bubble after the list is not possible.

When you add a list to your response, the spoken text will always appear before the list. This is mainly because the spoken/chat part of the conversation is separate from the visual part. Even if you add the response after the list in your code, the placement of rich responses is controlled by Google.

Example:


  // Assumes: const { List, Image } = require('actions-on-google');
  conv.ask('This is a list example.');
  // Create a list
  conv.ask(new List({
    title: 'List Title',
    items: {
      'SELECTION_KEY_ONE': {
        synonyms: [
          'synonym 1',
          'synonym 2',
          'synonym 3',
        ],
        title: 'Title of First List Item',
        description: 'This is a description of a list item.',
        image: new Image({
          url: 'https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png',
          alt: 'Image alternate text',
        }),
      },
      'SELECTION_KEY_TWO': {
        synonyms: [
          'synonym 4',
          'synonym 5',
          'synonym 6',
        ],
        title: 'Title of Second List Item',
        description: 'This is a description of a list item.',
        image: new Image({
          url: 'https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png',
          alt: 'Image alternate text',
        }),
      }
    }
  }));

  conv.ask("Please make your selection");

By the look of your example, it seems you are trying to show the user a couple of options on the screen to steer the conversation. Are you sure Suggestion Chips wouldn't be a better fit for this? These chips are intended to give the user options and are far easier to implement than a list.
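For instance, a minimal sketch of chips inside the same handler (the labels here are placeholders; Suggestions comes from the actions-on-google package):

  // Assumes: const { Suggestions } = require('actions-on-google');
  conv.ask('Please select your preference.');
  // Tapping a chip sends its label back as the user's next phrase.
  conv.ask(new Suggestions(['First option', 'Second option']));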

Delaying the speech, not the bubble

If you don't want to go that way, what you could do is add a delay to the spoken text via SSML. However, this only changes the experience for people using your action by voice; it does not change where the speech bubble is displayed when using the Google Assistant on a phone. And for anyone using your action on a device without a screen, it could cause confusion: the speech is being delayed for a list that will never show, since there is no screen.
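For example, the closing prompt from the snippet above could be wrapped in SSML with a pause (the 3-second duration is an arbitrary choice):

  // Pauses the audio for 3 seconds before the prompt; on screen devices
  // the bubble still renders before the list.
  conv.ask('<speak><break time="3s"/>Please make your selection.</speak>');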

Design a voice-first experience

In general, it is good practice to design your conversation around the voice-only part of the experience. By making your conversation dependent on a list, you limit the number of platforms you can deploy your action to. A voice-first approach to this problem could be to create an intent for each option your action supports, open your welcome intent with a generic message such as "How can I assist you?", and have a fallback intent that speaks out the different options the user can choose from. This can be combined with Suggestion Chips to still give the guiding visuals that you desire; a sketch follows below.
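A minimal sketch of that structure, assuming the Dialogflow default intent names and hypothetical option labels:

const { dialogflow, Suggestions } = require('actions-on-google');
const app = dialogflow();

// Open with a generic prompt instead of a list.
app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome to my test app. How can I assist you?');
  conv.ask(new Suggestions(['First option', 'Second option']));
});

// The fallback speaks out the supported options for voice-only users.
app.intent('Default Fallback Intent', (conv) => {
  conv.ask('You can say "first option" or "second option". Which would you like?');
  conv.ask(new Suggestions(['First option', 'Second option']));
});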

It is a bit more work to implement, but it gives your bot a great deal more flexibility in its conversation and in the number of platforms it can support.

Jordi

Add a webhook to your action and use the Browsing Carousel JSON for the intent. Add a simpleResponse node after the list items to speak a response after the list is displayed. Sample JSON for a Browsing Carousel:

{
  "payload": {
    "google": {
      "expectUserResponse": true,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Here's an example of a browsing carousel."
            }
          },
          {
            "carouselBrowse": {
              "items": [
                {
                  "title": "Title of item 1",
                  "openUrlAction": {
                    "url": "https://example.com"
                  },
                  "description": "Description of item 1",
                  "footer": "Item 1 footer",
                  "image": {
                    "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                    "accessibilityText": "Image alternate text"
                  }
                },
                {
                  "title": "Title of item 2",
                  "openUrlAction": {
                    "url": "https://example.com"
                  },
                  "description": "Description of item 2",
                  "footer": "Item 2 footer",
                  "image": {
                    "url": "https://storage.googleapis.com/actionsresources/logo_assistant_2x_64dp.png",
                    "accessibilityText": "Image alternate text"
                  }
                }
              ]
            }
          },
          {
            "simpleResponse": {
              "textToSpeech": "Please make your selection."
            }
          }
        ]
      }
    }
  }
}

Refer to https://developers.google.com/assistant/conversational/rich-responses#df-json-basic-card
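The webhook itself is just an HTTPS endpoint that returns this JSON as its response body. A minimal sketch using Express (the route path and port are assumptions; the items array is elided to the sample above):

const express = require('express');
const app = express();
app.use(express.json());

// Dialogflow calls this endpoint for the intent; the raw payload above
// goes back as the HTTP response body.
app.post('/webhook', (req, res) => {
  res.json({
    payload: {
      google: {
        expectUserResponse: true,
        richResponse: {
          items: [ /* items as in the sample JSON above */ ]
        }
      }
    }
  });
});

app.listen(3000);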

Om Mishra