
I am creating a Google Assistant app that tells quotes. I am currently using API.AI with the ApiAi Node.js webhook. I want my response to look like this:

Innovation is the only way to win.
By Steve Jobs
Want one more?

Note that all three lines are separate lines. I know this is possible if I just use API.AI's UI without a webhook (using multiple Simple Responses), but I cannot figure out how to do it combined with a webhook.

I tried:

assistant.ask("Innovation is the only way to win.");
assistant.ask("By Steve Jobs");
assistant.ask("Want one more?");

But it seems to speak only the first sentence. I also tried replacing it with:

assistant.tell("Innovation is the only way to win.");
assistant.tell("By Steve Jobs");
assistant.ask("Want one more?");

But it exits just after the first statement. How can I do this?

Prisoner
frunkad

2 Answers


Both ask() and tell() take their parameters and send back a response. The only difference is that ask() keeps the conversation going, expecting the user to say something back, while tell() indicates the conversation is over. If you think of this in terms of a web server, both ask() and tell() send back the equivalent of a page and then close the connection, but ask() has included a form on the page, while tell() has not.

Both of them can take a RichResponse object, which may include one or two strings or SimpleResponse objects that will be rendered as chat bubbles. You can't include three, however, at least not according to the documentation. So your best bet is to include one SimpleResponse with the quote and attribution, and a second with the prompt for another.

This also sounds like a case where you want the audio to be different from the displayed text. In this case, you'd want to build the SimpleResponse so it has both speech and displayText fields.

That might look something like this (though I haven't tested the code):

// One SimpleResponse with separate audio and display text
var simpleResponse = {
  speech: 'Steve Jobs said "Innovation is the only way to win."',
  displayText: '"Innovation is the only way to win." -- Steve Jobs'
};
// A RichResponse can hold up to two chat bubbles
var richResponse = assistant.buildRichResponse();
richResponse.addSimpleResponse(simpleResponse);
richResponse.addSimpleResponse('Do you want another?');
// ask() sends the response and keeps the mic open for the user's reply
assistant.ask(richResponse);

This will also let you do things like add cards in the middle of these two blurbs that could, for example, contain a picture of the person in question. To do this, you'd call the richResponse.addBasicCard() method with a BasicCard object. This might even be better visually than including the quote attribution on a second line.
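Under the hood, the SDK serializes this into the API.AI webhook response JSON for Actions on Google. As a rough illustration of what the library builds for you (field names here follow the v1 response format as I understand it; the card contents and image URL are placeholders, so verify against the current docs before relying on this):

```javascript
// Sketch of the raw webhook JSON the SDK produces for a rich response.
// The image URL and card title are illustrative placeholders.
const response = {
  speech: 'Steve Jobs said "Innovation is the only way to win."',
  data: {
    google: {
      expectUserResponse: true, // ask() sets this; tell() would set false
      richResponse: {
        items: [
          {
            // First bubble: spoken text differs from displayed text
            simpleResponse: {
              textToSpeech: 'Steve Jobs said "Innovation is the only way to win."',
              displayText: '"Innovation is the only way to win." -- Steve Jobs'
            }
          },
          {
            // Optional card between the two bubbles, e.g. a photo
            basicCard: {
              title: 'Steve Jobs',
              image: {
                url: 'https://example.com/jobs.jpg', // placeholder
                accessibilityText: 'Photo of Steve Jobs'
              }
            }
          },
          {
            // Second bubble: the reprompt
            simpleResponse: { textToSpeech: 'Do you want another?' }
          }
        ]
      }
    }
  }
};

console.log(response.data.google.richResponse.items.length); // -> 3
```

The two-SimpleResponse limit applies to the chat bubbles; the card sits between them as its own item.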

As for design - keep in mind that you're designing for a wide range of devices. Focusing on exact line formatting when display modes differ (and are sometimes non-existent) is questionable design. Don't focus on what the conversation will look like; instead, focus on how much the conversation feels like one your user would have with another person. Remember that voice is the primary means of this conversation, with visuals intended to supplement it, not rule it.

Prisoner
  • Thank you very much, but I faced a problem while deploying: `assistant.ask()` sends a `Default Response` which only API.AI is processing, but when testing it shows an error. Furthermore, I changed the code to `function quote_start(assistant){ assistant..ask("Hello"); }` and even this is not working in testing (on phone / AoG console), but it gives the default response in API.AI. Earlier today I faced some error while deploying; the code was actually not being changed, no idea why. Why is all this happening all of a sudden? – frunkad Aug 06 '17 at 13:17
  • It sounds like these questions are different from, and unrelated to, the original question. If the answer has helped, an upvote and accepting the answer are appreciated. If you have new questions - go ahead and open up a new StackOverflow question with as much information as you can provide. – Prisoner Aug 06 '17 at 13:22

From what I can gather from the documentation, .tell and .ask both close the mic. Try putting all of your statements into one string. As far as I can tell, .ask doesn't actually affect the tone of the speech; it just tells Assistant to wait for input.

assistant.ask("Innovation is the only way to win. By Steve Jobs. Want one more?");
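If everything must go in one string, SSML can at least restore the spoken pauses between the three parts. A minimal sketch of that idea (assuming the SSML is passed through to the Assistant, which is worth verifying, and the 500 ms pause length is just an example):

```javascript
// Build one SSML string so the three parts are spoken with pauses,
// even though they go through a single ask() call.
const parts = [
  'Innovation is the only way to win.',
  'By Steve Jobs.',
  'Want one more?'
];
// <break time="500ms"/> inserts a half-second pause between parts.
const ssml = '<speak>' + parts.join(' <break time="500ms"/> ') + '</speak>';

console.log(ssml);
// assistant.ask(ssml);  // ask() should accept an SSML string
```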
Katie
  • .ask actually opens the mic, and putting it in one string, umm, it's not what I want. Thanks. P.S. because I am putting quote of Steve Jobs, I have to make sure it has a good design ;) – frunkad Aug 05 '17 at 16:28
  • I know `.ask` opens the mic, but `.tell` and `.ask` both close the mic after they run. Is there any particular reason you don't want to have everything in one string? – Katie Aug 05 '17 at 16:33
  • Design goal is the only reason. It would, in every way, look better. – frunkad Aug 05 '17 at 16:35
  • It may look better, but it isn't possible with the way the assistant API works. You could append multiple strings or use line breaks to clean it up, but any other way of doing this would look much messier. – Katie Aug 05 '17 at 16:36