
Say, for example, that I am trying to make an app that recommends a random movie. You should be able to talk to the app by pressing a button. The app then sends your speech to a backend (a Node.js app), which runs the logic and sends a random movie title back to you.

This is my setup:

  • A simple story in the Wit console
    • User says "I want a movie"
    • The bot then calls the searchRandomMovie function, which produces a movie context variable
    • The bot then says How about this: "{movie}", which uses the movie variable
  • A Node.js app running the wit.ai library, much like the Wit Quick Start, using the token for the story above (see the sketch after this list).
    • Note: I can run the app locally in interactive mode, and it will run the custom movie function after I type "I want a movie" and return the phrase with the movie title.
  • An iOS app running the Wit.ai SDK
    • I put the client token in the SDK.
    • I was able to get the app to record my speech, send it to Wit.ai, and have it return something with a level of confidence, but I could not connect it to my custom function in the Node.js app.
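
For reference, the Node.js app is essentially the node-wit Quick Start with one custom action. Roughly like this (the token handling and the movie list are simplified placeholders here):

    'use strict';

    // Minimal sketch in the style of the node-wit Quick Start (Bot Engine beta).
    const {Wit, interactive} = require('node-wit');

    const WIT_TOKEN = process.env.WIT_TOKEN; // server access token for the story above

    const movies = ['The Matrix', 'Alien', 'Spirited Away']; // placeholder data

    const actions = {
      // Called whenever the bot has something to say, e.g. How about this: "{movie}"
      send(request, response) {
        console.log('Bot says:', response.text);
        return Promise.resolve();
      },
      // The custom action referenced in the story; it fills the movie context variable
      searchRandomMovie({context, entities}) {
        context.movie = movies[Math.floor(Math.random() * movies.length)];
        return Promise.resolve(context);
      },
    };

    const client = new Wit({accessToken: WIT_TOKEN, actions});
    interactive(client); // type "I want a movie" to test locally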

I am trying to get the above setup to do the following:

  • Speak the "I want a movie" sentence to the iOS app, which will send the voice to Wit.ai
  • Have Wit.ai read the sentence, determine that the searchRandomMovie function needs to be called, AND delegate to the node app to run it
  • Have the node app run the searchRandomMovie function and return the results all the way back to the iOS app
  • Have the iOS app display the How about this: "{movie}" string, and maybe even speak the whole sentence back

Is this possible in the way I described above? I am pretty sure there is something I am missing or not getting. Unfortunately, it seems Wit.ai just updated their docs, and they do not cover a scenario like this very thoroughly.

I am new to Wit.ai, and any help with details would be appreciated.

Fabio Gomez

1 Answer


Thanks for sharing. You're right, our documentation needs to be improved a lot. The /converse endpoint (Bot Engine beta) doesn't support speech as of now. So you will have to call /speech first to retrieve the text, and then make a call to /converse with that text. Hope this helps.
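
In other words, something along these lines on the backend first (a rough sketch against the HTTP API; node-fetch, the API version and the wav file are just examples):

    'use strict';

    // Step 1: post the recorded audio to /speech to get the transcribed text back.
    const fs = require('fs');
    const fetch = require('node-fetch');

    const WIT_TOKEN = process.env.WIT_TOKEN; // server access token

    fetch('https://api.wit.ai/speech?v=20160526', {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${WIT_TOKEN}`,
        'Content-Type': 'audio/wav', // must match the audio recorded on the phone
      },
      body: fs.createReadStream('i-want-a-movie.wav'),
    })
      .then(res => res.json())
      .then(json => {
        // json._text is the transcription, e.g. "I want a movie".
        // Step 2: feed this text into /converse (or node-wit) as a normal text message.
        console.log('Transcribed text:', json._text);
      });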

l5t
  • Hi, thanks for providing an answer. But can you provide a little more context? If I understand correctly, from the iPhone I would have to send text over to /converse, which would then return context/intents/etc. Then, based on the context, I could go ahead and call my function, get the data I need (the movie title), and call /converse again with the data I fetched (in the proper context), so that Wit.ai can then return the appropriate Bot Says message. Does that seem like the appropriate scenario? If so, do you have any pointers to API docs to make that happen? Thanks – Fabio Gomez Jun 18 '16 at 16:55
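
Concretely, the loop described in that comment maps onto the /converse endpoint roughly like this (a sketch under the same assumptions as above; the session id and the hard-coded movie are placeholders, and node-wit's runActions is intended to drive this loop for you):

    'use strict';

    // Sketch of the /converse loop: send the text, get back an action, run it,
    // merge the context, call /converse again (without q), and display the
    // bot's message; repeat until the engine returns type 'stop'.
    const fetch = require('node-fetch');

    const WIT_TOKEN = process.env.WIT_TOKEN;

    function converse(sessionId, text, context) {
      let url = `https://api.wit.ai/converse?v=20160526&session_id=${sessionId}`;
      if (text) url += `&q=${encodeURIComponent(text)}`;
      return fetch(url, {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${WIT_TOKEN}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(context || {}),
      }).then(res => res.json());
    }

    function runStory(sessionId, text) {
      const context = {};
      const step = response => {
        if (response.type === 'action' && response.action === 'searchRandomMovie') {
          context.movie = 'The Matrix'; // run your own backend logic here
          return converse(sessionId, null, context).then(step);
        }
        if (response.type === 'msg') {
          console.log('Bot says:', response.msg); // e.g. How about this: "The Matrix"
          return converse(sessionId, null, context).then(step);
        }
        return context; // type 'stop': this conversation turn is finished
      };
      return converse(sessionId, text, context).then(step);
    }

    runStory('my-session-id', 'I want a movie');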