I'd like to deploy an app to Google Assistant, but with a different AI backend instead of api.ai.
Does anyone know if it's even possible? And how?
Or am I stuck with api.ai if I want to work with Google Assistant?
Thanks
You can actually use anything in the backend, from a simple string-matching approach to another NLU (wit.ai, luis.ai, Amazon Lex, Rasa, etc.).
However, if you're not using any of the ones supported by Google, you'll have to write the software that bridges between the Google Actions SDK and your other conversation platform.
Like Prisoner said, you'll pretty much have to make your own action package detailed here: https://developers.google.com/actions/sdk/
If you're doing a simplistic string-matching approach, the Actions SDK can do really basic intent matching and entity recognition by itself without additional processing, but for more complicated things, you'll need a proper NLU.
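To make the string-matching idea concrete, here's a minimal sketch of naive keyword-based intent matching (this is illustrative only, not the Actions SDK's own matching; all names and rules are hypothetical):

```javascript
// Naive string-matching "intent detection": first rule whose keyword
// appears anywhere in the utterance wins. Substring matching like this
// is fragile (e.g. "this" contains "hi"), which is exactly why a proper
// NLU becomes necessary for anything non-trivial.
function matchIntent(utterance) {
  const text = utterance.toLowerCase();
  const rules = [
    { keywords: ['weather', 'forecast'], intent: 'get_weather' },
    { keywords: ['hello', 'hi'], intent: 'greeting' },
  ];
  for (const rule of rules) {
    if (rule.keywords.some((k) => text.includes(k))) {
      return rule.intent;
    }
  }
  return 'fallback';
}

console.log(matchIntent("What's the weather like today?")); // logs 'get_weather'
```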
If you're forwarding the input text to another service, you can simply use the TEXT standard intent (actions.intent.TEXT), grab the raw text and forward it along to your fulfillment server. From there, you can process the text with your NLU, and build a response to send back to the Actions SDK. You can pretty much ignore everything else about the Actions SDK.
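A rough sketch of that fulfillment handler is below. The request shape approximates the Actions SDK conversation webhook JSON (`inputs[0].rawInputs[0].query`), and `myNlu` is a stand-in for whatever NLU service you actually call; treat both as assumptions to verify against the Actions SDK docs:

```javascript
// Stand-in for your real NLU call (wit.ai, Rasa, etc.) -- stubbed here.
function myNlu(text) {
  return { reply: `You said: ${text}` };
}

// Pulls the raw transcribed text out of the actions.intent.TEXT input,
// runs it through the NLU, and builds a minimal "keep the mic open"
// response for the Assistant.
function handleRequest(body) {
  const rawText = body.inputs[0].rawInputs[0].query;
  const nluResult = myNlu(rawText);
  return {
    expectUserResponse: true,
    expectedInputs: [{
      inputPrompt: {
        richInitialPrompt: {
          items: [{ simpleResponse: { textToSpeech: nluResult.reply } }],
        },
      },
      possibleIntents: [{ intent: 'actions.intent.TEXT' }],
    }],
  };
}
```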
One last thing: if your backend is already using Node.js, you can take a slight shortcut and build a wrapper around your backend with the Node.js Client Library; otherwise, you'll have to implement the interface from scratch.
Good luck!
In place of API.AI, you can use others like:
I am pretty sure there are others, but these are the ones I can think of right now.
You can use any natural language processing system you want. API.AI provides rather complete support with Actions on Google, but it isn't the only one.
If you want to roll your own (or use one that doesn't directly support Actions yet), you can configure a JSON action package that describes the intents and responses for your action.
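For reference, a minimal action package might look roughly like this (the conversation name and fulfillment URL are placeholders, and you should check the Actions SDK docs for the exact schema):

```json
{
  "actions": [
    {
      "description": "Default welcome intent",
      "name": "MAIN",
      "fulfillment": { "conversationName": "myConversation" },
      "intent": { "name": "actions.intent.MAIN" }
    }
  ],
  "conversations": {
    "myConversation": {
      "name": "myConversation",
      "url": "https://example.com/fulfillment"
    }
  }
}
```

With a package like this, every user input after the welcome gets routed to your fulfillment URL, where your own NLU takes over.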
I use an open-source framework for the NLU piece. If you'd like to leverage the Actions SDK with an NLU solution other than api.ai, be mindful that you have no control over the speech-to-text Google provides you with. I'm not sure if API.AI allows custom grammar files, or leverages the developer's intents/entities to assist with transcription, but the Actions SDK does not.
I think this is a big difference between Alexa and Google Assistant, because with Alexa you can provide sample utterances with expected entities, which I'm guessing ultimately improves the accuracy of the speech-to-text.