We want the chatbot to behave like a human when taking questions and to respond with proper answers. A normal human conversation is made up of many utterances, which means there can be nonsense, emotion, and pauses within the dialog. When people talk to a chatbot, they may submit words or half-sentences several times before expecting Watson to respond, since they naturally want to keep typing until Watson can understand a complete, meaningful question; they may also pause multiple times before finishing a question. When training a Watson Dialog/Conversation, what are the best design practices to address such cases?
1 Answer
There is no single easy answer for this. There is a whole field of research on how best to approach this called Conversation Analysis (CA).
"A normal human conversation is made up of many utterances, which means there can be nonsense, emotion, and pauses within the dialog."
Some approaches to this:
Review real-world conversations with actual customers, assuming the chatbot is augmenting a live conversation. Normally the employees will have already optimised the conversation flow, and you can see where, and at what points, people go off script.
For nonsense / off-topic input, acknowledge the subject of what they are saying rather than the question itself (if possible), but push them back to the current part of the conversation. If they continue, stop trying to accommodate them after a point; a rough application-side sketch of that follows.
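As an illustration only (this is application logic around the Conversation service, not Watson API code), you could count consecutive off-topic turns and redirect more firmly after a threshold; the intent name, confidence cut-off, and reply strings below are hypothetical:

```python
# Illustrative application-side logic only (not Watson API code): count
# consecutive off-topic turns and stop accommodating after a threshold.
# The intent name, confidence cut-off, and reply strings are hypothetical.

MAX_OFF_TOPIC = 2  # how many off-topic turns to tolerate before redirecting firmly

def handle_turn(session, watson_response):
    """session: an app-side dict persisted between turns.
    watson_response: the JSON returned by the Conversation message call."""
    intents = watson_response.get("intents", [])
    top = intents[0] if intents else None

    # Treat missing, low-confidence, or explicitly off-topic intents as off topic.
    off_topic = (top is None
                 or top["confidence"] < 0.3
                 or top["intent"] == "off_topic")

    if off_topic:
        session["off_topic_count"] = session.get("off_topic_count", 0) + 1
        if session["off_topic_count"] > MAX_OFF_TOPIC:
            return "Let's get back to your order. What is your order number?"
        # Acknowledge the subject, then steer back to the flow.
        return "I see. To continue, could you tell me your order number?"

    session["off_topic_count"] = 0
    return watson_response["output"]["text"][0]
```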
Detecting emotion can be critical in how you shape later messages to the end user. You can use the Tone Analyzer to capture this; a minimal example is sketched below.
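A minimal sketch of calling the Tone Analyzer, assuming the watson_developer_cloud Python SDK; the credentials and version date are placeholders, and the exact method signature and response layout may differ between SDK and API versions:

```python
# Sketch of capturing emotion with the Tone Analyzer, assuming the
# watson_developer_cloud Python SDK; credentials and the version date are
# placeholders, and the method signature / response layout may differ
# between SDK and API versions.
from watson_developer_cloud import ToneAnalyzerV3

tone_analyzer = ToneAnalyzerV3(
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    version="2016-05-19")

result = tone_analyzer.tone("I still have not received my refund!")

# Pull out the emotion tones (anger, joy, sadness, ...) so the application
# can adjust later replies to the end user.
for category in result.get("document_tone", {}).get("tone_categories", []):
    if category["category_id"] == "emotion_tone":
        for tone in category["tones"]:
            print(tone["tone_name"], tone["score"])
```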
Pauses can happen. If the pause is within a conversational flow, you can have your application check whether the user is still there, or offer suggestions they can ask to progress; otherwise just assume the person has stepped away and wait for them. A sketch of such an inactivity check follows.
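One way to handle this in the application layer (again just a sketch; the timeout value and the send_to_user helper are hypothetical) is a timer that is reset on every user message and prompts once if it fires:

```python
# Application-side sketch only: prompt the user once if they go quiet in the
# middle of a flow. IDLE_SECONDS and send_to_user are hypothetical.
import threading

IDLE_SECONDS = 60  # how long to wait before checking in

def send_to_user(text):
    print(text)  # stand-in for however your channel delivers messages

class IdleWatcher:
    def __init__(self):
        self.timer = None

    def user_spoke(self):
        # Reset the countdown every time the user sends something.
        if self.timer:
            self.timer.cancel()
        self.timer = threading.Timer(IDLE_SECONDS, self.prompt)
        self.timer.start()

    def prompt(self):
        send_to_user("Are you still there? You can ask about billing or orders, for example.")
```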
Beyond that, don't assume anything about what people will do in a conversation. Once you have your Conversation application built, put it in front of end users and see how they react.
Make sure it is actual end users of the system. Also try not to bias the test. Tell them what the Conversation helps with and only that. Do not tell them they can enter anything, or they will.
Once you have end users testing, review their conversations. You will find they react slightly differently from the real-world conversations, so shape your flow to them. Only work with common patterns, not one-off issues.

Thank you so much for providing the suggestions and insights. It looks like making a good chatbot from the available Watson services is more of an art than a science at this stage. Besides training and using the selected Watson services, the chatbot application itself has to make other kinds of decisions in its own integration. I look forward to hearing more solution experiences if anyone has them. – nyker Jul 21 '16 at 13:35