Is there a way to display the dialog inputs (human inputs) and outputs (robot answers) on Pepper's tablet? I have seen an example of it at https://softbankroboticstraining.github.io/pepper-chatbot-api/#pepper-chat, but it doesn't work directly in QiChat syntax.
I have also seen some examples in the ALTabletService documentation for displaying images, but not for interactive dialog. The motivation is to have a multi-modal interaction instead of a purely audio-based one. Note: a Python implementation would be preferable to Choregraphe.
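For context, the kind of approach I'm considering looks roughly like this (an untested sketch, not a working solution): subscribe to the ALMemory keys `Dialog/LastInput` (last recognized human phrase) and `ALTextToSpeech/CurrentSentence` (what the robot is saying), and push each line into a webview on the tablet with `ALTabletService.executeJS`. The `chat.html` page and its `log` element are assumptions of mine, as is the idea that these two keys cover everything I want to show:

```python
# Sketch, untested on a real robot: mirror the dialog on Pepper's tablet by
# subscribing to ALMemory keys and appending text to a webview via executeJS.
import json
import time


def build_js(speaker, text):
    """Build a JS snippet that appends one chat line to a <div id="log">.

    json.dumps is used purely for safe quote escaping of the strings.
    """
    return (
        "var d = document.createElement('div');"
        "d.textContent = {} + ': ' + {};"
        "document.getElementById('log').appendChild(d);"
    ).format(json.dumps(speaker), json.dumps(text))


def main(ip="pepper.local", port=9559):
    import qi  # NAOqi Python SDK; only needed when actually talking to the robot

    session = qi.Session()
    session.connect("tcp://{}:{}".format(ip, port))
    tablet = session.service("ALTabletService")
    memory = session.service("ALMemory")

    # Assumed: a minimal chat.html with a <div id="log"> hosted on the robot.
    tablet.loadUrl("http://198.18.0.1/apps/my_app/chat.html")  # hypothetical app path
    tablet.showWebview()

    def on_human(value):
        tablet.executeJS(build_js("You", value))

    def on_robot(value):
        tablet.executeJS(build_js("Pepper", value))

    # Keep references to the subscribers so the connections stay alive.
    human_sub = memory.subscriber("Dialog/LastInput")
    human_sub.signal.connect(on_human)
    robot_sub = memory.subscriber("ALTextToSpeech/CurrentSentence")
    robot_sub.signal.connect(on_robot)

    while True:
        time.sleep(1)


# main()  # run on a machine with the qi SDK installed and the robot reachable
```

`build_js` is the only part I can check off-robot; the event names and the webview plumbing are exactly what I'd like confirmed or corrected.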