EDIT: This was a bug in the simulator, and it has apparently been fixed.
I'm trying to build a Google Assistant action (skill) in Python using the REST API. According to the docs, you can post a rich response containing one or two simple responses. When I do this, however, the simulator ignores the displayText of the second simple response and displays only the first one. The speech, on the other hand, is taken from both simple responses. Here is the structure I post:
{
    "conversationToken": "",
    "expectUserResponse": True,
    "expectedInputs": [
        {
            "inputPrompt": {
                "richInitialPrompt": {
                    "items": [
                        {
                            "simpleResponse": {
                                "textToSpeech": "First speech.",
                                "displayText": "First text."
                            }
                        },
                        {
                            "simpleResponse": {
                                "textToSpeech": "Second speech.",
                                "displayText": "Second text."
                            }
                        }
                    ],
                    "suggestions": []
                }
            },
            "possibleIntents": [
                {
                    "intent": "actions.intent.TEXT"
                }
            ]
        }
    ]
}
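
For context, this is roughly how the structure gets returned from my webhook (a minimal sketch assuming Flask; the route path, handler name, and the simple_response helper are just placeholders of my own, not part of the Actions SDK):

from flask import Flask, jsonify

app = Flask(__name__)

def simple_response(speech, text):
    # Wrap one textToSpeech/displayText pair in a simpleResponse item.
    return {"simpleResponse": {"textToSpeech": speech, "displayText": text}}

@app.route("/webhook", methods=["POST"])
def webhook():
    body = {
        "conversationToken": "",
        "expectUserResponse": True,
        "expectedInputs": [{
            "inputPrompt": {
                "richInitialPrompt": {
                    "items": [
                        simple_response("First speech.", "First text."),
                        simple_response("Second speech.", "Second text."),
                    ],
                    "suggestions": [],
                }
            },
            "possibleIntents": [{"intent": "actions.intent.TEXT"}],
        }],
    }
    # jsonify serialises Python's True as JSON true in the reply body.
    return jsonify(body)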