I recently implemented a solution capable of joining multiple sets of streaming data, and I faced the same issue you describe in your question.
Indeed, a KDA application accepts only one stream as its input data source, so this limitation makes it necessary to standardize the schema of the data flowing into KDA when you are dealing with multiple sets of streams. To work around this, a Python snippet inside a Lambda function can flatten and standardize any event by converting its entire payload to a JSON-encoded string. The Lambda then sends the flattened events to a single Kinesis Data Stream. The image below illustrates this process:

Note that after this stage both JSON events have the same schema and no nested fields, yet all the information is preserved. In addition, the ssn field is placed in the header so it can be used as a join key later on.
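For reference, here is a minimal sketch of what that flattening Lambda could look like, assuming the source streams trigger the Lambda directly and the output stream is called flattened-events-stream (the stream name and the exact envelope fields are illustrative, not necessarily the ones from my article):

    import base64
    import json

    import boto3

    kinesis = boto3.client("kinesis")

    # Hypothetical name of the single stream that feeds the KDA application.
    OUTPUT_STREAM = "flattened-events-stream"


    def handler(event, context):
        """Flatten incoming events and forward them to one Kinesis Data Stream."""
        for record in event.get("Records", []):
            # Kinesis-triggered Lambdas receive the payload base64-encoded.
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

            # Standardized envelope: the join key (ssn) is promoted to the header,
            # and the original payload, whatever its schema, is kept as a
            # JSON-encoded string so every event shares the same flat shape.
            flattened = {
                "ssn": payload.get("ssn"),
                "source": record["eventSourceARN"],
                "payload": json.dumps(payload),
            }

            kinesis.put_record(
                StreamName=OUTPUT_STREAM,
                Data=json.dumps(flattened).encode("utf-8"),
                PartitionKey=str(flattened["ssn"]),
            )

Because every event now arrives with the same three top-level fields, a single KDA in-application schema can cover all of them, and the original nested payload can still be parsed downstream from the JSON string.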
I wrote a detailed explanation of this solution here:
https://medium.com/@guilhermeepassos/joining-and-enriching-multiple-sets-of-streaming-data-with-kinesis-data-analytics-24b4088b5846
I hope this helps!