I'm following the Databricks Cloud tutorial. I see sample data located in DBFS:
/databricks-datasets/structured-streaming/events
dbfs:/databricks-datasets/structured-streaming/events/file-0.json
dbfs:/databricks-datasets/structured-streaming/events/file-1.json
It looks like these JSON files have the same schema. What happens with spark.readStream.load(path)
if the files have different schemas? Do they typically need to share the same schema?