I've got data in Avro format, partitioned by date and time, and I receive new data every hour. Newer partitions can contain more columns than older ones. When I read the data with Spark 2.4.3, I get a DataFrame with the schema of the first (oldest) partition, and all of the newer columns are lost. What should I do to read all columns? Is there some workaround?
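For context, here is the kind of workaround I've been considering (a rough sketch, not tested against my real data): read the newest partition to learn the widest schema, backfill the columns missing from older partitions as nulls, and union everything. The paths below are placeholders for my actual partition layout.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit}

val spark = SparkSession.builder().getOrCreate()

// Placeholder paths -- my real layout is partitioned by date and hour.
val oldPart = spark.read.format("avro").load("/data/date=2019-08-01")
val newPart = spark.read.format("avro").load("/data/date=2019-08-02")

// Columns present in the newer partition but absent from the older one.
val missing = newPart.columns.diff(oldPart.columns)

// Add the missing columns as nulls, align column order, then union.
val oldAligned = missing
  .foldLeft(oldPart)((df, c) => df.withColumn(c, lit(null)))
  .select(newPart.columns.map(col): _*)

val merged = oldAligned.union(newPart)
```

This feels clumsy with many partitions, though, so I'm hoping there is something built in, e.g. a reader-schema option for the Avro source.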
Thanks.