I have a Parquet table for which I get the following error:
FileReadException: Error while reading file dbfs:/mnt/gold/catalog.parquet/part-00120-tid-1146522170304013652-7e167102-3a27-46d7-b674-901496f37d84-353-1-c000.snappy.parquet.
Parquet column cannot be converted. Column: [CreateDate], **Expected: StringType, Found: INT32**
I can read the table using PySpark:

```python
df_catalog = spark.read.option("mergeSchema", "true").parquet(catalog_path)
```
I would like to let users query the table using plain Spark SQL. Is it possible to create a table with this option enabled?
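For example, I was hoping for something along these lines (I am not sure whether `mergeSchema` can actually be passed through `OPTIONS` like this, so treat it as a sketch of what I want rather than something I know works):

```sql
-- Hypothetical DDL: register the Parquet location as a table,
-- passing mergeSchema to the Parquet reader via OPTIONS
CREATE TABLE catalog
USING PARQUET
OPTIONS (mergeSchema 'true')
LOCATION 'dbfs:/mnt/gold/catalog.parquet';
```

With something like this in place, users could simply run `SELECT * FROM catalog` without hitting the `CreateDate` type-conversion error.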