I have a PySpark DataFrame in Azure Databricks that I want to write to Azure Synapse, but I am getting the error below:
com.microsoft.sqlserver.jdbc.SQLServerException: The statement failed. Column 'ETL_TableName' has a data type that cannot participate in a columnstore index.
I checked the connection to Synapse and everything works fine; I am able to read data. It is only on write that I hit this issue. Could anyone please help me handle this error?

Code for writing data into Synapse:
dataFrame.repartition(1).write.format("jdbc") \
    .option("url", azureurl) \
    .option("tempDir", tempDir) \
    .option("forwardSparkAzureStorageCredentials", "true") \
    .option("dbTable", dbTable) \
    .mode("append") \
    .save()
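For context on the error itself: with the plain JDBC writer, Spark creates string columns as NVARCHAR(MAX) by default, and Synapse dedicated SQL pools (which default new tables to a clustered columnstore index) cannot include NVARCHAR(MAX) columns in that index. One workaround I am considering (a hypothetical helper I have not yet run against my cluster) is to build an explicit `createTableColumnTypes` spec that caps string columns at a bounded length:

```python
def column_type_spec(schema, max_len=4000):
    """Build a createTableColumnTypes spec for Spark's JDBC writer.

    schema: list of (column_name, spark_type_name) pairs, e.g. from
            [(f.name, f.dataType.simpleString()) for f in df.schema.fields].
    Only string columns are overridden; other types keep Spark's default
    mapping. NVARCHAR(max_len) instead of NVARCHAR(MAX) lets the generated
    Synapse table use a clustered columnstore index.
    """
    parts = []
    for name, dtype in schema:
        if dtype == "string":
            parts.append(f"{name} NVARCHAR({max_len})")
    return ", ".join(parts)

# Example with the column named in the error message:
spec = column_type_spec([("ETL_TableName", "string"), ("RowCount", "bigint")])
# spec == "ETL_TableName NVARCHAR(4000)"
```

The resulting string would then be passed to the write as `.option("createTableColumnTypes", spec)` so the table is created with bounded string columns. Does that sound like the right direction, or is there a better option?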