I have a SQL view stored as a table in Databricks, and all of its column names are capitalised. When I load the table in a Databricks job using spark.table(<<table_name>>), all of the column names are converted to lowercase, which causes my code to crash. However, when I load the table the same way in a plain notebook, the column names remain capitalised and are NOT turned to lowercase.
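
For reference, this is roughly how I'm loading the table in both the job and the notebook (the table name below is just a placeholder):

```python
from pyspark.sql import SparkSession

# In Databricks the session already exists as `spark`; getOrCreate() reuses it
spark = SparkSession.builder.getOrCreate()

# "my_schema.my_view" stands in for the actual view/table name
df = spark.table("my_schema.my_view")

# In the notebook this prints the capitalised names, e.g. ['ID', 'NAME', ...];
# in the job the same call prints them lowercased, e.g. ['id', 'name', ...]
print(df.columns)
```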
Has anyone encountered this issue before? It is strange that it only happens in the job.