I am trying to load a tab-delimited text file into a table in a Databricks notebook, but all of the column values are getting pushed into a single column.
Here is the SQL I am using:
CREATE TABLE IF NOT EXISTS database.table
USING text
OPTIONS (path 's3bucketpath.txt', header "true")
I also tried USING csv instead of text.
The same thing happens when I read the file into a Spark DataFrame.
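For reference, this is roughly what I am doing on the DataFrame side (PySpark; the path below is just a placeholder for my real S3 path):

# read the file, treating the first row as the header (placeholder path)
df = spark.read.option("header", "true").csv("s3://my-bucket/myfile.txt")
df.show()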
I am expecting to see the columns separated out with their headers. Has anyone come across this issue and figured out a solution?