
I'm trying to save a large dataframe into a Databricks table to make the data persistent and available to other notebooks without having to query the data sources again:

df.write.saveAsTable("cl_data")

and also using the overwrite mode:

df.write.mode("overwrite").saveAsTable("cl_data")

The notebook output doesn't return any error, but in the table preview this comes out in both cases:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5359.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5359.0: java.lang.IllegalStateException: Couldn't find *columns name here*#217413

The column exists in the dataframe. I've tried dropping it, but it doesn't work anyway. Is there any help you can give me? Thanks!


1 Answer


I've resolved it! The problem was the presence of columns that were entirely null. Once I removed them, everything worked fine!
