
I am trying to read an Excel file via com.crealytics.spark.excel, but I am getting the following error when I run my code:

scala.MatchError: Map(treatemptyvaluesasnulls -> true, location -> a.xlsx, useheader -> true, inferschema -> False, addcolorcolumns -> False) (of class org.apache.spark.sql.catalyst.util.CaseInsensitiveMap)
at com.crealytics.spark.excel.WorkbookReader$.apply(WorkbookReader.scala:30)

Here is my code:

spark.read
      .format("com.crealytics.spark.excel")
      .option("location", fileLoc)
      .option("useHeader", "true")
      .option("treatEmptyValuesAsNulls", "true")
      .option("inferSchema", "False")
      .option("addColorColumns", "False")
      .load()
  • What version are you using? This seems to have been fixed in newer versions, see: https://github.com/crealytics/spark-excel/issues/93. – Shaido Sep 17 '19 at 06:14
  • I am using version 0.12.0: libraryDependencies += "com.crealytics" %% "spark-excel" % "0.12.0" – Ayan Biswas Sep 17 '19 at 06:55

1 Answer


This might work if you install the com.crealytics:spark-excel_2.11:0.12.5 library (it works as expected in Databricks):

val df_excel = spark.read
  .format("com.crealytics.spark.excel")
  .option("useHeader", "true")
  .option("treatEmptyValuesAsNulls", "false")
  .option("inferSchema", "false")
  .option("addColorColumns", "false")
  .load(file_path)

display(df_excel)
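
If you manage dependencies with sbt instead of attaching the library through the Databricks UI, the version bump discussed in the comments would look like this (Scala 2.11 assumed, matching the spark-excel_2.11 artifact; adjust to your Scala version):

libraryDependencies += "com.crealytics" %% "spark-excel" % "0.12.5"

Note that display is Databricks-specific; outside Databricks you can use df_excel.show() to print the DataFrame instead.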