
I am new to Delta Lake. I was trying a simple example:

  1. Create a dataframe from a CSV
  2. Save it as a delta table
  3. Read it again.

It works fine. I can see the files are created in the default spark-warehouse folder.

But the next time I just want to read the saved table, so I comment out the code for the first two steps and re-run the program. I get:

AnalysisException: Table or view not found


    val transHistory = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(InputPath + "trainHistory.csv")

    transHistory.write.format("delta").mode(SaveMode.Overwrite).saveAsTable("transactionshistory")

    val transHistoryTable = spark.read.format("delta").table("transactionshistory")
    transHistoryTable.show(10)

I am using Delta Lake 0.8.0, Spark 3.0, and Scala 2.12.13.
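One likely explanation, offered as an assumption: by default a `SparkSession` uses an in-memory catalog, so the table name registered by `saveAsTable` is lost when the application exits, even though the data files remain in `spark-warehouse`. A minimal sketch of building the session with a persistent Hive metastore so the registration survives across runs (the app name is hypothetical; the two Delta configs are the ones recommended for Delta Lake 0.8.0 on Spark 3.0):

```scala
import org.apache.spark.sql.SparkSession

// Sketch, assuming the in-memory catalog is the cause of the
// AnalysisException. enableHiveSupport() makes saveAsTable record the
// table in a local metastore_db directory, which persists between runs.
val spark = SparkSession.builder()
  .appName("delta-read-back") // hypothetical app name
  .master("local[*]")
  .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
  .config("spark.sql.catalog.spark_catalog",
          "org.apache.spark.sql.delta.catalog.DeltaCatalog")
  .enableHiveSupport() // persist table metadata across sessions
  .getOrCreate()

// With a persistent metastore, this should now work in a fresh run,
// with steps 1 and 2 commented out:
val transHistoryTable = spark.read.table("transactionshistory")
transHistoryTable.show(10)
```

Under this assumption, the original code itself is fine; only the session configuration changes.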

Alex Ott

Rajan
  • The format you will read is now delta, not CSV. Try this: val transHistory = spark.read.format("delta").option("header", "true").option("inferschema", true).load(InputPath + "trainHistory.csv") – Partha Deb May 10 '21 at 08:24
  • What I am trying to do is read a CSV with the regular Spark API, save it as a delta table (transactionshistory), and read it back. Shouldn't that work? BTW I tried your suggestion; it gives AnalysisException: `/RetailData/trainHistory.csv` is not a Delta table. – Rajan May 10 '21 at 15:05
  • Yes, you need to replace the CSV filename with the delta table name. transactionshistory is your delta table name. You have to provide the full path while reading the delta table. – Partha Deb May 12 '21 at 12:10
  • Why? When it stores the table in the warehouse it chooses its own names anyway. I can read the files by file path anyway; I want to read it as a table. – Rajan May 17 '21 at 05:10
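As the comments suggest, reading by path is a workaround that needs no metastore. A sketch, assuming the default warehouse location: `saveAsTable` writes a managed table under `spark-warehouse/<table_name>`, so the directory name is predictable even though Spark chooses the Parquet file names inside it:

```scala
// Sketch: read the saved delta table by its warehouse path instead of
// by its registered name. Assumes the default spark-warehouse location
// relative to the working directory.
val byPath = spark.read.format("delta")
  .load("spark-warehouse/transactionshistory")
byPath.show(10)
```

This works in a fresh session because Delta resolves the table from its `_delta_log` directory on disk, not from the catalog.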

0 Answers