
I'm trying to run a PySpark job with custom inputs for testing purposes. The job has three inputs, each read from a table in a different Hive metastore database.

The data is read in Spark with:

hiveContext.table('myDb.myTable')
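For context, a minimal sketch of the read path, assuming hypothetical database and table names (myDb1.myTable1 and so on are placeholders, not the real names):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName='myJob')  # hypothetical app name
hiveContext = HiveContext(sc)

# Each input is a table in a different metastore database
df1 = hiveContext.table('myDb1.myTable1')
df2 = hiveContext.table('myDb2.myTable2')
df3 = hiveContext.table('myDb3.myTable3')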

The test inputs are three files. To avoid changing any of the original code, I read all three inputs into DataFrames and attempt to register each one as a temp table with myDF.registerTempTable('myDb.myTable'), as sketched below.
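Roughly, the test setup looks like this (the file paths and the JSON format are placeholder assumptions, reusing the hypothetical names from the sketch above):

testDF1 = hiveContext.read.json('/path/to/testInput1.json')
testDF2 = hiveContext.read.json('/path/to/testInput2.json')
testDF3 = hiveContext.read.json('/path/to/testInput3.json')

# Register each DataFrame under the dotted name the job expects
testDF1.registerTempTable('myDb1.myTable1')
testDF2.registerTempTable('myDb2.myTable2')
testDF3.registerTempTable('myDb3.myTable3')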

The problem is that Spark fails with org.apache.spark.sql.catalyst.analysis.NoSuchTableException.

I've also tried:

# Create the database in the metastore, switch to it,
# then register the temp table under the bare table name
hiveContext.sql('create database if not exists myDb')
hiveContext.sql('use myDb')
myDF.registerTempTable('myTable')

But that fails as well.
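For what it's worth, the tables visible to the context can be listed with SQLContext.tableNames, which might help show what name the temp table was actually registered under:

# List temp tables plus tables in the current database
print(hiveContext.tableNames())
# List tables in a specific database
print(hiveContext.tableNames('myDb'))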

Any idea why the table cannot be found?

Using Spark 1.6

summerbulb
