
I am interested in retrieving the location value of a Hive table given a SparkSession object. One way to obtain this value is by parsing the output of the following SQL query:

describe formatted <table name>

I was wondering if there is another way to obtain the location value without having to parse the output. An API would be great, in case the output of the above command changes between Hive versions. If an external dependency is needed, which would it be? Is there some sample Spark code that can obtain the location value?

code

6 Answers


Here is the correct answer:

import org.apache.spark.sql.catalyst.TableIdentifier

lazy val tblMetadata = spark.sessionState.catalog.getTableMetadata(TableIdentifier(tableName, Some(schema)))
val location = tblMetadata.location // java.net.URI of the table's storage directory
RaLDeN
Guillaume
    @GuilhermedeLazari here it is `spark._jsparkSession.sessionState().catalog().getTableMetadata(spark.sparkContext._jvm.org.apache.spark.sql.catalyst.TableIdentifier('table', spark.sparkContext._jvm.scala.Some('database'))).storage().locationUri().get()` – pltc Oct 21 '19 at 22:29
  • I prefer this over submitting a new spark job (describe table). – Ravindra Jun 01 '22 at 10:46

You can also use the .toDF method on the desc formatted table output, then filter the resulting DataFrame.

DataframeAPI:

scala> :paste
spark.sql("desc formatted data_db.part_table")
.toDF // the result has 3 columns: col_name, data_type, comment
.filter('col_name === "Location") // filter on col_name (the 'col syntax needs import spark.implicits._)
.collect()(0)(1)
.toString

Result:

String = hdfs://nn:8020/location/part_table

(or)

RDD Api:

scala> :paste
spark.sql("desc formatted data_db.part_table")
.collect()
.filter(r => r(0).equals("Location")) // filter on the r(0) value
.map(r => r(1)) // keep only the location
.mkString // convert to a string
.split("8020")(1) // adjust the split for your namenode host/port, etc.

Result:

String = /location/part_table
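The filtering step in both variants is plain string matching on the collected rows, so the logic can be sketched (and unit-tested) without a SparkSession. Here is a minimal sketch; the tuple shape below is an assumption standing in for the (col_name, data_type, comment) rows of desc formatted, and extractLocation is a hypothetical helper name:

```scala
// Hypothetical helper: pick the Location value out of rows shaped like
// the (col_name, data_type, comment) output of `desc formatted <table>`.
def extractLocation(rows: Seq[(String, String, String)]): Option[String] =
  rows.collectFirst { case ("Location", value, _) => value }

// Sample rows standing in for spark.sql("desc formatted ...").collect()
val rows = Seq(
  ("col1", "string", ""),
  ("Location", "hdfs://nn:8020/location/part_table", ""),
  ("Provider", "hive", "")
)

println(extractLocation(rows)) // Some(hdfs://nn:8020/location/part_table)
```

Returning an Option avoids the IndexOutOfBoundsException you would get from .collect()(0)(1) when the Location row is missing.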
notNull

Here is how to do it in PySpark:

(spark.sql("desc formatted mydb.myschema")
      .filter("col_name=='Location'")
      .collect()[0].data_type)
Joe J

First approach

You can use input_file_name with a DataFrame.

It will give you the absolute file path of a part file.

import org.apache.spark.sql.functions.input_file_name

spark.read.table("zen.intent_master").select(input_file_name()).take(1)

Then extract the table path from it.
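That extraction is plain string slicing, so it can be sketched without Spark. A minimal sketch, with the caveat that the directory layout is an assumption (partitioned tables add extra path levels between the table directory and the part file), and tableDirFromPartFile is a hypothetical helper name:

```scala
// Hypothetical helper: strip the file name from an absolute part-file path,
// as returned by input_file_name(), to recover the containing directory.
def tableDirFromPartFile(partFilePath: String): String =
  partFilePath.substring(0, partFilePath.lastIndexOf('/'))

println(tableDirFromPartFile("hdfs://nn:8020/warehouse/zen.db/intent_master/part-00000"))
// hdfs://nn:8020/warehouse/zen.db/intent_master
```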

Second approach

It's more of a hack, you could say.

package org.apache.spark.sql.hive

import java.net.URI

import org.apache.spark.sql.catalyst.catalog.{InMemoryCatalog, SessionCatalog}
import org.apache.spark.sql.catalyst.parser.ParserInterface
import org.apache.spark.sql.internal.{SessionState, SharedState}
import org.apache.spark.sql.SparkSession

class TableDetail {
  def getTableLocation(table: String, spark: SparkSession): URI = {
    val sessionState: SessionState = spark.sessionState
    val sharedState: SharedState = spark.sharedState
    val catalog: SessionCatalog = sessionState.catalog
    val sqlParser: ParserInterface = sessionState.sqlParser
    val client = sharedState.externalCatalog match {
      case catalog: HiveExternalCatalog => catalog.client
      case _: InMemoryCatalog => throw new IllegalArgumentException("In Memory catalog doesn't " +
        "support hive client API")
    }

    val idtfr = sqlParser.parseTableIdentifier(table)

    require(catalog.tableExists(idtfr), s"$idtfr does not exist")
    val rawTable = client.getTable(idtfr.database.getOrElse("default"), idtfr.table)
    rawTable.location
  }
}
Kaushal

Use ExternalCatalog

scala> spark
res15: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@4eba6e1f

scala> val metastore = spark.sharedState.externalCatalog
metastore: org.apache.spark.sql.catalyst.catalog.ExternalCatalog = org.apache.spark.sql.hive.HiveExternalCatalog@24b05292

scala> val location = metastore.getTable("meta_data", "mock").location
location: java.net.URI = hdfs://10.1.5.9:4007/usr/hive/warehouse/meta_data.db/mock
macduan

Use this as a reusable function in your Scala project:

def getHiveTablePath(tableName: String, spark: SparkSession): String = {
  import org.apache.spark.sql.functions._
  val sql: String = String.format("desc formatted %s", tableName)
  val result: DataFrame = spark.sql(sql).filter(col("col_name") === "Location")
  result.show(false) // just for debug purposes
  val info: String = result.collect().mkString(",")
  val path: String = info.split(',')(1)
  path
}

The caller would be:

    println(getHiveTablePath("src", spark)) // you can prefix the schema if you have one

Result (I executed locally, so the path starts with file:/; on HDFS it would start with hdfs://):

+--------+------------------------------------+-------+
|col_name|data_type                           |comment|
+--------+------------------------------------+-------+
|Location|file:/Users/hive/spark-warehouse/src|       |
+--------+------------------------------------+-------+

file:/Users/hive/spark-warehouse/src
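Note that info.split(',')(1) only works because collected rows stringify like [Location,&lt;path&gt;,] and the path itself contains no comma. A slightly safer pure-string sketch of that parse (the row string format here is an assumption, and pathFromRowString is a hypothetical helper name):

```scala
// Hypothetical helper: parse "[Location,<path>,]" -- the toString form of a
// collected Row -- taking everything between the first comma and the final ",]".
def pathFromRowString(row: String): String =
  row.stripSuffix(",]").drop(row.indexOf(',') + 1)

println(pathFromRowString("[Location,file:/Users/hive/spark-warehouse/src,]"))
// file:/Users/hive/spark-warehouse/src
```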

Ram Ghadiyaram