
I am facing this error while running Spark on Databricks, trying to read a table in the Hudi file format.

I'm using Hudi 0.13.0 on Databricks 12.2 LTS (Apache Spark 3.3.2, Scala 2.12). Loading a Hudi dataset from S3 fails with this error:

    NoSuchMethodError: org.apache.spark.sql.execution.datasources.FileStatusCache.putLeafFiles(Lorg/apache/hadoop/fs/Path;[Lorg/apache/hadoop/fs/FileStatus;)V
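
For context, that signature matches the `putLeafFiles` method declared on open-source Spark 3.3.2's `FileStatusCache`. The sketch below is my reading of the OSS declaration (the Databricks runtime ships its own build of this class, which is presumably where the mismatch comes from):

    // Approximate declaration in open-source Spark 3.3.2 -- this is the
    // abstract method the Hudi 0.13.0 bundle is compiled against:
    abstract class FileStatusCache {
      def putLeafFiles(path: org.apache.hadoop.fs.Path,
                       leafFiles: Array[org.apache.hadoop.fs.FileStatus]): Unit
      // ... other members elided
    }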

Databricks Configuration

    12.2 LTS
    Spark = 3.3.2
    Scala = 2.12

Spark Config in Databricks

    spark.serializer org.apache.spark.serializer.KryoSerializer
    spark.sql.catalog.spark_catalog org.apache.spark.sql.hudi.catalog.HoodieCatalog
    spark.sql.extensions org.apache.spark.sql.hudi.HoodieSparkSessionExtension
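
To confirm the cluster actually applied these, the effective values can be printed from the running session (a quick check, nothing Hudi-specific; it assumes the keys were set at the cluster level):

    // Print the effective cluster-level settings from the live SparkSession.
    println(spark.sparkContext.getConf.get("spark.serializer"))
    println(spark.sparkContext.getConf.get("spark.sql.extensions"))
    println(spark.sparkContext.getConf.get("spark.sql.catalog.spark_catalog"))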

pom.xml

    <properties>
        <scala.version>2.12.15</scala.version>
        <spark.version>3.3.2</spark.version>
        <hadoop.version>3.3.2</hadoop.version>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.12</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>${spark.version}</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.hudi</groupId>
            <artifactId>hudi-spark3.3-bundle_2.12</artifactId>
            <version>0.13.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-aws</artifactId>
            <version>3.2.0</version>
        </dependency>
    </dependencies>
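
One thing I noticed while writing this up: `hadoop-aws` is pinned to 3.2.0 even though my `hadoop.version` property is 3.3.2. I don't know whether that contributes to the error, but `mvn dependency:tree -Dincludes=org.apache.hadoop` shows which Hadoop artifacts actually end up on the classpath.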

App.scala

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("App")
      .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
      .config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3.S3FileSystem")
      .config("spark.hadoop.fs.s3.aws.credentials.provider", "org.apache.hadoop.fs.s3.SimpleAWSCredentialsProvider")
      .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .getOrCreate
    // Read the Hudi table from S3 (actual path elided)
    val df = spark.read.format("hudi").load("path")
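
To check whether the runtime class really lacks the method, a reflection probe like this can be run in a notebook cell (just a diagnostic sketch):

    // Diagnostic: list the putLeafFiles overloads the runtime's FileStatusCache
    // actually exposes. If none take (Path, Array[FileStatus]), the Databricks
    // class differs from the OSS Spark 3.3.2 class the Hudi bundle was built for.
    import org.apache.spark.sql.execution.datasources.FileStatusCache

    classOf[FileStatusCache]
      .getMethods
      .filter(_.getName == "putLeafFiles")
      .foreach(println)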

How can I fix this issue? Thanks in advance.

Expected behavior: I should be able to read Hudi-format files with Spark on Databricks.
