
I have read an Avro file into a Spark RDD and need to convert it into a SQL DataFrame. How do I do that?

This is what I have done so far:

import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.{AvroInputFormat, AvroWrapper}
import org.apache.hadoop.io.NullWritable

val path = "hdfs://dds-nameservice/user/ghagh/"
val avroRDD = sc.hadoopFile[AvroWrapper[GenericRecord], NullWritable, AvroInputFormat[GenericRecord]](path)

When I do:

avroRDD.take(1)

I get back:

res1: Array[(org.apache.avro.mapred.AvroWrapper[org.apache.avro.generic.GenericRecord], org.apache.hadoop.io.NullWritable)] = Array(({"column1": "value1", "column2": "value2", "column3": value3,...

How do I convert this to a Spark SQL DataFrame?

I am using Spark 1.6

Can anyone tell me if there is an easy solution for this?


2 Answers


For a DataFrame I'd go with the Avro data source directly:

  • Include spark-avro in the packages list. For the latest version use:

    com.databricks:spark-avro_2.11:3.2.0
    
  • Load the file:

    val df = spark.read
      .format("com.databricks.spark.avro")
      .load(path)
    
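Note that spark.read assumes Spark 2.x. On Spark 1.6 (which the question uses) there is no SparkSession, so you would go through the sqlContext instead, and you would need the older spark-avro 2.x line, since spark-avro 3.x requires Spark 2.x. A minimal sketch, assuming com.databricks:spark-avro_2.10:2.0.1 is on the classpath (check the spark-avro compatibility table for your exact build):

    // Spark 1.6: start the shell with an older spark-avro line, e.g.
    //   spark-shell --packages com.databricks:spark-avro_2.10:2.0.1

    val df = sqlContext.read
      .format("com.databricks.spark.avro")
      .load("hdfs://dds-nameservice/user/ghagh/")

    // Register the DataFrame so it can be queried with Spark SQL
    // (column names taken from the sample record in the question):
    df.registerTempTable("avro_data")
    sqlContext.sql("SELECT column1, column2 FROM avro_data LIMIT 10").show()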
    `--packages org.apache.spark:spark-avro_2.11:2.4.4` works too, whereas `--packages org.apache.spark:spark-avro_2.12:2.4.4` doesn't. Details are in the [issue](https://issues.apache.org/jira/browse/SPARK-27623) – Devi Nov 19 '19 at 05:18

If your project uses Maven, add the latest spark-avro dependency below to your pom.xml:

<dependency>
   <groupId>com.databricks</groupId>
   <artifactId>spark-avro_2.11</artifactId>
   <version>4.0.0</version>
</dependency>

After that you can read the Avro file as below:

val df = spark.read
  .format("com.databricks.spark.avro")
  .load("C:\\Users\\alice\\inputs\\sample_data.avro")
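
A quick sanity check on the loaded DataFrame (column1 is taken from the sample record in the question; substitute your own column names):

    df.printSchema()
    df.select("column1").show(5)

    // With Spark 2.x you can also register it for SQL queries:
    df.createOrReplaceTempView("sample_data")
    spark.sql("SELECT count(*) FROM sample_data").show()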