
I have LZ4 compressed data in HDFS and I'm trying to decompress it in Apache Spark into an RDD. As far as I can tell, the only method in JavaSparkContext to read data from HDFS is textFile, which reads data exactly as it is in HDFS. I have come across articles on CompressionCodec, but all of them explain how to compress output to HDFS, whereas I need to decompress what is already on HDFS.

I am new to Spark, so I apologize in advance if I missed something obvious or if my conceptual understanding is incorrect, but it would be great if someone could point me in the right direction.

shoopdelang
  • I believe you want to look into the docs and examples for `SparkContext.newAPIHadoopFile()`. – Nick Chammas Jul 28 '14 at 04:44
  • I'm 80% sure `textFile` performs decompression on gzipped data. Did you try it? Does it not decompress your files transparently? – Daniel Darabos Jul 28 '14 at 21:30
  • I have tried `textFile` and no it does not decompress the data. – shoopdelang Jul 29 '14 at 06:02
  • @Daniel - `textFile()` does indeed decompress gzipped data (I've used it many times like that), but not data compressed with LZ4. For that, you'll need `newAPIHadoopFile()` (see the sketch after these comments). – Nick Chammas Aug 05 '14 at 15:15
  • gzip is not an option for huge files because its decompression cannot be parallelized, while bzip2 (though slow) and LZ4 can be. – Kiwy Nov 14 '18 at 15:50
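A minimal sketch of the `newAPIHadoopFile()` approach suggested in the comments, written in Java since the question uses `JavaSparkContext`. The path and class names are hypothetical, and it assumes the files were written with Hadoop's `Lz4Codec` (registered under `io.compression.codecs` by default in Hadoop 2.x):

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ReadLz4 {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("ReadLz4"));

        // TextInputFormat picks a codec based on the file extension
        // (.lz4 -> Lz4Codec) and decompresses records transparently.
        JavaPairRDD<LongWritable, Text> raw = sc.newAPIHadoopFile(
                "hdfs:///data/input.lz4",   // hypothetical path
                TextInputFormat.class,
                LongWritable.class,
                Text.class,
                sc.hadoopConfiguration());

        // Keys are byte offsets; keep only the line contents. Calling
        // toString() inside the map matters because Hadoop reuses the
        // Text objects between records.
        JavaRDD<String> lines = raw.map(pair -> pair._2().toString());

        System.out.println(lines.count());
        sc.stop();
    }
}
```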

1 Answer


Spark 1.1.0 supports reading LZ4 compressed files via sc.textFile. I got it working by using a Spark build that was compiled against a Hadoop version with LZ4 support (2.4.1 in my case).
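For reference, a minimal sketch of that usage (the HDFS path is hypothetical; assumes Spark ≥ 1.1.0 built against an LZ4-capable Hadoop):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class ReadLz4TextFile {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("ReadLz4TextFile"));

        // textFile() resolves the codec from the .lz4 extension and
        // yields already-decompressed lines.
        JavaRDD<String> lines = sc.textFile("hdfs:///data/input.lz4"); // hypothetical path
        System.out.println(lines.count());

        sc.stop();
    }
}
```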

After that, I built native libraries for my platform as described in the Hadoop docs and linked them to Spark via the --driver-library-path option.
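For example, an invocation along these lines (the native-library path and application names are hypothetical and depend on where you built or installed Hadoop's native libs):

```sh
spark-submit \
  --driver-library-path /opt/hadoop/lib/native \
  --class com.example.ReadLz4 \
  my-app.jar
```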

Without that linking, I got `native lz4 library not loaded` exceptions.

Depending on the Hadoop distribution you are using, the step of building the native libraries may be optional.
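If you're not sure whether your distribution already ships them, Hadoop's `checknative` command reports which native codecs (including lz4) the installed Hadoop can load:

```sh
hadoop checknative -a
```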

Vasyl Shchukin