17

I am getting the below exception when I run unit tests for my Spark Streaming code with ScalaTest, using SBT on Windows.

sbt "testOnly <<ClassName>>"


2018-06-18 02:39:00 ERROR Executor:91 - Exception in task 1.0 in stage 3.0 (TID 11)
java.lang.NoSuchMethodError: net.jpountz.lz4.LZ4BlockInputStream.<init>(Ljava/io/InputStream;Z)V
    at org.apache.spark.io.LZ4CompressionCodec.compressedInputStream(CompressionCodec.scala:122)
    at org.apache.spark.serializer.SerializerManager.wrapForCompression(SerializerManager.scala:163)
    at org.apache.spark.serializer.SerializerManager.wrapStream(SerializerManager.scala:124)
    at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50)
    at org.apache.spark.shuffle.BlockStoreShuffleReader$$anonfun$2.apply(BlockStoreShuffleReader.scala:50)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:417)
    at org.apache.spark.storage.ShuffleBlockFetcherIterator.next(ShuffleBlockFetcherIterator.scala:61)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:32)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.sort_addToSorter$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10$$anon$1.hasNext(WholeStageCodegenExec.scala:614)
    at org.apache.spark.sql.execution.GroupedIterator$.apply(GroupedIterator.scala:29)
    at org.apache.spark.sql.execution.streaming.FlatMapGroupsWithStateExec$StateStoreUpdater.updateStateForKeysWithData(FlatMapGroupsWithStateExec.scala:176)

I tried a couple of things to exclude the net.jpountz.lz4 jar (following suggestions from other posts), but I still get the same error.

I am currently using Spark 2.3, ScalaTest 3.0.5, and Scala 2.11. I only see this issue after upgrading to Spark 2.3 and ScalaTest 3.0.5.

Any suggestions?

KK2486
  • First suggestion: please edit the title and the formatting of your question to make it more readable. Afterwards, you should probably share some lines of the code you've used – Nico Haase Jun 18 '18 at 11:41
  • Can you post your build file? – soote Jun 27 '18 at 22:47
  • I was getting the same error while running a job with Parquet output; adding the following property made it work fine: --conf spark.io.compression.codec=snappy – Akash Tantri Jan 06 '21 at 11:11

2 Answers

33

Kafka pulls in a version of the lz4 library that conflicts with the one Spark expects, and that's what caused this issue for me.

This is how you can exclude the dependency in your sbt file:

lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

lazy val kafkaClients = "org.apache.kafka" % "kafka-clients" % userKafkaVersionHere excludeAll(excludeJpountz) // add more exclusions here

When you use this kafkaClients dependency, it will now exclude the problematic lz4 library.
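For context, here is a minimal build.sbt sketch of where these lines could sit; the version values are placeholders for illustration, not a recommendation:

// build.sbt -- illustrative sketch only; substitute your own versions
val sparkVersion = "2.3.0"
val kafkaVersion = "0.11.0.2" // placeholder for a 0.11.x client that pulls in net.jpountz.lz4

// drop the old lz4 artifact that conflicts with the one Spark ships with
lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"       % sparkVersion,
  "org.apache.spark" %% "spark-streaming" % sparkVersion,
  "org.apache.kafka"  % "kafka-clients"   % kafkaVersion excludeAll(excludeJpountz),
  "org.scalatest"    %% "scalatest"       % "3.0.5" % Test
)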


Update: This appears to be an issue with Kafka 0.11.x.x and earlier versions. As of 1.x.x, Kafka seems to have moved away from the problematic net.jpountz.lz4 library. Therefore, using the latest Kafka (1.x) with the latest Spark (2.3.x) should not have this issue.
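For example, depending on a 1.x client directly should avoid the old artifact altogether (sketch only; 1.1.0 is just one 1.x release that reportedly works, per the comments below):

// build.sbt -- Kafka 1.x clients no longer depend on net.jpountz.lz4:lz4
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "1.1.0"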

marios
  • This solved my issue after upgrading to spark 2.3. I had to exclude it on `org.apache.kafka:kafka` and `org.apache.spark:spark-streaming-kafka-0-10`. – soote Jun 27 '18 at 22:54
  • Thanks for the update soote, that may very well also have the same dependency. – marios Jun 27 '18 at 23:51
  • Still not solved for me. Here are the changes I made to my build.sbt: name := "ABC123" scalaVersion := "2.11.11" val sparkVersion = "2.3.0" lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4") libraryDependencies ++= Seq( . . "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion excludeAll(excludeJpountz), "org.apache.spark" %% "spark-sql-kafka-0-10" % sparkVersion excludeAll(excludeJpountz), ) – KK2486 Jun 28 '18 at 15:57
  • The right way to solve this for complex builds is to install https://github.com/jrudolph/sbt-dependency-graph and then use `whatDependsOn net.jpountz.lz4 lz4 1.3.0`; it will give you a graph of all the libraries you have that depend on this. You will need to add the same exclude rule to all of those (see the sketch after these comments). – marios Jun 28 '18 at 18:13
  • Thanks a lot for the tip. There was another project where I needed to add this exclusion; after that everything worked. – KK2486 Jun 29 '18 at 10:08
  • Upgrading my Kafka version to 1.1.0 resolved this issue. – Sudheer Palyam Sep 02 '18 at 09:20
  • Just to clarify something that initially confused us: `net.jpountz.lz4:lz4` has been superseded by the artefact `org.lz4:lz4-java`, but they kept the same package names. So you can end up with both artefacts in your packaging, and if you're unlucky the ClassLoader finds the old one without the new method signature. – jrg Apr 09 '19 at 11:27
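Putting that dependency-graph tip together, a rough sketch of the setup; the plugin coordinates and version below are assumptions based on the linked project, so check its README before copying:

// project/plugins.sbt -- sketch, assuming the sbt-dependency-graph plugin linked above
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.2")

// then, from a shell, ask what pulls in the old lz4 artifact:
//   sbt "whatDependsOn net.jpountz.lz4 lz4 1.3.0"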
3

The artifact net.jpountz.lz4:lz4 was moved to org.lz4:lz4-java.

By adding libraryDependencies += "org.lz4" % "lz4-java" % "1.7.1" to the build, the issue was resolved.
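A minimal sketch of how that can look in practice, combined with the exclusion rule from the other answer; the Kafka version here is a placeholder, and whether you need the exclusion at all depends on your dependency tree:

// build.sbt -- sketch: drop the old artifact and pin the renamed one instead
lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")

libraryDependencies ++= Seq(
  "org.apache.kafka" % "kafka-clients" % "0.11.0.2" excludeAll(excludeJpountz), // placeholder version
  "org.lz4"          % "lz4-java"      % "1.7.1"                                // superseded coordinates for lz4
)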