I'm facing a somewhat strange problem with Spark, Google Guava, and SBT.

I'm writing a Spark 1.5.2 app that uses a component from the latest version of Google Guava. In my build.sbt I thus specified the following dependencies:

libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0" % "provided"

libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.5.2" % "provided"

libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.5.2" % "provided"

libraryDependencies += "com.google.guava" % "guava" % "19.0"

I then run sbt assembly and spark-submit. The problem is that Spark 1.5.2 already ships with an older version of Guava, in which a couple of the methods I'm using either behave differently or are not defined at all. As a result, when I run my app the older version of Guava is picked up and I don't get the results I expect.

Does anybody know if there is a way to tell Spark that I don't care which Guava version it uses internally, but that my own code should run against the version I specified in build.sbt?

Thanks for any help.

Alberto
  • You can't "not care" about which version Spark uses if you want to use the same library - eventually Spark and your code must share a classpath, which means they can't use two different versions of the same class. See the duplicate question for ways to work around this: use the same version Spark does, or shade your preferred Guava version (a sketch of the shading approach is below) – Tzach Zohar Mar 04 '16 at 11:04
  • Hello @TzachZohar, thanks for pointing me to the question. – Alberto Mar 04 '16 at 11:35
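As the comment suggests, one workaround is to shade Guava inside the fat jar so that your code gets version 19.0 while Spark keeps using its own. A minimal sketch of that approach, assuming sbt-assembly 0.14.x or later (where the ShadeRule API is available); the target package name myshaded.com.google.common is an arbitrary placeholder, added to build.sbt:

assemblyShadeRules in assembly := Seq(
  // Relocate the Guava 19.0 classes bundled into the fat jar and rewrite
  // the references in this project's own classes to the new package, so
  // Spark's older Guava on the classpath can no longer shadow them.
  ShadeRule.rename("com.google.common.**" -> "myshaded.com.google.common.@1")
    .inLibrary("com.google.guava" % "guava" % "19.0")
    .inProject
)

After rerunning sbt assembly, the jar contains Guava 19.0 under the relocated package; since Spark never loads classes from that package, the version conflict goes away. Note that this only works because guava is a regular (not "provided") dependency, so it actually ends up in the assembly.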

0 Answers