I'm facing a somewhat weird problem with Spark, Google Guava, and SBT. I'm writing a Spark 1.5.2 app that uses a component from the latest version of Google Guava.
In my build.sbt I thus specified the following dependencies:
libraryDependencies += "org.apache.hadoop" % "hadoop-client" % "2.3.0" % "provided"
libraryDependencies += "org.apache.spark" % "spark-core_2.10" % "1.5.2" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.5.2" % "provided"
libraryDependencies += "com.google.guava" % "guava" % "19.0"
I then run sbt assembly and spark-submit.
The problem is that Spark 1.5.2 already ships with an older version of Guava, in which a couple of the methods I'm using either behave differently or are not defined. As a result, when I run my app, the older version is picked up and I don't get the results I expect.
Does anybody know if there is a way to specify that I don't care which Guava version Spark uses internally, but that my own code should run against the version I declared in build.sbt?
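One thing I've read about (but am not sure applies here) is shading the conflicting package with sbt-assembly, so that my copy of Guava is relocated under a different package name and can't collide with Spark's. A minimal sketch of what I tried in build.sbt — the target package name `my_shaded.guava` is just something I made up:

```scala
// build.sbt — sketch, assuming sbt-assembly is enabled in project/plugins.sbt
// Relocate my Guava classes so they don't clash with the Guava version
// that Spark 1.5.2 puts on the classpath at runtime.
assemblyShadeRules in assembly := Seq(
  // Rewrite every reference to com.google.common.* inside the fat jar
  // to my_shaded.guava.* (the "@1" keeps the rest of the package path).
  ShadeRule.rename("com.google.common.**" -> "my_shaded.guava.@1").inAll
)
```

Is this the right approach, or is there a cleaner way (e.g. some classpath ordering option for spark-submit)?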
Thanks for any help.