I am working with Spark 1.3.0. My build.sbt looks as follows:
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.3.0" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.3.0" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.3.0" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.3.0" % "provided",
  "org.springframework.security" % "spring-security-web" % "3.0.7.RELEASE",
  "com.databricks" % "spark-csv_2.10" % "1.4.0"
)
// META-INF discarding (sbt-assembly merge strategy)
mergeStrategy in assembly <<= (mergeStrategy in assembly) { (old) =>
  {
    case PathList("META-INF", xs @ _*) => MergeStrategy.discard
    case x => MergeStrategy.first
  }
}
With this sbt file, Hadoop 2.2.0 is pulled in transitively and used during compilation, but my runtime environment runs Hadoop 2.6.0. Can anyone help me exclude the Hadoop dependency that the Spark libraries bring in, and declare Hadoop 2.6.0 in the sbt file instead?
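For reference, the direction I'm imagining is something like the sketch below: exclude the org.apache.hadoop artifacts that each Spark module pulls in via sbt's ExclusionRule, then add hadoop-client 2.6.0 explicitly. I'm not sure this is right; the hadoop-client artifact name and the "provided" scope are my assumptions.

libraryDependencies ++= Seq(
  // Sketch: strip transitive Hadoop from each Spark module via an exclusion rule
  ("org.apache.spark" %% "spark-core" % "1.3.0" % "provided")
    .excludeAll(ExclusionRule(organization = "org.apache.hadoop")),
  ("org.apache.spark" %% "spark-sql" % "1.3.0" % "provided")
    .excludeAll(ExclusionRule(organization = "org.apache.hadoop")),
  // (same exclusion repeated for spark-streaming and spark-mllib)
  // Sketch: pin the Hadoop version my cluster actually runs (assumed artifact/scope)
  "org.apache.hadoop" % "hadoop-client" % "2.6.0" % "provided"
)

Is this the right approach, or is there a cleaner way to do it?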
Thanks