I cloned spark-jobserver from GitHub, ran `sbt`, then `job-server-tests/package` and `reStart`, and got the WordCountExample running. My questions are:

1. Where does the job server look for the base Spark jars to run the job? Or does it come with its own version?
2. I have Spark 2.0 running on the same machine where I ran the job server. The GitHub documentation says the supported Spark version is 1.6.2. Any idea if I can use it with 2.0 (at my own risk, of course)? Has anyone tried this?
1 Answer
- Spark is a dependency of spark-jobserver, so when you run `job-server/reStart` it uses those dependent libraries to run Spark.
- As of now spark-jobserver supports Spark 1.6.2. Work on 2.0.0 has not yet started.
- Someone has a branch with 2.0 changes; see https://github.com/spark-jobserver/spark-jobserver/issues/570
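To illustrate the first point: the Spark jars come from sbt's managed dependencies, not from a local Spark installation, so there is no jar directory to repoint. A minimal sketch of what such a declaration looks like in an sbt build (the version value and `provided` scope here are illustrative; the actual file and setting names in spark-jobserver's build may differ):

```scala
// build.sbt (sketch) -- Spark is pulled in as a managed dependency, so sbt
// downloads the jars into its local cache and puts them on the classpath
// that job-server/reStart runs with.
val sparkVersion = "1.6.2" // bumping this to "2.0.0" is the at-your-own-risk experiment

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
)
```

This is also why 2.0 support needs source changes rather than a configuration tweak: changing the version means editing the build and recompiling against the new Spark APIs.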

noorul
- Thanks for responding. So where does the job server pick up the Spark jar files from? I mean, from which location? Can I change that to point to Spark 2.0 and try running the server? – Rishi S Sep 19 '16 at 13:57
- Someone mentioned http://github.com/f1yegor/spark-jobserver in https://github.com/spark-jobserver/spark-jobserver/issues/570. You could try that. – noorul Sep 20 '16 at 05:03
- @RishiS Did the answer help you? – noorul Sep 28 '16 at 03:21