I'm wondering how to go about submitting Spark "jobs" to a single application (allowing for sharing of RDD work, but also code/execution independence of modules). I've seen the spark-jobserver project, formerly at Ooyala, but I noticed it doesn't yet support…
The goal is to create the following on a local instance of Spark JobServer:
object foo extends SparkJob with NamedRddSupport
Question: How can I fix the following error which happens on every job:
{
"status": "ERROR",
"result": {
"message":…
I have a production environment that consists of several (persistent and ad-hoc) EMR Spark clusters.
I would like to use one instance of spark-jobserver to manage the job JARs for this environment in general, and be able to specify the intended…
I am trying to run Spark JobServer on a multi-node cluster.
I have set master="yarn-client" on the namenode
When I run server_start.sh
I get an error
Error: Exception thrown by the agent : java.lang.NullPointerException
The error is not coming due…
Can somebody tell me how to persist namedObjects in a Spark JobServer context? I know this is possible, but I haven't found the solution yet.
Thanks a lot in advance!
I am trying to set up Spark JobServer on my YARN cluster.
I am doing
./bin/server_package.sh ec2
This creates a folder in /tmp/jobserver.
When I try to run ./server_start.sh,
I receive this error:
Uncaught error from thread [JobServer-akka.actor.default-dispatcher-2]…
I have:
- Hadoop
- Spark JobServer
- SQL Database
I have created a file to access my SQL database from a local instance of the Spark JobServer. In order to do this, I first have to load my JDBC driver with this command:…
I created a Spark job with IntelliJ, and I want it to be loaded and run by Spark JobServer. For this I followed the steps in this link: http://github.com/ooyala/spark-jobserver
And my Spark version is 1.4.0.
This is the scala code in my…
I run a virtual machine with local instances of Hadoop and Spark JobServer on it. I created a file named 'test.txt' on HDFS that I want to open from the Spark JobServer. I wrote the following code to do this:
val test1 =…
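For reference, reading an HDFS file from inside a jobserver job is usually done with the context's textFile and a fully qualified HDFS URI; in the sketch below the namenode host/port and the object name are assumptions, not values from the original question:

```scala
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

// Sketch: open an HDFS file from within a jobserver job.
object ReadTestJob extends SparkJob {

  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // Fully qualified HDFS URI; host and port must match fs.defaultFS
    // in your Hadoop configuration (localhost:9000 is a common default).
    val test1 = sc.textFile("hdfs://localhost:9000/test.txt")
    test1.count()
  }
}
```

If the jobserver process can see the Hadoop configuration (HADOOP_CONF_DIR), a bare path like "/test.txt" also works, since the default filesystem is resolved from that configuration.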
I am trying to run ./server_start.sh with spark-jobserver,
but it says
"Missing /home/spark/spark-jobserver1.5.1/bin/settings.sh, exiting".
I also checked ./server_start.sh on GitHub, where I found this (as in the picture below):
I am running a Spark job with the Spark Job Server in which I pass job parameters with an HTTP post (much like the word count example here: https://github.com/spark-jobserver/spark-jobserver).
At the moment I can successfully pass these parameters…
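For reference, in the bundled WordCountExample the parameters are posted as Typesafe-Config text in the HTTP body and read inside the job with config.getString("input.string"); a typical invocation, with the jar path and appName taken from the project's README-style example (so they may differ in your setup), looks like:

```shell
# Upload the example jar under the app name "test" (jar path is illustrative).
curl --data-binary @job-server-tests/target/job-server-tests.jar \
  localhost:8090/jars/test

# Run the job, passing "input.string" in the POST body as Typesafe-Config text.
curl -d "input.string = a b c a b see" \
  'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
```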
When I run curl -d "" 'localhost:8090/contexts/test-context?num-cpu-cores=4&memory-per-node=512m', it creates a SparkContext with no problem, but when I want to make a SparkSQL context I get an error. I used this line to make it: curl -d ""…
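One commonly reported fix for this (assuming your jobserver build includes the job-server-extras module, which ships the SQL context factory) is to request a SQL-backed context explicitly via the context-factory parameter; a sketch:

```shell
# Create a context backed by a SQLContext instead of a plain SparkContext.
# The factory class comes from spark-jobserver's job-server-extras module;
# it must be on the jobserver classpath for this to work.
curl -d "" \
  'localhost:8090/contexts/sql-context?context-factory=spark.jobserver.context.SQLContextFactory'
```

Jobs that run in such a context then extend the SQL-flavored job trait rather than plain SparkJob.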
So I'm trying to run a job that simply runs a query against Cassandra using spark-sql. The job is submitted fine and starts fine. This code works when it is not run through spark-jobserver (i.e. when simply using spark-submit). Could someone…
I have a problem with job-server-0.5.0 after upgrading DSE 4.6 to 4.7. If I run server_start.sh I get the error
"Failed to find Spark assembly in /usr/share/dse/spark/assembly/target/scala-2.10
You need to build Spark before running this program."
I…
Just started experimenting with the JobServer and would like to use it in our production environment.
We usually run spark jobs individually in yarn-client mode and would like to shift towards the paradigm offered by the Ooyala Spark JobServer.
I am…