The following curl command works perfectly to call, pass arguments to, and execute my "jobified" Spark program:
curl 'http://someserver:8090/jobs?appName=secondtest&classPath=Works.epJob&context=hiveContext' -d "inputparms=/somepath1 /somepath2"
Here is…
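For context, a minimal sketch of what a job invoked that way can look like on the Scala side. The package and object names are taken from the classPath in the curl call above; the body, and the use of the plain SparkContext API, are assumptions (the call above targets a hiveContext, whose job trait differs, but the config-reading part is the same):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}

import scala.util.Try

package Works {
  object epJob extends SparkJob {
    // reject the job early if the caller forgot -d "inputparms=..."
    override def validate(sc: SparkContext, config: Config): SparkJobValidation =
      Try(config.getString("inputparms"))
        .map(_ => SparkJobValid)
        .getOrElse(SparkJobInvalid("No inputparms config param"))

    override def runJob(sc: SparkContext, config: Config): Any = {
      // -d "inputparms=/somepath1 /somepath2" arrives as one string;
      // split it into the individual paths
      val paths = config.getString("inputparms").split("\\s+").toSeq
      paths
    }
  }
}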
I cloned the Spark Job Server from GitHub and ran sbt, sbt job-server-tests/package, and reStart. I got the WordCountExample running. The questions I have are:
1. Where does the job server look for the base Spark jars to run the job? Or does it come…
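For what it's worth, in a manual deployment the server does not bundle Spark itself; it is pointed at an existing installation through the deploy config. A sketch (the variable name follows config/local.sh.template in the repo; the path is an assumption):

# config/local.sh -- where the job server finds the Spark installation
SPARK_HOME=/usr/local/spark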
I am trying to submit a Spark job to the Spark Job Server with input in JSON format. However, in my case one of the values contains a '!' character, which prevents it from being parsed. Here is my input and response.
Input
curl -d "{"test.input1"="abc",…
I am trying to follow this documentation:
https://github.com/spark-jobserver/spark-jobserver#dependency-jars
Option 2 Listed in the docs says:
The dependent-jar-uris can also be used in job configuration param
when submitting a job. On an ad-hoc…
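For reference, an invocation along the lines of that option might look like this (a sketch; the jar path and the input key are assumptions):

curl -d '{ dependent-jar-uris = ["file:///some/path/deps.jar"], input.string = "a b c" }' \
  'http://someserver:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'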
I am using the Spark JobServer Java Client from this GitHub project:
https://github.com/bluebreezecf/SparkJobServerClient
I am able to upload a Jar containing the Job I want to execute to Spark JobServer. The logs indicate it is stored in…
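As an aside, the client wraps the job server's REST API, so the same upload can be reproduced with curl against the documented /jars endpoint, which is handy for ruling the client out when debugging (the jar path and app name below are assumptions):

# POST the jar binary to /jars/<appName>; later /jobs calls reference it
# via appName=myapp
curl --data-binary @target/scala-2.10/my-job.jar http://someserver:8090/jars/myapp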
I'm using the Spark Job Server docker image:
docker run -d -p 8090:8090 --name sjs --net=<network> -e SPARK_MASTER=spark://<master>:7077 velvia/spark-jobserver:0.6.2.mesos-0.28.1.spark-1.6.1
While it appears to be working, when I'm submitting a…
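A quick smoke test against the container (endpoints from the project README) can confirm the server is reachable before submitting anything:

curl http://localhost:8090/jars      # jars uploaded so far
curl http://localhost:8090/contexts  # contexts the server knows about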
I was practising developing a sample model using the online resources provided on the Spark website. I managed to create the model and run it on sample data using Spark-Shell, but how do I actually run the model in a production environment? Is it via Spark…
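One common route (an assumption about where this question is heading) is to package the model code as an application jar and hand it to spark-submit instead of the shell; a sketch with hypothetical names:

spark-submit --class com.example.MyModelApp \
  --master spark://master-host:7077 \
  my-model-assembly.jar /path/to/input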
I have set up a 3-node Hadoop cluster with HA for the NameNode and ResourceManager.
I have also installed the Spark Job Server on one of the NameNode machines.
I have tested running job-server-tests examples like WordCountExample and LongPiJob, and it works…
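For comparison, those test jobs are typically exercised like this (a sketch following the repo's examples; the host and appName are assumptions):

curl -d "input.string = a b c a b see" \
  'http://namenode1:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'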
I have a Spark standalone cluster running on a few machines. All workers are using 2 cores and 4GB of memory. I can start a job server with ./server_start.sh --master spark://ip:7077 --deploy-mode cluster --conf spark.driver.cores=2 --conf…
I am trying to set up a Spark JobServer (SJS) to execute jobs on a standalone Spark cluster. I am trying to deploy SJS on one of the non-master nodes of the Spark cluster. I am not using Docker, but am trying to do it manually.
I am confused with the help…
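For a manual, non-Docker deployment, the piece that usually needs attention is the environment config the server is started with; a minimal sketch in the server's HOCON format (hostname and values are assumptions):

# config/myenv.conf
spark {
  master = "spark://spark-master:7077"   # the standalone master to submit to
  jobserver {
    port = 8090
  }
}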
I am creating a Spark Job Server job which connects to Cassandra. After getting the records, I want to perform a simple group-by and sum on them. I am able to retrieve the data, but I cannot print the output. I have tried googling for hours and have…
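One point that often explains this: nothing printed on the executors comes back to the caller; the value returned from runJob is what gets serialized as the job result. A sketch under that model (keyspace, table, and column names are hypothetical; assumes the spark-cassandra-connector is on the classpath):

import com.datastax.spark.connector._
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object GroupBySumJob extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    // read (category, amount) pairs from Cassandra
    val rows = sc.cassandraTable[(String, Double)]("my_keyspace", "my_table")
      .select("category", "amount")
    // group by key, sum the values, and collect the (small) aggregate so
    // it can be *returned* -- println on executors never reaches the client
    rows.reduceByKey(_ + _).collect().toMap
  }
}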
I have launched a Spark cluster in standalone mode.
start-master.sh -h 10.0.0.56
start-slave.sh spark://10.0.0.56:7077
I can successfully run jobs using the spark-core lib for Scala. I want to use Spark JobServer for job management. I started it in Docker on…
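For reference, wiring the dockerized job server to that master is typically just the SPARK_MASTER environment variable (the image tag is copied from the docker question earlier on this page; everything else is an assumption):

docker run -d -p 8090:8090 -e SPARK_MASTER=spark://10.0.0.56:7077 \
  velvia/spark-jobserver:0.6.2.mesos-0.28.1.spark-1.6.1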
I want to write unit tests for spark jobs executed in the spark-jobserver.
This works fine unless I need to access the config, e.g. to check it for specific input values like:
Try(config.getString("myKey"))
.map(x => SparkJobValid)
…
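One way to make that testable (a sketch; MyJob is a hypothetical job object, and the test uses scalatest with a local SparkContext): build the Typesafe Config directly in the test with ConfigFactory.parseString, so validate() can be exercised without a running job server.

import com.typesafe.config.ConfigFactory
import org.apache.spark.SparkContext
import org.scalatest.FunSpec
import spark.jobserver.SparkJobValid

class MyJobSpec extends FunSpec {
  describe("MyJob.validate") {
    it("returns SparkJobValid when myKey is present") {
      val sc = new SparkContext("local[2]", "MyJobSpec")
      try {
        // construct the config in-process -- no job server involved
        val config = ConfigFactory.parseString("myKey = someValue")
        assert(MyJob.validate(sc, config) == SparkJobValid)
      } finally sc.stop()
    }
  }
}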
Getting started with spark-jobserver, I learnt that data frames can be flattened, as in Spark flattening out dataframes, but this still does not satisfy https://github.com/spark-jobserver/spark-jobserver#job-result-serialization
If this is the result I…
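One workable pattern (a sketch, not the only option): collect the flattened frame and turn each Row into a plain Scala Map, since maps, seqs, and primitives are the kinds of values the result serialization can handle:

import org.apache.spark.sql.DataFrame

// convert a (small, already-flattened) DataFrame into JSON-friendly
// Scala collections so it can be returned from runJob as the job result
def dfToResult(df: DataFrame): Seq[Map[String, Any]] = {
  val cols = df.columns
  df.collect().toSeq.map(row => cols.zip(row.toSeq).toMap)
}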
According to the SparkJobServer documentation:
validate allows for an initial validation of the context and any
provided configuration. If the context and configuration are OK to run the job, returning spark.jobserver.SparkJobValid will let the…
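The repo's WordCountExample illustrates the contract; roughly (following the README's example, with the config key from there):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver._

import scala.util.Try

object WordCountExample extends SparkJob {
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    Try(config.getString("input.string"))
      .map(_ => SparkJobValid)                        // config OK: run the job
      .getOrElse(SparkJobInvalid("No input.string config param"))

  override def runJob(sc: SparkContext, config: Config): Any =
    sc.parallelize(config.getString("input.string").split(" ").toSeq)
      .countByValue()
}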