I'm using Apache Spark 2.1.1 and Spark JobServer Spark 2.0 Preview.
I see on the Spark UI's Environment tab that there is a config property spark.akka.threads = 12, but in the Spark 2.1.1 Configuration documentation this parameter doesn't…
We are working on Qubole with Spark version 2.0.2.
We have a multi-step process in which all the intermediate steps write their output to HDFS, and this output is later consumed by the reporting layer.
As per our use case, we want to avoid writing to…
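A minimal sketch of one way to avoid the intermediate HDFS writes, assuming the steps can run inside a single SparkSession and the intermediate data fits the cluster's cache; the paths and column names are hypothetical:

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder().appName("multi-step").getOrCreate()

// Step 1: build the intermediate result once and keep it in memory
// (spilling to disk if needed) instead of writing it to HDFS.
val step1 = spark.read.parquet("/input/events")
  .filter("event_date >= '2017-01-01'")
  .persist(StorageLevel.MEMORY_AND_DISK)

// Later steps reuse the cached intermediate directly.
val step2 = step1.groupBy("user_id").count()
val step3 = step1.join(step2, "user_id")

step3.write.parquet("/output/report")  // only the final output hits storage
step1.unpersist()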
Every time I invoke the JobServer runJob API below, time-costly logic reconstructs an object inside the runJob call:
runJob(sc: SparkContext, config: Config)
What is the best practice to store the object in memory to avoid repeat…
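One common pattern, sketched below under the assumption that all jobs run in the same long-lived context (and therefore the same JVM and classloader): hold the expensive object in a companion object behind a lazy val, so it is built on the first runJob call and reused afterwards. ExpensiveModel and MyCachedJob are hypothetical stand-ins, and the jobserver API shown is the 0.6.x SparkJob trait.

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

// Hypothetical stand-in for the object that is expensive to construct.
class ExpensiveModel {
  def lookup(key: String): Int = key.length
}

object MyCachedJob extends SparkJob {
  // A lazy val in an object is initialized once per JVM, i.e. once per
  // long-lived context, and then reused by every later runJob call.
  lazy val model: ExpensiveModel = new ExpensiveModel

  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any =
    model.lookup(config.getString("input.string"))
}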
I am trying to configure Spark JobServer for Mesos cluster deployment mode.
I have set spark.master = "mesos://mesos-master:5050" in jobserver config.
When I try to create a context on the job-server, it fails with the following…
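For reference, the setting above normally sits inside the spark block of the jobserver config; a minimal sketch in the same HOCON format, where the jobserver and context-settings values are illustrative defaults rather than requirements:

spark {
  master = "mesos://mesos-master:5050"

  jobserver {
    port = 8090
  }

  # Defaults applied to contexts created through the REST API.
  context-settings {
    num-cpu-cores = 2
    memory-per-node = 512m
  }
}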
I added a new code file alongside the existing example files in the spark-jobserver directory and built a .jar. Even after the jar file is uploaded, it throws the following error…
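For comparison, the standard flow is to upload the jar under an app name and then reference the job class by its fully qualified name; that class must exist in the uploaded jar and implement the SparkJob trait. The jar path below is an illustrative placeholder:

# Upload the jar under the app name "test".
curl --data-binary @target/scala-2.10/my-examples.jar localhost:8090/jars/test

# Run a job from that jar; classPath must name a class inside it.
curl -d "" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'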
I have deployed Spark JobServer (version 0.6.2) on a remote machine running a YARN cluster managed by Cloudera (version 5.8.2). I followed the instructions given here. After deploying, when I tried to start the server, I got the following…
I'm trying to execute a job locally in the spark-jobserver. My application has the dependencies below:
name := "spark-test"
version := "1.0"
scalaVersion := "2.10.6"
resolvers += Resolver.jcenterRepo
libraryDependencies += "org.apache.spark" %%…
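For a job that runs inside the jobserver, the convention is to mark Spark and the job-server API as provided, since the server supplies both on its own classpath at runtime. A sketch of the full file under that assumption, with versions chosen to match Scala 2.10 (the job-server-api artifact resolves through the jCenter resolver already present above):

name := "spark-test"

version := "1.0"

scalaVersion := "2.10.6"

resolvers += Resolver.jcenterRepo

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"     % "1.6.1" % "provided",
  "spark.jobserver"  %% "job-server-api" % "0.6.2" % "provided"
)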
I want to use the Machine Learning capabilities of Apache Spark through a RESTful API, so I use the Spark Job Server. I have already developed an interface for the communication, but found that, while I am using the Persistent Context Mode, I…
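In Persistent Context Mode the context is created once through the REST API and then referenced by name on every job, so state such as trained models can live across requests. A sketch; the context name, app name, class name, and sizing parameters are arbitrary:

# Create a long-lived context.
curl -d "" 'localhost:8090/contexts/ml-context?num-cpu-cores=2&memory-per-node=512m'

# Every job that passes context=ml-context reuses the same SparkContext.
curl -d "" 'localhost:8090/jobs?appName=ml-app&classPath=com.example.TrainJob&context=ml-context'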
I want to run different jobs on demand with the same Spark context, but I don't know exactly how I can do this.
I tried to get the current context, but it seems that a new Spark context is created (with new executors).
I call spark-submit to add new jobs.
I run code on…
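Each spark-submit invocation starts its own driver JVM and therefore its own SparkContext, so two submissions can never share executors that way; sharing requires a long-running process that owns the context, which is what the job-server's named contexts provide. A sketch, with hypothetical app and class names:

# Create one long-lived context; it owns the executors.
curl -d "" 'localhost:8090/contexts/shared-context'

# Submit jobs to it by name instead of calling spark-submit;
# both run on the same SparkContext and the same executors.
curl -d "" 'localhost:8090/jobs?appName=my-app&classPath=com.example.JobA&context=shared-context'
curl -d "" 'localhost:8090/jobs?appName=my-app&classPath=com.example.JobB&context=shared-context'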
I am making REST requests to query the Spark Job Server for the status of a job. The code looks like this:
private Future getJobResultFuture(String jobId) {
ExecutorService executorService =…
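For reference, the status endpoint being polled here is a plain GET on the job id:

# Returns the job's status and, once it has finished, its result.
curl 'localhost:8090/jobs/<jobId>'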
I've set up a spark job-server (see https://github.com/spark-jobserver/spark-jobserver/tree/jobserver-0.6.2-spark-1.6.1) in standalone mode.
I've created a default context to use. Currently I have two kinds of jobs on this context:
Synchronization…
I occasionally face the following error while submitting a job. The error goes away if I remove the rootdir of filedao, datadao and sqldao, which means I have to restart the job-server and re-upload my jar.
{
  "status": "ERROR",
  "result": {
    …
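For reference, the three DAO root directories mentioned above are set in the jobserver config; the stock defaults are along these lines, and pointing them somewhere more durable than /tmp is a common way to keep uploaded jars across restarts (the paths shown are illustrative):

spark.jobserver {
  filedao { rootdir = /tmp/spark-jobserver/filedao/data }
  datadao { rootdir = /tmp/spark-jobserver/upload }
  sqldao  { rootdir = /tmp/spark-jobserver/sqldao/data }
}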
unresolved dependency: com.ning#async-http-client;1.8.10: org.sonatype.oss#oss-parent;9!oss-parent.pom(pom.original) origin location must be absolute: file:
at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:313)
at…
I'm trying to execute the following curl command to run a job:
curl -k --basic --user 'user:psw' -d 'input.string= {"user":13}' 'https://localhost:8090/jobs?appName=test&classPath=test.ImportCSVFiles&context=import&sync=true'
But I get the following…