We have installed spark-jobserver and launch it on a Spark standalone cluster with server_start.sh. However, no matter what we try, we cannot get it to run its executors on multiple workers. We can get it to run with more cores and more memory, but only ever on a single node.
The commands we have tried are as follows:
./server_start.sh --master spark://IP:PORT --deploy-mode cluster --total-executor-cores 6
./server_start.sh --master spark://IP:PORT --deploy-mode cluster --total-executor-cores 4 --executor-cores 2
./server_start.sh --master spark://IP:PORT --deploy-mode cluster --conf spark.driver.cores=4 --conf spark.driver.memory=7g
./server_start.sh --master spark://IP:PORT --deploy-mode cluster --conf spark.driver.cores=6 --conf spark.driver.memory=7g
The first two commands launched, but the master UI showed a single worker using one core and 1 GB, while the third showed a single worker using 4 cores and 7 GB. The fourth command showed 6 cores requested, but the application stayed in state SUBMITTED.
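Our working theory (and possibly where we are going wrong) is that with --deploy-mode cluster the spark.driver.* properties only size the driver process itself, and executor resources have to be requested separately. Based on the Spark documentation, we would have expected a variant like the one below to spread executors across the workers; spark.cores.max, spark.executor.cores, and spark.executor.memory are standard Spark standalone properties, and the values are only illustrative:

./server_start.sh --master spark://IP:PORT --deploy-mode cluster \
  --conf spark.cores.max=6 \
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=4g

This too did not change the behavior for us, which is why we suspect the options are not reaching the executors at all.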
We have verified that launching an application across multiple workers does work on this cluster: the spark-shell started with the following command shows up as a running driver with 2 workers and a total of 6 cores.
./spark-shell --master spark://IP:PORT --total-executor-cores 6
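We also wondered whether executor resources have to go into the job server's own configuration file instead of the launch command. If we read the spark-jobserver README correctly, a context's resources are set under spark.context-settings; the sketch below is our guess at that file, with setting names taken from the template config and values that are only illustrative:

spark {
  context-settings {
    num-cpu-cores = 6       # total cores to allocate for the context
    memory-per-node = 4g    # executor memory per node, e.g. 512m, 1g, 4g
  }
}

We are not sure whether this takes precedence over flags passed to server_start.sh, or how the two interact.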
Would appreciate any help.