I'm using a Spark 2.1.1 standalone cluster.
Although I have 29 free cores in my cluster (Cores in use: 80 Total, 51 Used),
when I submit a new Spark job with --total-executor-cores 16,
this config does not take effect and the job is submitted with only 6 cores.
What am I missing? (Deleting checkpoints doesn't help.)
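To see what the application actually receives, this is the minimal diagnostic sketch I run inside the job (the conf keys are standard Spark ones; the prints are just for illustration):

from pyspark import SparkContext

# Configuration comes from spark-submit; no conf is set here on purpose.
sc = SparkContext()

# --total-executor-cores sets spark.cores.max under the hood, so this
# should print 16 if the flag was actually applied.
print("spark.cores.max =", sc.getConf().get("spark.cores.max", "not set"))
print("defaultParallelism =", sc.defaultParallelism)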
Here is my spark-submit command:
PYSPARK_PYTHON="/usr/bin/python3.4" \
PYSPARK_DRIVER_PYTHON="/usr/bin/python3.4" \
/opt/spark/spark-2.1.1-bin-hadoop2.7/bin/spark-submit \
--master spark://XXXX.XXXX:7077 \
--conf "spark.sql.shuffle.partitions=2001" \
--conf "spark.port.maxRetries=200" \
--conf "spark.executorEnv.PYTHONHASHSEED=0" \
--executor-memory 24G \
--total-executor-cores 16 \
--driver-memory 8G \
/home/XXXX/XXXX.py \
--spark_master "spark://XXXX.XXXX:7077" \
--topic "XXXX" \
--broker_list "XXXX" \
--hdfs_prefix "hdfs://XXXX"
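As a workaround I'm considering, the same cap can be set programmatically before the SparkContext is created (a sketch only, with the same XXXX placeholders as in the command above; it assumes the context in XXXX.py is built fresh rather than restored from a checkpoint):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("spark://XXXX.XXXX:7077")   # same master placeholder as above
        .set("spark.cores.max", "16")          # equivalent of --total-executor-cores 16
        .set("spark.executor.memory", "24g"))  # equivalent of --executor-memory 24G
sc = SparkContext(conf=conf)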