Whenever I run a Spark job with the parameters below, it slows down.
spark-submit \
  --conf spark.sql.shuffle.partitions=100 \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=30 \
  --num-executors 5 \
  --executor-cores 5 \
  --executor-memory 17g \
  --conf Spark.Dynamic.executors=true
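For reference, here is a minimal PySpark sketch (hypothetical, not part of my actual script) that prints the configuration the driver actually receives, since I am not sure whether every --conf above (in particular Spark.Dynamic.executors, which does not look like a real Spark property) is taking effect:

# Hypothetical diagnostic, not part of the original job: print the effective
# configuration so you can confirm which submit-time settings were applied.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("conf-check").getOrCreate()

# SparkConf.getAll() returns the (key, value) pairs in effect for this application.
# Mistyped or unknown keys (e.g. Spark.Dynamic.executors) have no effect on Spark
# even if they are passed on the command line.
for key, value in sorted(spark.sparkContext.getConf().getAll()):
    lowered = key.lower()
    if "executor" in lowered or "dynamicallocation" in lowered or "shuffle" in lowered:
        print(key, "=", value)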
I have a script that writes data to 14 tables, and the job normally takes barely 5 minutes to complete because it runs incrementally. It gets stuck on one table: on some days that table takes almost 3 hours to finish, while on other days it completes within seconds. Below is the DAG of the part of the job that is consuming the time: