
I have two Hadoop clusters, with 15 (big) and 3 (small) nodes respectively, both managed by Cloudera Manager. I am running a Spark job on YARN with --num-executors set to 6. The Spark UI of the big cluster shows all 6 executors, but the Spark UI of the small cluster shows only 3. What are the probable reasons for this, and how can I overcome the issue?
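For reference, the submission looks roughly like this (the executor memory is the 30 GB mentioned in the comments below; the application class and jar names are placeholders):

    spark-submit \
      --master yarn \
      --num-executors 6 \
      --executor-memory 30G \
      --class com.example.MyJob \
      my-job.jar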

Thanks in advance.

Chandan
  • If you don't have enough memory on the cluster for the number of executors you have requested, you will get fewer. How much memory is on the 3 small nodes, and how much are you asking for per executor? – Jem Tucker Nov 21 '15 at 16:59
  • @JemTucker 64 GB on the three small nodes, and I am asking for 30 GB per executor. – Chandan Nov 22 '15 at 08:01
  • And what about the number of cores? – Jem Tucker Nov 22 '15 at 10:28
  • 1
    When you request memory through spark submit an overhead is added, it is probably that with this overhead there is not enough memory for more than one executor – Jem Tucker Nov 22 '15 at 10:38
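To make the overhead arithmetic in the last comment concrete, here is a sketch assuming the Spark 1.x default (spark.yarn.executor.memoryOverhead defaults to max(384 MB, 10% of executor memory)) and 64 GB of RAM per small node:

    # Per-executor YARN container size with the default overhead:
    #   overhead  = max(384 MB, 0.10 * 30 GB) = 3 GB
    #   container = 30 GB + 3 GB = 33 GB
    #
    # With ~64 GB per node, only one 33 GB container fits on each node
    # (2 x 33 GB = 66 GB > 64 GB), so a 3-node cluster can run at most
    # 3 executors, which matches what the Spark UI reports.
    #
    # Verify what YARN actually offers per node in Cloudera Manager:
    #   yarn.nodemanager.resource.memory-mb    # memory per NodeManager
    #   yarn.scheduler.maximum-allocation-mb   # largest single container
    #
    # One possible fix: request smaller executors so two fit per node
    # (other flags kept as in the question):
    spark-submit --master yarn --num-executors 6 --executor-memory 20G ...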

0 Answers