I have looked at the answer to "Why is Spark detecting 8 cores, when I only have 4?", and it doesn't seem to explain the following scenario: I am setting spark.executor.cores to 5 and have spark.dynamicAllocation.enabled set to true. According to the Spark History Server, my 10-node cluster is running 30 executors, i.e. 3 executors per node. That would require 15 cores per node (3 executors x 5 cores each), yet the specs for an m4.xlarge instance are only 4 vCPUs and 16 GB of memory. Where are these extra cores coming from?
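For reference, the job is submitted roughly like this (a sketch only; the class name and jar are placeholders, and it shouldn't matter whether the settings come from `--conf` flags or spark-defaults.conf):

```
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.executor.cores=5 \
  --class com.example.MyJob \
  my-job.jar
```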
Note: I am setting spark.executor.memory to 3g and yarn.nodemanager.resource.memory-mb to 12200.
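If my arithmetic is right, those memory settings alone do allow 3 executors per node (assuming the default executor memory overhead of max(384 MB, 10% of executor memory) and ignoring any rounding to the YARN minimum allocation increment), which is why the puzzle seems to be about cores rather than memory:

```
# Per-executor YARN request, assuming the default overhead of max(384 MB, 0.10 * 3072 MB) = 384 MB:
#   3072 MB + 384 MB = 3456 MB
# Per-node capacity: yarn.nodemanager.resource.memory-mb = 12200 MB
#   3 executors * 3456 MB = 10368 MB  <= 12200 MB  (fits)
#   4 executors * 3456 MB = 13824 MB  >  12200 MB  (does not fit)
# So memory accounts for 3 executors per node, but 3 * 5 = 15 requested cores > 4 vCPUs.
```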