I have a question/problem regarding dynamic resource allocation. I am using Spark 1.6.2 with the standalone cluster manager.
I have one worker with 2 cores. I set the following properties in the spark-defaults.conf file on all my nodes:
spark.dynamicAllocation.enabled true
spark.shuffle.service.enabled true
spark.deploy.defaultCores 1
I run a sample application with many tasks. I open the web UI on port 4040 of the driver and can verify that the above configuration is in effect.
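For reference, I launch the application roughly like this (the master URL, class name, and jar name are just placeholders, not my real values):

spark-submit \
  --master spark://master-host:7077 \
  --class com.example.SampleApp \
  sample-app.jar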
My problem is that no matter what I do, my application only gets 1 core, even though the other core is available.
Is this normal, or do I have a problem in my configuration?
The behaviour I want is this: I have many users working with the same Spark cluster. I want each application to get a fixed number of cores unless the rest of the cluster is idle, in which case the running applications should get the total amount of cores until a new application arrives...
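To be concrete, I imagined expressing that policy with something like the following in spark-defaults.conf (these properties and values are only my guess at what might achieve it, not something I know to work):

spark.dynamicAllocation.enabled true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 2
spark.executor.cores 1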
Do I have to go to Mesos for this?