I have a few questions about running Spark on Mesos:
- When I submit Spark jobs with different SparkContexts on Mesos, does each one register as a separate Mesos framework instance, or do they all share the same one?
- How can I ensure that a separate Spark framework is created each time? (See the first sketch after this list for how I submit the jobs.)
- Can I specify constraints to reserve/pre-allocate Mesos slaves for a specific SparkContext or framework instance? I understand this defeats the purpose of Mesos to some extent, and that Mesos can already guarantee memory and CPU in coarse-grained mode. Still, I don't want the physical machines (slaves) that run the tasks to be shared across Spark jobs meant for different users. (The second sketch below shows the kind of setup I have in mind.)
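
To make the first two questions concrete, here is a minimal sketch of how I submit the jobs; the master URL, the app name, and the `UserJob` object are placeholders, not my actual code:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Each user's job runs as its own driver program with its own SparkContext.
// "mesos://master:5050" and the app name are placeholders.
object UserJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("mesos://master:5050")
      .setAppName(s"spark-job-for-${args(0)}") // one submission per user
    val sc = new SparkContext(conf)

    // ... user-specific work ...
    println(sc.parallelize(1 to 100).sum())

    sc.stop()
  }
}
```

If I submit this twice, once for user-a and once for user-b, should I expect to see two separate framework instances in the Mesos UI, or just one?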
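
For the reservation question, this is the kind of setup I have in mind: tag the slaves belonging to one user with a custom attribute at startup, then restrict a given context to those slaves via `spark.mesos.constraints`. The attribute name `tenant` and its values are made up for illustration, and I am not sure constraints alone give exclusivity, hence the question:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Assumes the slaves reserved for user A were started with a custom
// attribute, e.g.:  mesos-slave ... --attributes="tenant:user-a"
// ("tenant" is a made-up attribute name, not a standard one.)
val conf = new SparkConf()
  .setMaster("mesos://master:5050")        // placeholder master URL
  .setAppName("job-for-user-a")
  .set("spark.mesos.coarse", "true")       // coarse-grained mode
  // Only accept resource offers from slaves whose "tenant" attribute
  // matches "user-a".
  .set("spark.mesos.constraints", "tenant:user-a")
val sc = new SparkContext(conf)
```

Even if this works, I assume other frameworks could still receive offers from those slaves, so do I also need Mesos roles or static reservations to keep the machines exclusive?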