
I have a cluster of 3 Mesos slaves running two applications: “redis” and “memcached”. Redis depends on memcached, and the requirement is that both applications/services start on the same node instead of on different slave nodes.

So I created an application group and added the dependency in the JSON file. After launching the JSON via the “v2/groups” REST API, I observe that sometimes both applications start on the same node, but sometimes they start on different slaves, which breaks our requirement.
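For reference, a minimal sketch of the kind of group definition I mean (the app IDs, commands, and resource values below are simplified for illustration):

{
  "id": "/caching",
  "apps": [
    {
      "id": "memcached",
      "cmd": "memcached -p 11211",
      "cpus": 0.5,
      "mem": 64,
      "instances": 1
    },
    {
      "id": "redis",
      "cmd": "redis-server --port 6379",
      "cpus": 0.5,
      "mem": 128,
      "instances": 1,
      "dependencies": ["/caching/memcached"]
    }
  ]
}

I POST this to the “v2/groups” endpoint to launch both apps as one group.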

So the intent/requirement is: if either application fails to start on a slave, both applications should fail over to another slave node. Also, can I configure the JSON file to tell Marathon to start the application group on slave-1 (a specific slave) first if it is available, and otherwise on another slave in the cluster? And if, for some reason, the application group does start on another slave, can Marathon relaunch it on slave-1 once slave-1 is available to serve requests again?

Thanks in advance for any help.

2 Answers


Edit/Update (2): Mesos, Marathon, and DC/OS support for pods is available now:
  • DC/OS: https://dcos.io/docs/1.9/usage/pods/using-pods/
  • Mesos: https://github.com/apache/mesos/blob/master/docs/nested-container-and-task-group.md
  • Marathon: https://github.com/mesosphere/marathon/blob/master/docs/docs/pods.md
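To illustrate, a minimal pod definition posted to /v2/pods could look roughly like this (the structure follows the linked Marathon pods docs; the container names, commands, and resource values are only illustrative):

{
  "id": "/cache-pod",
  "containers": [
    {
      "name": "memcached",
      "exec": { "command": { "shell": "memcached -p 11211" } },
      "resources": { "cpus": 0.5, "mem": 64 }
    },
    {
      "name": "redis",
      "exec": { "command": { "shell": "redis-server --port 6379" } },
      "resources": { "cpus": 0.5, "mem": 128 }
    }
  ],
  "networks": [ { "mode": "host" } ],
  "scaling": { "kind": "fixed", "instances": 1 }
}

All containers of a pod are launched together on the same agent, which is exactly the co-location asked for here.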


I assume you are talking about Marathon apps.

Marathon application groups don't have any semantics concerning co-location on the same node, and the same is the case for dependencies: they only control the order of deployment, not placement.

You seem to be looking for a Kubernetes-like Pod abstraction in Marathon, which is on the roadmap but not yet available (see the update above :-)).

Hope this helps!

js84
  • Thanks for the help. This is a very basic feature; I would expect it to be present in Marathon. Anyway, any idea when Marathon will release the Pod abstraction? – Suyash Singh Dec 17 '15 at 10:24

I think this should be possible (as a workaround) if you specify the correct app constraints within the group's JSON.

Have a look at the example request and the constraints syntax in the Marathon documentation.

e.g.

"constraints": [["hostname", "CLUSTER", "slave-1"]]

should do. Downside is that there will be no automatic failover to another slave that way. Still, I'd be curious why both apps need to specifically run on the same slave node...
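For completeness, a sketch of a group where both apps carry the same constraint (the app IDs, commands, and resource values are assumptions for illustration; only the "constraints" lines come from the syntax above):

{
  "id": "/caching",
  "apps": [
    {
      "id": "memcached",
      "cmd": "memcached -p 11211",
      "cpus": 0.5,
      "mem": 64,
      "instances": 1,
      "constraints": [["hostname", "CLUSTER", "slave-1"]]
    },
    {
      "id": "redis",
      "cmd": "redis-server --port 6379",
      "cpus": 0.5,
      "mem": 128,
      "instances": 1,
      "constraints": [["hostname", "CLUSTER", "slave-1"]]
    }
  ]
}

With this, Marathon only accepts offers from slave-1 for both apps, so they always land together; but as said, if slave-1 goes down the tasks stay pending instead of failing over to another slave.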

Tobi
  • You can pin an app group to a pre-selected node with this approach, but if I understood correctly, that is not the desired behavior (which is: run all apps on any node, but always together). In particular, with your constraints there will be no failover if slave-1 fails. – js84 Dec 18 '15 at 08:48
  • I think you're probably correct. I just wanted to show that something similar is possible, albeit with the downsides you stated. – Tobi Dec 18 '15 at 08:53
  • Furthermore, to me it's unclear why both apps need to run on the same node. The only thing I can imagine is shared host disk resources (which could also be solved via persistent volumes)... It's somewhat against Mesos concepts IMHO. – Tobi Dec 18 '15 at 09:18