We have a use case where, in principle, Nomad as cluster scheduler together with Consul fits very well, but we have some questions about the best approach to implement it:
A single instance of a service (a Docker container, but it could also be a native process) is shared by a group of people. Every instance is uniquely identified by an ID, and this ID is passed as part of the request (e.g. in an HTTP header). Scheduling this workload is perfectly doable with Nomad, and the service instances are registered in Consul.
This pattern is somewhat similar to how some game servers are handled, where a group of people shares one game-server instance. Unfortunately, we were not able to find information on how this could be effectively built with Nomad/Consul.
We have the following questions:
- We plan to create a dedicated Nomad job for every unique process (based on a common template); the job name will contain the unique ID and, if needed, additional metadata to identify the instance in Consul. Is this the right approach to handle the "uniqueness" here?
- We assume we have to build a custom proxy/layer-7 router that verifies whether the job is already started and starts it on demand. Is there a recommended proxy that can be extended with such functionality and integrates nicely into the Nomad/Consul ecosystem, as an alternative to building this from scratch?
- What is the most efficient way to check whether a Nomad job is already running? Simply route the request (based on the unique ID) to the service represented by the "unique" job and start the job if the request fails? Or make an explicit DNS lookup for every request (with DNS maintained/provided by Consul)?
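To make the first question concrete, here is a minimal sketch of how we imagine stamping out a per-ID job from the common template and submitting it as JSON to Nomad's jobs API. The `shared-service-` prefix, the `instance-id` meta key, and the image name are placeholders of ours, not fixed conventions:

```python
# Sketch: build a per-instance Nomad job payload from a common template.
# The naming scheme and meta keys below are our own placeholders.
import re


def job_for_instance(instance_id: str, datacenters=("dc1",)) -> dict:
    # Nomad job IDs should be simple identifiers, so sanitize the instance ID.
    safe_id = re.sub(r"[^A-Za-z0-9-]", "-", instance_id.lower())
    job_id = f"shared-service-{safe_id}"
    return {
        "Job": {
            "ID": job_id,
            "Name": job_id,
            "Type": "service",
            "Datacenters": list(datacenters),
            "Meta": {"instance-id": instance_id},  # lets a router find it later
            "TaskGroups": [{
                "Name": "app",
                "Count": 1,
                "Networks": [{"DynamicPorts": [{"Label": "http"}]}],
                "Tasks": [{
                    "Name": "app",
                    "Driver": "docker",
                    "Config": {"image": "example/shared-service:latest"},
                    "Services": [{
                        # Registered in Consul under the unique per-ID name.
                        "Name": job_id,
                        "Tags": [f"id-{safe_id}"],
                        "PortLabel": "http",
                    }],
                }],
            }],
        }
    }


# The payload would then be POSTed to the Nomad HTTP API, e.g.:
#   requests.post("http://nomad:4646/v1/jobs", json=job_for_instance("Team42"))
print(job_for_instance("Team42")["Job"]["ID"])  # shared-service-team42
```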
Thank you. Any help is highly appreciated.