I'm currently struggling with the deployment of my services and I wanted to ask: what's the proper way to deploy when you have to deal with multiple repositories? The repositories are independent, but in production everything needs to be launched together.

My Setup:

  • Git Repository Backend:
    • Backend Project Rails
    • docker-compose: backend(expose 3000), db and redis
  • Git Repository Frontend
    • Express.js server
    • docker-compose: (expose 4200)

Both can be run independently, and tests can be executed by CI.

  • Git Repository Nginx for Production
    • Needs to connect to the other two services (same docker network)
    • forwards requests to the right service

I have already tried including the two services as submodules in the Nginx repository and using the docker-compose file of the nginx repo, but I'm not really happy with that approach.


You can have your CI build and push images for each service you want to run, and have the production environment run all 3 containers.
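The CI build-and-push step might look roughly like this (a minimal sketch; the image names and registry are placeholders taken from the compose file below, and you'd typically also tag with a commit SHA):

```shell
# In the backend repository's CI pipeline: build and push the Rails image.
docker build -t yourorg/railsapp:latest .
docker push yourorg/railsapp:latest

# Same idea in the frontend repository's CI pipeline.
docker build -t yourorg/expressapp:latest .
docker push yourorg/expressapp:latest
```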

Then, your production docker-compose.yml would look like this:

version: "3"
services:
  lb:
    image: nginx
    depends_on:
      - rails
      - express
    ports:
      - "80:80"

  rails:
    image: yourorg/railsapp

  express:
    image: yourorg/expressapp

Note that docker-compose isn't recommended for production environments; you should be looking at using Distributed Application Bundles (this is still an experimental feature, slated to land in core in version 1.13).

Alternatively, you can orchestrate your containers with a tool like ansible or a bash script; just make sure you create a docker network and attach all three containers to it so they can find each other.
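The manual approach might look roughly like this (a sketch; the container and network names are placeholders). On a user-defined network, containers can resolve each other by name, so nginx can proxy to `rails:3000` and `express:4200`:

```shell
# Create a user-defined bridge network for DNS-based service discovery.
docker network create appnet

# Attach all three containers to it.
docker run -d --name rails   --network appnet yourorg/railsapp
docker run -d --name express --network appnet yourorg/expressapp
docker run -d --name lb      --network appnet -p 80:80 nginx
```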

Edit: since Docker v17 and the deprecation of DABs in favour of the Compose file v3, it seems that for single-host environments, docker-compose is a valid way of running multi-service applications. For multi-host/HA/clustered scenarios you may want to look into either Docker Swarm for a self-managed solution, or Docker Cloud for a more PaaS approach. In any case, I'd advise you to try it out in Play-with-Docker, the official online sandbox where you can spin up multiple hosts and play around with a swarm cluster without needing to provision your own boxes.
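With Compose file v3 and Swarm mode, the same compose file can be deployed as a stack (a sketch, assuming a v3 `docker-compose.yml` in the current directory; the stack name is arbitrary):

```shell
# Initialise a single-node swarm (on a real cluster, join workers with `docker swarm join`).
docker swarm init

# Deploy the services from the compose file as a stack named "myapp".
docker stack deploy -c docker-compose.yml myapp

# Inspect the running services.
docker service ls
```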

  • thank you for your answer. I will look more into container orchestration tools like kubernetes or docker swarm. – Mathias Aichinger Jan 28 '17 at 15:54
  • I can recommend the [Docker official trainings](http://training.play-with-docker.com) to quickly understand how to do this stuff in a sandboxed, 0-setup environment. Also, with `docker-compose` v 3+ it is now possible to deploy to a swarm cluster with [`docker stack deploy`](https://docs.docker.com/engine/reference/commandline/stack_deploy/#examples). Make sure you read the docs regarding the new `deploy` option for a compose service. – gvilarino Apr 07 '17 at 19:55
  • Does `docker-compose` update automatically when one of its images gets updated? – AdamGold Jan 12 '20 at 14:50
  • `docker-compose` automatically pulls images that aren't available locally. If your compose service uses `build` instead of `image`, and you have re-built the image with `docker-compose build`, then a `docker-compose up` will re-create the container(s) associated with that image. If an image is available locally with the same `repo/name:tag` as the one declared in the compose file, then no. If you're unsure, just run `docker-compose pull` before `up` – gvilarino Jan 13 '20 at 17:05