
I'm putting together a docker-compose.yml file to run the multiple services for a project I'm working on. This project has a Magento and a WordPress website residing under the same domain, and that "same domain" aspect requires a very simple nginx container to route requests to the appropriate service.

So I have this architected as 4 containers:

  • A "magento" container, using an in-house project-specific image.
  • A "wordpress" container, using an in-house project-specific image.
  • A "db" container running mysql:5.6, with the init db dumps mounted at /docker-entrypoint-initdb.d.
  • A "router" container running nginx:alpine with a custom config mounted at /etc/nginx/nginx.conf. This functions as a reverse-proxy with two location directives set up. location / routes to "magento", and location /blog routes to "wordpress".
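For reference, the router's nginx.conf is along these lines (a simplified sketch – the upstream hostnames are assumed to match the compose service names, which Docker's embedded DNS resolves on the shared network):

```nginx
events {}

http {
  server {
    listen 80;

    # Requests under /blog go to the WordPress container.
    location /blog {
      proxy_pass http://wordpress;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else goes to Magento.
    location / {
      proxy_pass http://magento;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}
```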

I want to keep things simple and avoid building unnecessary custom images, but in the context of the "router" I'm not sure what I'm doing is the best approach, or if that would be better off as a project-specific image.

I'm leaning toward my current approach of mounting a custom config into the nginx:alpine container, because the configuration is specific to the stack that's running – it wouldn't make sense as a single standalone container.

So, comparing the two methods: without a custom image, we have the following in docker-compose.yml:

      router:
        image: nginx:alpine
        networks:
          - projectnet
        ports:
          - "80:80"
        volumes:
          - "./router/nginx.conf:/etc/nginx/nginx.conf"

Otherwise, we have a Dockerfile containing the following, as I've seen suggested across the internet and in other Stack Overflow answers:

    FROM nginx:alpine

    ADD nginx.conf /etc/nginx/
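With the custom-image route, the compose file would reference a build context instead of the stock image – something like this, assuming the Dockerfile lives in ./router:

```yaml
router:
  build: ./router
  networks:
    - projectnet
  ports:
    - "80:80"
```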

Does anybody have arguments for/against either approach?

Rob Jackson

1 Answer


If you 'bake in' the nginx config (your second approach):

    ADD nginx.conf /etc/nginx/

it makes your Docker containers more portable – i.e. they can be downloaded and run on any server capable of running Docker, and they will just work.

If you use option 1, mounting the config file at run time, then you are moving one of your dependencies outside of your container. This makes it a dependency that must be managed outside of Docker.

In my opinion it is best to put as many dependencies as possible inside your Dockerfiles, because it makes the images more portable and more automated (great for CI pipelines, for example).

There are reasons for mounting files at run time, and these usually centre on environment-specific settings (although those can largely be handled within Docker too) or 'sensitive' files that application developers shouldn't or can't have access to: SSL certificates, database passwords, and so on.
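As a sketch of that run-time-mount pattern, sensitive material can stay outside the image and be mounted read-only when the stack starts – the ./secrets paths here are hypothetical:

```yaml
router:
  image: nginx:alpine
  volumes:
    - "./router/nginx.conf:/etc/nginx/nginx.conf:ro"
    # TLS material kept out of the image and out of version control
    - "./secrets/server.crt:/etc/nginx/certs/server.crt:ro"
    - "./secrets/server.key:/etc/nginx/certs/server.key:ro"
```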

GreensterRox
  • Thanks, that makes sense. I've been playing with Docker Cloud today; seeing how stacks are defined and spun up & down, and I definitely understand what you're talking about re: portability: All I need to do in Docker Cloud to run a stack is pass in a `docker-compose.yml`. It'd be a whole layer of extra faff if my deployment process, for on-demand images, also required me to scp over various additional config values, rather than just having them baked in. – Rob Jackson Jun 09 '17 at 14:48