
I want to deploy several services on my server, and all of them will use nginx as the web server. Every project has its own .conf file, and I want to share all of them with the nginx container. I tried to use named volumes, but when a volume is used by more than one container the data gets replaced. I want to collect all these .conf files from the different containers and put them in one volume so they can be read by the nginx container. I also tried to use subdirectories in named volumes, but using namedVolumeName/path does not work.

Note: I'm using docker-compose for all projects.

version: "3.7"

services:
  backend:
    container_name: jzmimoveis-backend
    image: paulomesquita/jzmimoveis-backend
    command: uwsgi --socket :8000 --wsgi-file jzmimoveis/wsgi.py
    volumes:
      - nginxConfFiles:/app/nginx
      - jzmimoveisFiles:/app/src
    networks:
      - jzmimoveis
    restart: unless-stopped
    expose:
      - 8000

  frontend:
    container_name: jzmimoveis-frontend
    image: paulomesquita/jzmimoveis-frontend
    command: serve -s build/
    volumes:
      - nginxConfFiles:/app/nginx
    networks:
      - jzmimoveis
    restart: unless-stopped
    expose:
      - 5000

volumes:
  nginxConfFiles:
    external: true
  jzmimoveisFiles:
    external: true
networks:
  jzmimoveis:
    external: true

For example, in this case I linked both the frontend and backend nginx files to the named volume nginxConfFiles, but when I do docker-compose up -d on this file, only one of the .conf files appears in the volume; I think it gets overwritten by the other container writing to the same file.

  • If you deal with deployment, I strongly suggest using a container orchestrator rather than docker-compose (Kubernetes or Docker Swarm, for example). Now, for your question: could you please provide some sample code of your implementation? As explained here, it is a bit difficult to tell whether there were any errors in the implementation... – jossefaz Jun 28 '20 at 06:30

2 Answers

9

On the nginx container, you could point the shared volume at /etc/nginx/conf.d and use a different name for each project's conf file.
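
For instance, a minimal sketch of the nginx side, assuming the shared volume is named nginxConfFiles as in the question (the image tag and port mapping are illustrative):

version: "3.7"

services:
  nginx:
    image: nginx:1.19
    volumes:
      # every project's *.conf lands here; nginx includes conf.d/*.conf by default
      - nginxConfFiles:/etc/nginx/conf.d:ro
    ports:
      - "80:80"
    networks:
      - jzmimoveis
    restart: unless-stopped

volumes:
  nginxConfFiles:
    external: true
networks:
  jzmimoveis:
    external: true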

Below is a proof of concept: three servers, each with a config file to attach, and a proxy (your nginx) with the shared volume mounted at /config:

version: '3'

services:
  server1:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config
    - ./server1.conf:/app/server1.conf
    command: sh -c "cp /app/server1.conf /config; tail -f /dev/null"

  server2:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config
    - ./server2.conf:/app/server2.conf
    command: sh -c "cp /app/server2.conf /config; tail -f /dev/null"

  server3:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config
    - ./server3.conf:/app/server3.conf
    command: sh -c "cp /app/server3.conf /config; tail -f /dev/null"

  proxy1:
    image: busybox:1.31.1
    volumes:
    - deleteme_after_demo:/config:ro
    command: tail -f /dev/null

volumes:
  deleteme_after_demo:

Let's create 3 config files to be included:

➜ echo "server 1" > server1.conf
➜ echo "server 2" > server2.conf
➜ echo "server 3" > server3.conf

then:

➜ docker-compose up -d                  
Creating network "deleteme_default" with the default driver
Creating deleteme_server2_1 ... done
Creating deleteme_server3_1 ... done
Creating deleteme_server1_1 ... done
Creating deleteme_proxy1_1  ... done

And finally, let's verify that the config files are accessible from the proxy container:

➜ docker-compose exec proxy1 sh -c "cat /config/server1.conf"
server 1

➜ docker-compose exec proxy1 sh -c "cat /config/server2.conf"
server 2

➜ docker-compose exec proxy1 sh -c "cat /config/server3.conf"
server 3

I hope it helps. Cheers!

Note: you should think of mounting a volume exactly like using the Unix mount command. If the mount point already has content, you will no longer see that content after mounting, only the content of the mounted device (the exception is an empty, newly created named volume, which Docker initializes from the image's files at that path). Whatever you want to see there needs to already be on the device, or you need to copy it there afterward.
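
A quick way to see this shadowing, using the stock nginx image as an example (the tag is illustrative): the image ships a default.conf in /etc/nginx/conf.d, but an empty bind-mounted host directory hides it:

➜ docker run --rm nginx:1.19 ls /etc/nginx/conf.d
default.conf
➜ mkdir empty-dir
➜ docker run --rm -v "$PWD/empty-dir:/etc/nginx/conf.d" nginx:1.19 ls /etc/nginx/conf.d

The second ls prints nothing: the empty host directory shadows the default.conf baked into the image.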

So I did it by bind-mounting the files (the container I used had no data of its own) and then copying them into the shared volume with the startup command. You could address it a different way, e.g. by copying the config file into the mounted volume from an entrypoint script in your image.
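
A minimal sketch of that entrypoint variant (the file names are hypothetical; adapt them to your image):

#!/bin/sh
# docker-entrypoint.sh: publish this project's conf into the shared volume,
# then hand control back to the image's original command
cp /app/server1.conf /config/
exec "$@"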

Ictus
  • They have different names, but they get overwritten when another container is initialized with the same named volume – Paulo Mesquita Jun 28 '20 at 21:42
  • I updated my answer with a proposal. Is this approach perhaps applicable to your case? – Ictus Jun 28 '20 at 21:59
  • When I run the file that's in my question, the volume ends up with only the backend file, and if I go inside the frontend container there's a backend.conf in it. Maybe because I'm not linking the .conf file to the container and it's already there? I can't find what's wrong with my file – Paulo Mesquita Jun 28 '20 at 23:54
  • This won't work. If you run it, you should see that the `server*.conf` files in proxy1 are all empty no matter what is in each of the conf files you mounted from the host. These files are the filesystem stubs for the bind mounts happening in the other containers, but those bind mounts are container specific. The contents of server1.conf are only visible in server1. – BMitch Jun 29 '20 at 19:03
  • @BMitch You are right ... Thank you for pointing this out. I should have done deeper testing. I'm updating my post with proper changes. – Ictus Jun 29 '20 at 20:00
  • Now I don't mount the file inside the mount point, which didn't work as @BMitch showed; instead, the container command copies the config file into the shared volume – Ictus Jun 29 '20 at 20:24
  • Yup, you've got a lightweight version of the `load-volume` script posted in my answer, but using another volume as the source. These all come back to needing an entrypoint to copy, since the functionality isn't available anywhere else. – BMitch Jun 29 '20 at 20:39
5

A named volume is initialized when it's empty/new and a container is started using that volume. The initialization is from the image filesystem, and after that, the named volume is persistent and will retain the state from the previous use.
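
You can see both behaviors (initialization from the image, then persistence) with a throwaway volume; the volume name and image tag here are arbitrary:

➜ docker run --rm -v demo:/etc/nginx/conf.d nginx:1.19 ls /etc/nginx/conf.d
default.conf
➜ docker run --rm -v demo:/etc/nginx/conf.d nginx:1.19 touch /etc/nginx/conf.d/extra.conf
➜ docker run --rm -v demo:/etc/nginx/conf.d nginx:1.19 ls /etc/nginx/conf.d
default.conf
extra.conf

The first run initializes the empty volume with the image's default.conf; the later runs keep whatever the volume already contains instead of re-initializing it.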

In this case, what you have is a race condition. The volume does share the files, but which image initializes the volume depends on which container compose happens to start first. The named volume is shared between multiple images; it's just the content that you want to be different.

For your use case, you may be better off putting some logic in the image build and the entrypoint: at build time, save the files you want to mirror into the volume to a different location in the image, and then update the volume on container startup. By moving this out of the named-volume initialization step, you avoid the race condition and allow the volume to pick up future changes from the image. An example of this is in my base image, with the save-volume script you'd run in the Dockerfile and the load-volume script you'd run in your entrypoint.
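
These are not the actual scripts from that image, but a rough sketch of the idea (paths are hypothetical):

# Dockerfile (sketch): keep a pristine copy of the config outside the volume path
COPY nginx/ /app/nginx/
RUN cp -a /app/nginx /app/nginx.save

# entrypoint.sh (sketch): refresh the volume from the pristine copy on every start
#!/bin/sh
cp -a /app/nginx.save/. /app/nginx/
exec "$@"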

As a side note, it's also good practice to mount that named volume read-only in the containers that have no need to write to the config files.
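
In compose terms that is just the ro flag on the mount, e.g. with the volume name from the question:

    volumes:
      - nginxConfFiles:/etc/nginx/conf.d:ro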

BMitch