
I have the following containers:

  • nginx:latest
  • myapp container (derived from php-fpm:alpine)

Currently I have a dummy project with a CI pipeline in place which, at build time, compiles the production variant of the resources (images/js/css, ...). The build files end up in /public/build. At the very end of the CI pipeline, I package everything into Docker images and upload them to Hub.

Both the nginx and myapp containers have a volume (not a bind mount) set up, pointing to /opt/ci-test/public/build.

This works - the first time.

But let's say that I add a new file, new.css - the new version of my Docker image will contain the built variant of new.css.

Running a new container with the pre-existing volume does not reveal the new files, and I understand that it should not. I could create a new volume, my_app_v2.

At this point nginx does not see the new volume, so the nginx container must be removed and re-run (with the new volume) for the change to take effect.

Is there an easy way to overcome this?

My intention is to use a single nginx container for multiple PHP apps, and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?

EDIT:

One workaround I have managed to dig out is to remove all files from the attached volume and then start a new myapp container. This mirrors all the latest files into the volume. But this feels dirty...
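Spelled out, the workaround looks roughly like this, using the container and volume names from the run commands below (the exact sequence here is my assumption):

```shell
# stop the old app container so nothing is writing to the volume
docker rm -f php71alp

# empty the volume using a throwaway container
docker run --rm -v shr_test:/build alpine sh -c 'rm -rf /build/*'

# starting the new image against the now-empty volume mirrors its
# /opt/ci-test/public/build contents into the volume, so nginx sees them
docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
```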

EDIT2:

Related issue (case 3): https://github.com/moby/moby/issues/18670#issuecomment-165059630

EDIT3:

Dockerfile

FROM php:7.2.30-fpm-alpine3.11

COPY . /opt/ci-test
WORKDIR /opt/ci-test

VOLUME /opt/ci-test/public/build

So far, I do not use docker-compose; I run the containers manually via these commands:

docker run -it -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
docker run -it -d --name nginx -v shr_test:/var/www/citest -p 80:80 nginx:latest
Jovan Perovic
  • Are you using docker-compose? Can you tell how is your environment? – Dilson Rainov May 11 '20 at 13:12
  • Post your `docker-compose` and `Dockerfile` details – Tarun Lalwani May 11 '20 at 16:11
  • Hey guys, sorry for the delay in answer. @Dilson: I still do not use `docker-compose`, so I run both `nginx` and my container via `docker run` commands. @Tarun: I have added `Dockerfile` for my app. It is really really simple, as it is test image intended for POC purposes. Thank you! – Jovan Perovic May 13 '20 at 17:30

2 Answers


First option: don't use a volume. If you want to have the files accessible from the image build, and don't need persistence, then the volume isn't helping with your workflow.

Second option: delete the previous volume between runs and use a named volume, which Docker will initialize with the image contents.
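Assuming the container and volume names from the question, a deploy script for this option could look like the following (a hypothetical sequence; note it does restart nginx, which is what the question wants to avoid):

```shell
#!/bin/sh
# Recreate the containers and the named volume so Docker re-initializes
# the volume from the new image's /opt/ci-test/public/build contents.
set -e

docker rm -f php71alp nginx    # nothing may still be using the volume
docker volume rm shr_test      # drop the stale volume
docker pull <myaccount>/citest # freshly built image from Hub

# first use of the re-created (empty) volume copies the image's files into it
docker run -d --name php71alp -v shr_test:/opt/ci-test/public/build -p 9000:9000 <myaccount>/citest
docker run -d --name nginx -v shr_test:/var/www/citest -p 80:80 nginx:latest
```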

Third option: modify the image build and container entrypoint to save the directory off to a different location during the build, and restore that location into the volume on container startup in the entrypoint. I've got an implementation of this in the save-volume and load-volume scripts in my base image. It gets more complicated when you want to merge the contents of the volume with the contents of the host, and you'll need to decide how to handle files getting deleted and what changes to save from the previous runs.
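A minimal sketch of the third option (not BMitch's actual save-volume/load-volume scripts - the paths and names here are hypothetical): the image build stashes the built assets outside the volume path, and the entrypoint copies them into the (possibly pre-existing) volume on every start:

```shell
#!/bin/sh
# Hypothetical entrypoint sketch. The Dockerfile is assumed to have stashed
# the built assets at image-build time, e.g.:
#   RUN cp -a /opt/ci-test/public/build /opt/build-stash
#   ENTRYPOINT ["/entrypoint.sh"]
#   CMD ["php-fpm"]

# Replace the volume's contents with the current image's build output, so a
# pre-existing volume (also mounted by nginx) picks up the new assets.
restore_build() {
  stash=$1
  target=$2
  rm -rf "${target:?}"/*      # wipe stale files from previous versions
  cp -a "$stash"/. "$target"  # copy in this image's freshly built files
}

if [ -d /opt/build-stash ]; then
  restore_build /opt/build-stash /opt/ci-test/public/build
fi

exec "$@"  # hand off to php-fpm (or whatever CMD was given)
```

This avoids restarting nginx entirely: the shared volume keeps the same name, and its contents are refreshed each time a new myapp container starts.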

BMitch
  • Thank you BMitch! The only reason I needed to expose the `build` is so `nginx` could serve those static files directly. I didn’t want to pass static requests to php front controller. – Jovan Perovic May 13 '20 at 18:25
  • The 3rd option seems like one closest to what I am trying to achieve. Let me look over that one. From the looks of my setup, merge will not be required - that is why wiping the volume clean does do the job (by feels dirty) – Jovan Perovic May 13 '20 at 18:29
1

Simply do not use a volume for this.

You should treat Docker images as "monolithic packages" that contain your dependencies (nginx) and your app's files (images, js, css...). There's no need to treat your app's files any differently than nginx itself; it's all part of the single Docker image.

Without a volume, you run v1 of your image, nginx sees the v1 files. You run v2 of your image, nginx sees the v2 files.

Volumes are intended for data you actually want to keep between container versions (such as databases, file uploads...), not for your site's static assets.

My intention is to use nginx container for multiple PHP apps and I need to refrain from killing it whenever I update one of the apps being served. Is this a bad decision?

Yes, this is bad design. If you want to run multiple apps, you should run one Docker container per app. That way, when you release a new version of one app, you only need to restart that app's container. Containers aren't supposed to be treated like traditional virtual machines that you "SSH into" and configure manually. Containers are throw-away: new version of the app? Just replace the container with one running the newer image.
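For example (the image names, tags, and ports here are hypothetical), each app gets its own container, and a release replaces only that app's container:

```shell
# one container per app: updating app1 leaves app2 untouched
docker run -d --name app1 -p 8000:9000 myaccount/app1:v1
docker run -d --name app2 -p 8001:9000 myaccount/app2:v1

# new release of app1: replace only its container
docker pull myaccount/app1:v2
docker rm -f app1
docker run -d --name app1 -p 8000:9000 myaccount/app1:v2
```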

Dirbaio
  • I see what you mean, but I wanted to eliminate unnecessary trip to FPM and passing the request through the front controller when in fact I know those cannot be PHP requests. That makes perfect sense from performance standpoint. About the second part of the answer: how am I then supposed to host multiple apps (yes, separate FPM containers)? I still need single nginx (listening on port 80/443) container in front of them, don’t I? – Jovan Perovic May 16 '20 at 08:41
  • For hosting multiple apps, the way I do that is to run the containers binding to localhost:8000, localhost:8001, etc. and use a single nginx in the host machine (not a container) with many virtual servers doing proxy_pass to the right container port. – Dirbaio May 16 '20 at 14:12
  • For the nginx vs fpm thing, I recommend you run both from the same container, so you have 1 image per app, not 2. It's easier to manage. You can still configure nginx to directly serve static files and pass only the PHP requests to FPM. Use something like supervisord to run the 2 processes in the same container. This way, static file requests go (host nginx) -> (container nginx) -> (file), and PHP requests go (host nginx) -> (container nginx) -> (FPM) -> (PHP files, DB..) – Dirbaio May 16 '20 at 14:15
  • Another advantage of running an nginx on the host is you can use it for TLS termination. If you want to do it with certbot (let's encrypt) you have to manage just one instance on the host, which is easier. Also, don't worry about the extra host nginx trip, nginx reverse proxying is super fast and the extra flexibility you get (for example being able to run multiple apps easily) is well worth it. – Dirbaio May 16 '20 at 14:17
  • Here's an example on how I do the nginx+fpm in a single container: [Dockerfile](https://github.com/Dirbaio/NSMBHD/blob/master/Dockerfile) and [config files](https://github.com/Dirbaio/NSMBHD/tree/master/conf). Feel free to copy it (BSD license). – Dirbaio May 16 '20 at 14:19
  • Thank you @Dirbaio. While I didn't in particular like the idea of having `supervisord` managing the services within single container, the idea of having `(host nginx -> docker nginx -> docker fpm)` did sound alright. And after attempting to do that, I succeeded. – Jovan Perovic May 19 '20 at 11:01