tl;dr
- application code requires a build step (pulling in dependencies)
- multiple containers need the same "built" code
Q: what's a good strategy / workflow to achieve that with docker / docker-compose?
Long
We're in the process of dockerizing a PHP application with multiple components (containers/services), e.g.
- Worker nodes (PHP processes kept alive via supervisor)
- Scheduler (governing the workers and running recurring tasks in cron)
- PHP-FPM/Nginx (web interface)
The services are defined in a docker-compose file. During development, we mount the application code into each container as a volume from a directory on the host, so that we see changes "immediately" in each service (Example). Life was good.
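For reference, a stripped-down sketch of what the development compose file looks like (service names and paths are illustrative, not our exact files):

# docker-compose (development, sketch): every service bind-mounts the code from the host
version: '3.7'
services:
  php-fpm:
    build: ./php-fpm
    volumes:
      - ../:/var/www/current   # application code on the host
  scheduler:
    build: ./scheduler
    volumes:
      - ../:/var/www/current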
We are now setting up a CI/CD environment based on Jenkins that should build (+ test) the containers and push them to the registry afterwards. Since the "mount from host" approach is no longer possible, I'm now wondering what the best way is to get the application code into each container.
Three things in our setup make this imho particularly complicated:
- we have multiple containers that need the same application code
- the "build-artifact" is not a single, self-container binary (as we would have e.g. with go) but "all of our code + the installed dependecies" ==> "a lot of files" (slow...)
- there is a build step involved that requires software that is not needed in the final image
The solution for "3." is usually: Use multi stage builds. We do that. But: all the examples out there seem to assume the built code will only be used in one other container (which isn't true in our case, see 1.)
What we currently do
- folder structure
application-code/
    .docker/
        builder/
            Dockerfile
        php-fpm/
            Dockerfile
        docker-compose.yml
        build.sh
    index.php
- introduce an additional "builder" container that "builds" the application (gets "all" application code as build context; runs "composer install")
# ./builder/Dockerfile
# (base image is illustrative - anything with PHP + composer works, e.g. the official composer image)
FROM composer:2
COPY ./ /codebase
RUN cd /codebase && composer install
- "copy" from this builder in each container that needs the application code, e.g. via
# ./php-fpm/Dockerfile
# (base image is illustrative - any PHP-FPM image works)
FROM php:8.2-fpm
ARG APP_CODE_PATH="/var/www/current"
# "builder" is the image built from ./builder/Dockerfile (tagged "builder" via docker-compose)
COPY --from=builder --chown=www-data /codebase ${APP_CODE_PATH}
- orchestrated via docker-compose
# ./docker-compose.yml
version: '3.7'
services:
  builder-ci:
    image: builder
    build:
      # ../ contains the "raw" application code
      context: ../
      dockerfile: ./.docker/builder/Dockerfile
  php-fpm:
    build:
      context: .
      dockerfile: ./php-fpm/Dockerfile
      args:
        - APP_CODE_PATH=/var/www/current
- build via
# build.sh
## build the builder image first (the other images COPY --from it)
docker-compose -f ./.docker/docker-compose.yml --project-directory ./.docker build builder-ci
## build the rest
docker-compose -f ./.docker/docker-compose.yml --project-directory ./.docker build --parallel
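For completeness, a sketch of how the Jenkins job chains this (assuming each service in the compose file gets a registry-prefixed image: name, which isn't shown above):

## CI steps (sketch) - run from the repository root
sh ./.docker/build.sh
docker-compose -f ./.docker/docker-compose.yml --project-directory ./.docker push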
Pro
- "smaller" images (php-fpm won't have composer installed)
- application code is only built once and then copied over
Contra
- the builder container serves no other purpose but to "build" ==> doesn't feel clean
- building the "builder" has to be done before any other container is built
- that means we have an additional
Alternatives
- don't use a builder container but "incorporate" the build step into e.g. the "Scheduler" container by using multi-stage builds (so we don't end up with composer in the final image); see the first sketch below this list
  - gets rid of the "builder" - but now all other services depend on "Scheduler" ==> feels even more dirty
- use a volume to share the code (sketched below this list)
  - we don't have to "copy" files into images but can simply "mount the volume" ==> feels "clean" / no "duplication of files" (I first thought that this was a really good approach...)
  - BUT:
    - you can't populate volumes during build, so you need to "run" the container in order to get the application code "into" the container ==> we suddenly not only have a builder container, but we also need to "run" it to populate the volume
    - the containers are not "self-contained" any more, i.e. pulling "just the Scheduler" from the registry won't work - we MUST also have the volume in place AND it has to be populated by the builder ==> orchestration becomes more complicated
    - the volume is not ephemeral, i.e. it will contain "old" application code until it is refreshed ==> this might lead to confusion and unexpected behavior
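To make alternative 1 concrete, the Scheduler Dockerfile would look roughly like this (base images and paths are assumptions on my side), and the other services would then COPY --from=scheduler instead of from a dedicated builder image:

# ./scheduler/Dockerfile (sketch for alternative 1)
FROM composer:2 AS build
COPY . /codebase
RUN composer install --working-dir=/codebase

FROM php:8.2-cli
COPY --from=build /codebase /var/www/current
# ... supervisor / cron setup omitted ...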
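And a sketch of alternative 2 (names are again just placeholders): a named volume shared by all services, which the builder has to populate at runtime:

# docker-compose.yml (sketch for alternative 2) - share code via a named volume
version: '3.7'
volumes:
  app-code:
services:
  builder:
    image: builder
    # the builder must actually RUN (not just be built) to fill the volume
    command: ["cp", "-a", "/codebase/.", "/shared/"]
    volumes:
      - app-code:/shared
  scheduler:
    image: scheduler
    volumes:
      - app-code:/var/www/current
    # plus some way to make sure the copy has finished before this starts

This is exactly where the contra points above come from: the builder has to run, and the volume outlives the images.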
Links
- Can containers share a framework? (related question)
- https://github.com/moby/moby/issues/14080 (mounting volumes during build is not supported)
- https://docs.docker.com/develop/develop-images/multistage-build/ (builder pattern vs multi stage build)