I'm running a pipeline with GitLab CI, and I've set up a server with a runner, following GitLab's documentation (https://docs.gitlab.com/runner/install/linux-manually.html, and https://docs.gitlab.com/runner/register/index.html).
The runner is set up with a shell executor, and part of the script sets up two Docker containers to build and serve the project.
Everything seems to work fine, except that the `gitlab-runner` user moves or removes files between stages, and this fails once the build process has created files from within the Docker container. The files created by the Docker container are owned by the user inside the container, and thus `gitlab-runner` doesn't have access to them. Because of this I have a couple of questions:
- Are there any best practices when it comes to building stuff inside containers?
- Is there any way to ensure, from within the container, that the files are owned by the user ID of a user on the host machine? (I've experimented with setting an environment variable to the user ID of the `gitlab-runner` user, but `chown` doesn't seem to work well with environment variables.)
If possible, I would like to keep the build process within the container.
I've found countless articles and answers to similar questions where the conclusion is to give the `gitlab-runner` user root permissions or similar, and I cannot see how that can be advisable, or the proper way to do it.
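To illustrate the second question above: the direction I've been experimenting with is to run the build commands as the host user's UID, so files are created with host-side ownership in the first place. A minimal sketch (not my current setup; `nodejs_1` is the container from the compose file below):

```sh
# Sketch: run npm as the gitlab-runner user's UID/GID so files written to the
# bind-mounted /usr/src end up owned by the host user. Note that the UID usually
# has no passwd entry inside the container, so tools expecting $HOME may complain.
docker exec -u "$(id -u):$(id -g)" nodejs_1 sh -c 'cd /usr/src && npm i'
```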
`.gitlab-ci.yml`:

```yaml
stages:
  - setup
  - testing
  - build

before_script:
  - export USERID=$(id | grep -Po '(?<=uid\=)(\d*)')
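  # (`id -u` would give the same numeric UID more directly)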

setup:
  stage: setup
  environment: development
  tags:
    - mytag
  only:
    - feature/gitlab-ci
    - development
    - master
  script:
    - cd docker
    - cp .env.prod.example .env
    - cp nginx-prod/.env.example nginx-prod/.env
    - cp node-prod/.env.example node-prod/.env
    - docker-compose -f docker-compose.prod.yaml build
    - docker-compose -f docker-compose.prod.yaml down
    - docker-compose -f docker-compose.prod.yaml up -d --force-recreate
    - docker exec nodejs_1 sh -c "cd /usr/src && npm i"
    - docker exec nodejs_1 sh -c "cd /usr/src && bower install"
    # CHOWNUID is set inside the containers by docker-compose.prod.yaml (CHOWNUID=${USERID})
    - docker exec nginx_1 sh -c "chown -R $CHOWNUID:$CHOWNUID /usr/share/nginx/html/bower_components"
    - docker exec nginx_1 sh -c "chown -R $CHOWNUID:$CHOWNUID /usr/share/nginx/html/node_modules"

testing:
  stage: testing
  environment: development
  tags:
    - mytag
  only:
    - feature/gitlab-ci
    - development
    - master
  script:
    - cd docker
    - 'echo "Running tests"'

build:
  stage: build
  environment: production
  tags:
    - mytag
  only:
    - feature/gitlab-ci
    - master
  when: manual
  script:
    - cd docker
    - docker exec nodejs_1 sh -c "cd /usr/src && grunt build"
```
`docker-compose.prod.yaml`:

```yaml
version: '3'
services:
  nginx:
    build: nginx-prod
    environment:
      - CHOWNUID=${USERID}
    env_file:
      - 'nginx-prod/.env'
    ports:
      - '90:80'
    volumes:
      - '..:/usr/share/nginx/html:cached'
      - './logs/nginx:/var/log/nginx:cached'
  nodejs:
    build: node-prod
    user: node
    environment:
      - CHOWNUID=${USERID}
    env_file:
      - 'node-prod/.env'
    tty: true
    volumes:
      - '..:/usr/src:cached'
```
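A variation I've been considering (again just a sketch, assuming `USERID` is exported on the host before `docker-compose` runs) is to start the `nodejs` service as the host UID instead of the `node` user, so everything it writes to the bind mount is host-owned from the start:

```yaml
# Hypothetical docker-compose fragment: run the service as the host user's UID.
# Compose accepts a numeric UID here; the trade-off is that the UID may not map
# to a named user inside the container.
services:
  nodejs:
    build: node-prod
    user: '${USERID}'
    volumes:
      - '..:/usr/src:cached'
```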
Update 1: It seems that `chown` does in fact work well with environment variables. I'm just having issues running `docker exec` with an environment variable that's meant for the container: the host shell tries to resolve the variable before the command is sent to the container.
Update 2: Turns out it was a syntax error on my part. I was enclosing the command in double quotes, but needed to enclose it in single quotes.
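For reference, the working form of the command: the single quotes keep the host shell from expanding `$CHOWNUID`, so it is resolved inside the container, where docker-compose has set it.

```sh
# Single quotes: $CHOWNUID is expanded by the container's shell (where
# docker-compose injected it), not by the host shell.
docker exec nginx_1 sh -c 'chown -R $CHOWNUID:$CHOWNUID /usr/share/nginx/html/bower_components'
```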