I am currently working on a CI/CD pipeline on GitLab, using a containerised GitLab runner (a Docker-in-Docker-style setup). I am trying to create a test job for the application I am developing: a miniature FastAPI web app that scrapes some images over the network based on GET requests.
The app is also containerised.
I need to build the app image, run the app, send some requests to it, let it store the scraped images, and then check for the saved images in the location the app saved them to.
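For context, the job script I have in mind is roughly this (the image name, port, endpoint, and output path are all placeholders):

```shell
# Build the app image from the repository (tag is a placeholder)
docker build -t scraper-app:test .

# Run the containerised app in the background
docker run -d --name scraper-app -p 8000:8000 scraper-app:test

# Exercise the scrape endpoint (URL and query string are placeholders)
curl -fsS "http://localhost:8000/scrape?url=http://example.com/pic.png"

# Finally, check that the scraped images landed where the app saves them
OUTPUT_DIR="/data/images"   # placeholder for the app's save location
ls -A "$OUTPUT_DIR"
```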
Sounded easy until I found out that I can't mount the job's directories into the app container: because I am using a mounted /var/run/docker.sock in the GitLab runner, Docker can only mount directories from the host machine (which the GitLab runner has access to, but my job's filesystem lives inside a container).
So I did a lot of reading, and so far the solution I can see is to identify the container I am running in (the GitLab runner one) and mount its volumes into the app container using --volumes-from. The issue is that, to my knowledge, this mounts all of the volumes, but I only need to mount one: the one I added to the GitLab runner's config.toml file as the working volume. I also need to be able to write into that one volume while leaving the others untouched, probably for safety reasons; I don't think I should give the app access to a GitLab runner's /cache or /certs volumes.
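Concretely, the approach I found looks like this (how I detect my own container ID is an assumption on my part; by default Docker sets a container's hostname to its short container ID):

```shell
# Assumption: the job container's hostname is its short container ID
JOB_CONTAINER_ID="$(hostname)"

# Mount *all* of the job container's volumes into the app container --
# which is exactly the problem: I only want the one working volume
docker run -d --name scraper-app \
  --volumes-from "$JOB_CONTAINER_ID" \
  scraper-app:test
```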
Is there a way to mount only certain volumes from a running container, so that I can access the files from both the runner and the app container?
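Ideally, something like this, where only one specific volume is shared (the volume name and mount point are hypothetical):

```shell
# Hypothetical: a single named Docker volume shared by both containers.
# Named volumes are resolved by the host daemon, so I assume the app
# container would see the same data the runner writes there.
SHARED_VOLUME="ci-workdir"          # placeholder name
docker run -d --name scraper-app \
  -v "$SHARED_VOLUME:/data/images" \
  scraper-app:test
```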
Am I able to specify a named volume for the GitLab runner in the config.toml file, so that I can access the files generated by the app throughout the job, or even the whole pipeline?
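What I mean is something along these lines in config.toml (the volume name and mount path are hypothetical, and I'm not sure this is how named volumes behave here):

```toml
[runners.docker]
  # A named volume (no leading /) rather than a host-path bind mount;
  # "ci-workdir" and its mount point are placeholders
  volumes = ["/cache", "ci-workdir:/builds/work"]
```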
Or, if I am doing something very stupid, is there a more elegant way of testing my app's side effects and functionality using GitLab CI/CD pipelines?