In my experience, GitHub's `container:` instruction causes more confusion than simply running whatever you want on the runner itself, as if it were your own machine. The majority of the tests I run on GitHub Actions run in containers, and some require private DockerHub images.
I always do this:

- Create a `docker-compose.yml` for development use, so I can test things locally.
- In CI you usually want slightly different things in your `docker-compose.yml` (for example, no volume mappings); if that is the case, I create another `docker-compose.yml` in a `.ci` subfolder.
- My `docker-compose.yml` contains a `test` service that runs whatever test (or test suite) I want.
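As an illustration (the service layout, build context, and test command here are hypothetical placeholders, not part of my actual setup), the two compose files might look like:

```yaml
# docker-compose.yml (development)
version: "3"
services:
  test:
    build: .
    volumes:
      - .:/app          # mount the source for fast local iteration
    command: pytest     # placeholder; run whatever your test suite needs
```

```yaml
# .ci/docker-compose.yml (CI) — same service, but no volume mapping,
# so the image is tested exactly as it was built
version: "3"
services:
  test:
    build:
      context: ..       # build from the repo root, since this file lives in .ci/
    command: pytest
```

The only intended difference between the two is the volume mount; everything else stays identical so CI exercises the same image you use locally.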
Here is a sample GitHub Actions workflow file I am using:
```yaml
name: Test
on:
  pull_request:
  push: { branches: master }
jobs:
  test:
    name: Run test suite
    runs-on: ubuntu-latest
    env:
      COMPOSE_FILE: .ci/docker-compose.yml
      DOCKER_USER: ${{ secrets.DOCKER_USER }}
      DOCKER_PASS: ${{ secrets.DOCKER_PASS }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Login to DockerHub
        run: docker login -u $DOCKER_USER -p $DOCKER_PASS
      - name: Build docker images
        run: docker-compose build
      - name: Run tests
        run: docker-compose run test
```
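Because the same compose files drive both environments, the CI steps can be reproduced locally with the same commands; the only difference is the `COMPOSE_FILE` variable (and the DockerHub credentials, which are assumed to be exported in your shell if private images are involved):

```shell
# Reproduce the CI job locally (sketch, assuming DOCKER_USER/DOCKER_PASS
# are set in your environment when private images are needed)
export COMPOSE_FILE=.ci/docker-compose.yml   # omit this line to use the dev compose file
docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
docker-compose build
docker-compose run test
```

`docker-compose` reads the `COMPOSE_FILE` environment variable to decide which file to use, which is why the workflow above needs no `-f` flags.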
Of course, this entails setting up the two secrets mentioned above, but other than that, I have found this method to be:

- Reliable
- Portable (I easily switched from Travis CI using the same approach)
- Consistent with the dev environment
- Easy to understand and reproduce, both locally and in CI