I have the same issue in my workflows when working with a mix of services, docker actions and plain old docker commands in shell steps. At the moment, I only see two possible workarounds:
- Run everything on the host network. Services can publish their ports on the host using the `ports` field. For example:

  ```yaml
  services:
    redis:
      image: redis
      ports:
        - 6379:6379
  ```

  In your other steps and actions, pass the `--network host` option to docker. To access any of the containers, just connect to `localhost:<port>`. This works both from the shell of your steps and from within the containers. It's the simplest solution. Unfortunately, you might have port collisions between services, and I don't know whether doing this on GitHub-hosted runners has any serious security implications.
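  For illustration, a job using this workaround might look like the sketch below (the `redis` service and port come from the example above; the `redis-cli` calls are assumptions and require the client to be available):

  ```yaml
  steps:
    # From the step shell, the published service is reachable on localhost
    - name: Ping Redis from the runner shell
      run: redis-cli -h localhost -p 6379 ping

    # A container on the host network sees the same localhost ports
    - name: Ping Redis from another container
      run: docker run --rm --network host redis redis-cli -h localhost -p 6379 ping
  ```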
- You can start your containers in the network created by the runner using `--network ${{ job.container.network }}`. By passing `--cidfile $CID_FILE`, you can store the ID of the container in the file `$CID_FILE`. From there you can use `docker inspect` to output the container's IP address. That way, even if the container names don't resolve, you can still connect from one container to another using IP addresses. Here is how it can be implemented as a simple composite action:
  ```yaml
  name: Docker start container
  description: Start a detached container
  inputs:
    image:
      description: The image to use
      required: true
    name:
      description: The container name
      required: true
    options:
      description: Additional options to pass to docker run
      required: false
      default: ''
    command:
      description: The command to run
      required: false
      default: ''
  outputs:
    cid:
      description: Container ID
      value: ${{ steps.info.outputs.cid }}
    address:
      description: Container IP address
      value: ${{ steps.info.outputs.address }}
  runs:
    using: composite
    steps:
      - name: Pull
        shell: bash
        run: docker pull ${{ inputs.image }}
      - name: Run
        env:
          CID_FILE: ${{ inputs.name }}.cid
        shell: bash
        run: >
          docker run -d
          --name ${{ inputs.name }}
          --network ${{ job.container.network }}
          --cidfile $CID_FILE
          ${{ inputs.options }}
          ${{ inputs.image }}
          ${{ inputs.command }}
      - name: Info
        id: info
        env:
          CID_FILE: ${{ inputs.name }}.cid
        shell: bash
        run: |
          CID=$(cat "$CID_FILE")
          ADDR=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$CID")
          echo "cid=$CID" >> "$GITHUB_OUTPUT"
          echo "address=$ADDR" >> "$GITHUB_OUTPUT"
  ```
This is probably safer, but besides the added complexity it has one major drawback: the container addresses are not known when the job starts. This means you cannot pass any of these IP addresses to other containers through job-wide environment variables.
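To make the drawback concrete, here is a hypothetical invocation of the action above (the checkout step and the local action path `./.github/actions/docker-start-container` are assumptions): the `address` output is only usable in steps that run *after* the action, never in `env` declared at the job level.

```yaml
steps:
  - uses: actions/checkout@v4
  - uses: ./.github/actions/docker-start-container  # hypothetical path to the action above
    id: redis
    with:
      image: redis
      name: my-redis
  - name: Use the address in a later step
    run: redis-cli -h ${{ steps.redis.outputs.address }} -p 6379 ping
```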