My CI pipeline fails at the final destroy stage of molecule test
because the default timeout for removing a Docker container is not long enough.
Here is the error I get:
msg: 'Error removing container c6fff0374c2d8dc2b20ed991152ce8db5bbdf05a635c26648ce3c0a82c491eb2: UnixHTTPConnectionPool(host=''localhost'', port=None): Read timed out. (read timeout=60)'
It seems that my containers are too big and/or my CI runner is not powerful enough for the removal to finish within the default 60-second timeout.
Here is the advice I found on this topic:
- restart the docker service:
systemctl restart docker
- change the timeout using environment variables:
export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120
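Since my tests run on GitLab CI, I assume these exports would translate to job-level variables, roughly like this (simplified fragment, not my actual pipeline):

```yaml
# Hypothetical .gitlab-ci.yml fragment: set the timeouts for the whole job
# so every process molecule spawns inherits them. Job name is a placeholder.
molecule-test:
  variables:
    DOCKER_CLIENT_TIMEOUT: "240"
    COMPOSE_HTTP_TIMEOUT: "240"
  script:
    - molecule test
```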
Restarting docker doesn't solve my issue and is not convenient on my CI runner anyway.
I tried adding the environment variables in molecule.yml like this:
provisioner:
  name: ansible
  env:
    MOLECULE_NO_LOG: "false"
    DOCKER_CLIENT_TIMEOUT: "240"
    COMPOSE_HTTP_TIMEOUT: "240"
But Docker doesn't seem to pick them up, since I still get the same error message specifying (read timeout=60).
To no avail, I also tried defining them in the driver section of molecule.yml:
driver:
  name: docker
  env:
    DOCKER_CLIENT_TIMEOUT: "240"
    COMPOSE_HTTP_TIMEOUT: "240"
The only way I can get my job to end successfully is by running the tests against a single host at a time, which I guess reduces the resources my CI runner needs to remove the containers within 60 seconds. However, it is not an appropriate solution since it artificially complicates my job definitions.
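What I mean by running one host at a time is, roughly, one CI job per molecule scenario, something like this hypothetical fragment (job and scenario names are placeholders):

```yaml
# Hypothetical .gitlab-ci.yml fragment: one job per scenario, so each run
# only has a single container to destroy within the 60-second window.
molecule-host1:
  script:
    - molecule test -s host1

molecule-host2:
  script:
    - molecule test -s host2
```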
Isn't there a way to provide environment variables to the Docker driver?
For the record, this is my setup:
- Python 3.6.8
- ansible 2.10.3
- molecule 3.2.0 using python 3.6
    - ansible:2.10.3
    - delegated:3.2.0 from molecule
    - docker:0.2.4 from molecule_docker
- Docker version 19.03.14, build 5eb3275d40
- GitLab Community Edition 13.7.1
- gitlab-runner 13.6.0