I've built a Swarm-based Docker setup for our on-prem DevOps pipelines. Docker-based pipeline agents are started and can perform build operations. A few of those agents are also able to build new Docker images - this was enabled by binding the host's `\\.\pipe\docker_engine` named pipe into those containers.
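For reference, the binding is just a named-pipe mount along these lines (image name and exact Swarm mount syntax are placeholders / from memory, not necessarily what we run verbatim):

```powershell
# Give an agent container access to the host's Docker engine (Windows containers)
# by mounting the engine's named pipe into it. "build-agent:latest" is a placeholder.
docker run -d --name build-agent `
  -v \\.\pipe\docker_engine:\\.\pipe\docker_engine `
  build-agent:latest

# Roughly equivalent when created as a Swarm service, using an npipe mount:
docker service create --name build-agent `
  --mount type=npipe,source=\\.\pipe\docker_engine,target=\\.\pipe\docker_engine `
  build-agent:latest
```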
This generally works... however, if the build process fails it tends to leave a lot of garbage behind. That can be partially alleviated by using `--force-rm` (see the sketch below), but ideally I'd like the containers to clean up after themselves so that the next run is "pristine" regardless of what was run inside them. I'd also like to let these special containers launch new containers for more complex CI pipelines, but again - I'm worried about them not cleaning up after themselves. Note: I'm less worried about security since this is all "in-house" stuff.
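The partial mitigation I mean is simply passing `--force-rm` to the image builds; the tag used here is just an example:

```powershell
# --force-rm removes intermediate containers even when a build step fails,
# whereas the default --rm only removes them after a successful build.
docker build --force-rm -t my-app:ci .
```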
Is it possible to have a container which can launch nested containers inside of itself, while making sure that if this top-level container is stopped and removed, everything it created is stopped and removed as well?