You're trying to tell Compose to create containers in two different Docker instances. One container should go in the normal host Docker environment, but you're asking for the other to be run in the Docker-in-Docker environment. Compose doesn't have any way to do this; it expects everything it creates (containers, networks, volumes) to belong to "the same" Docker.
To make this work, you should make a normal Compose setup that will run your application. Of particular note, since you're planning to run this in an environment that's not your local system, make sure you don't have any `volumes:` that reference local files. This is the same Compose setup that could run the application locally, or in principle against a remote Docker.
```yaml
version: '3.8'
services:
  myapp:
    image: foox/myapp:1.1.0
    ports:
      - 8080:8080
```
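For contrast, a bind mount like the hypothetical one below is exactly what to avoid here: the host path on the left is resolved on the machine where the Docker daemon runs, which in a remote or DinD setup is not your local filesystem.

```yaml
# What NOT to include when targeting a remote or nested Docker.
# (./local-config and /app/config are hypothetical example paths.)
services:
  myapp:
    volumes:
      - ./local-config:/app/config
```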
You can run `docker-compose up -d` on your host system normally, and this should be enough for most uses -- you almost never need Docker-in-Docker, and in my experience it is very unusual to see it at all.
If you do need DinD, you need to start the Docker daemon container, and then you need to tell the various Docker tools how to reach it. If you wanted to launch it via Compose, you'd need a separate Compose file that only included the DinD container:
```yaml
version: '3.8'
services:
  docker:
    image: docker:dind
    privileged: true
    ports:
      - '127.0.0.1:12375:2375'
```
Then you need to start this nested Docker daemon and point Compose at it:

```sh
docker-compose -f docker-compose.dind.yaml up -d
DOCKER_HOST=tcp://localhost:12375 docker-compose up -d
```
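`DOCKER_HOST` is honored by the Docker CLI as a whole, not just Compose, so you can sanity-check that commands reach the nested daemon rather than the host's (the port here matches the `12375` mapping I chose above):

```sh
# Runs against the DinD daemon; its container list starts out empty,
# unlike your host daemon's.
DOCKER_HOST=tcp://localhost:12375 docker ps

# Equivalent one-off form using the CLI's -H flag.
docker -H tcp://localhost:12375 info
```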
Note that this setup allows unencrypted and unauthenticated access to the nested Docker daemon; while it's running in a container, it is a privileged container, and any local process could still do real damage through it. The Docker Hub `docker` image page describes a more robust setup with TLS certificates, and an even better approach of not publishing `ports:` out of the DinD container at all.
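As a sketch of the TLS direction that page describes: setting `DOCKER_TLS_CERTDIR` makes the `docker:dind` image generate its own certificates and serve the API with TLS on port 2376 instead of 2375. Clients then need the generated client certificates to connect; the volume name and published host port below are my own choices, not anything the image mandates.

```yaml
version: '3.8'
services:
  docker:
    image: docker:dind
    privileged: true
    environment:
      # Tells the image to generate TLS certs under this path
      # and serve the API over TLS on port 2376.
      - DOCKER_TLS_CERTDIR=/certs
    volumes:
      # Share the generated client certs with whatever runs the docker CLI.
      - docker-certs:/certs/client
    ports:
      - '127.0.0.1:12376:2376'
volumes:
  docker-certs:
```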
If all of this makes the Docker-in-Docker setup sound hard to use, it is, and conventional advice is to avoid it, even in CI systems where it might seem useful to give each build its own fully-isolated Docker setup.