When you mount the docker socket into a container, you give that container control over the instance of docker running on the host.
You can think of this as analogous to giving multiple containers the URL of the same website: whatever any one of them does to the site affects that one site, and those changes are visible to all of them.
In concrete terms, there is no hierarchy of containers under this setup: just multiple containers controlling the same docker daemon that could otherwise be controlled via the docker command on the host. Each container has an equal ability to do pretty much anything on the host via the docker daemon, and the isolation that containers provide by default is almost entirely subverted.
For this reason, this flavour of docker-in-docker has inherent security implications; more on those can be found elsewhere on the internet.
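As a quick illustration, here is a minimal sketch, assuming the official docker CLI image (docker:cli) and a daemon listening on the default socket path:

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps

Because there is only one daemon, the listing shows the host's containers, including the container that ran the command.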
To answer your second question, copying files between containers involves running a command that emits the contents of the file in the source container, and piping it to a command that writes to the destination file in the destination container. For example:
docker exec container1 cat /source/file | docker exec -i container2 bash -c "cat > /dest/file"
Copying multiple files could involve creating a tarball in the source container and expanding it in the destination container:
docker exec container1 tar -C /source -c dir | docker exec -i container2 tar -C /dest -x
As a convenience, and for situations where a shell or tar are not available within a container, docker cp can be used to copy files between a container and the host. By copying a file from the source container to the host, and then from the host to the destination container, you arrive at the same result, at the cost of a bit of temporary storage. For example:
docker cp container1:/source/file file
docker cp file container2:/dest/file
You were onto something, though. An alternative would be to mount a directory from the host into both containers and communicate via that directory. For example:
mkdir shared
docker run -d --name=container1 -v $PWD/shared:/mnt/share image command
docker run -d --name=container2 -v $PWD/shared:/mnt/share image command
Typos notwithstanding, the example above will result in two running containers. Processes in those two containers can share files via /mnt/share, which is backed by the shared/ directory on the host.
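For instance, a minimal sketch of passing a file through the shared directory, assuming both images include a POSIX shell:

docker exec container1 sh -c 'echo hello > /mnt/share/message'
docker exec container2 cat /mnt/share/message

The file written by container1 is immediately visible in container2 (and in ./shared on the host), since all three paths refer to the same directory.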