5

I have a Dockerfile with a classic Ubuntu base image and I'm trying to reduce its size. That's why I'm switching to an Alpine base.

In my Dockerfile I have to install Docker, so it's Docker-in-Docker.

FROM alpine:3.9 

RUN apk add --update --no-cache docker

This works well; I can run `docker version` inside my container, at least for the client. For the server I get the classic Docker error saying:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

I know that in Ubuntu, after installing Docker, I have to run

usermod -a -G docker $USER

But what about in Alpine? How can I avoid this error?

PS:

My first idea was to re-use the Docker socket by bind-mounting `/var/run/docker.sock:/var/run/docker.sock`, for example, and thus reduce the size of my image even more, since I wouldn't have to reinstall Docker.

But as bind mounts are not allowed in a Dockerfile, do you know if my idea is possible, and how to do it? I know it's possible with docker-compose, but I have to use a Dockerfile only.

Thanks

iAmoric
  • What are you ACTUALLY trying to do? You're probably doing it wrong. Also, it's not that [easy](https://github.com/jpetazzo/dind). – Mike Doe Mar 12 '20 at 07:30
  • 3
    I think it's clear, trying to run Docker in Docker from Alpine base image. I have the error about the docker deamon – iAmoric Mar 12 '20 at 07:32
  • That I can read by myself. But what are you ACTUALLY trying to accomplish with this. Why you need to have Docker in Docker? – Mike Doe Mar 12 '20 at 07:32
  • You mean why I need Docker-in-Docker? It's the architecture used by my company – iAmoric Mar 12 '20 at 07:35
  • An end-to-end example, including the explanations of all of the constraints you're working under, would be really helpful here. The Docker Hub [docker](https://hub.docker.com/_/docker) image documentation (in addition to starting with admonitions to not use that image or DinD at all) notes that starting the Docker daemon requires a `--privileged` container, which also can't be specified in a Dockerfile. If you have a working Ubuntu-based setup, I might stick with it. – David Maze Mar 12 '20 at 10:08

2 Answers

0

I managed to do it the easy way:

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker --privileged docker:dind sh

I am using this command on my test env!
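For readability, here is the same command with one flag per line and a comment on what each part does (the paths are the Docker defaults; `--privileged` is only needed if a daemon must actually run inside the container, since with the socket mounted the host daemon does the real work):

```shell
# Reuse the host's Docker socket and CLI binary inside the container.
# With the socket mounted, "docker" commands run inside actually talk
# to the host daemon, so no separate daemon needs to start inside.
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /usr/bin/docker:/usr/bin/docker \
  --privileged \
  docker:dind sh
```

Note that mounting `/usr/bin/docker` from the host may fail if the host binary needs libraries the container lacks; the `docker:dind` image already ships its own client, so that mount is optional.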

Affes Salem
-1

You can do that, and your first idea was correct: you just need to expose the Docker socket (`/var/run/docker.sock`) to the "controlling" container. Do that like this:

host:~$ docker run \
                  -v /var/run/docker.sock:/var/run/docker.sock \
                  <my_image>
host:~$ docker exec -u root -it <container id> /bin/sh

Now the container should have access to the socket (I am assuming here that you have already installed the necessary docker packages inside the container):

root@guest:/# docker ps -a

CONTAINER ID        IMAGE                 COMMAND                  CREATED       ...
69340bc13bb2        my_image              "/sbin/tini -- /usr/…"   8 minutes ago ...
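If `docker ps` fails instead, a quick first check from inside the container is whether the socket was actually mounted (a minimal sketch, assuming the default socket path):

```shell
# Inside the container: verify the host's Docker socket is present.
if [ -S /var/run/docker.sock ]; then
    echo "socket mounted: docker commands will talk to the host daemon"
else
    echo "socket missing: run the container with -v /var/run/docker.sock:/var/run/docker.sock"
fi
```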

Whether this is a good idea or not is debatable. I would suggest not doing this if there is any way to avoid it. It's a security hole that essentially throws out the window some of the main benefits of using containers: isolation and control over privilege escalation.
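As a side note on the `usermod` part of the question: if you do go the full route of running a daemon inside an Alpine container, the busybox equivalent of Ubuntu's `usermod -a -G docker $USER` is `addgroup`:

```shell
# Alpine/busybox: add a user to the docker group
# (equivalent of `usermod -a -G docker $USER` on Ubuntu)
addgroup "$USER" docker
```

This is irrelevant when you only mount the host's socket, since access is then governed by the socket's permissions as seen inside the container.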

Z4-tier
  • Ok thanks! About the security issue, so you suggest installing Docker completely and not re-using the socket? Do you know how I can do that with Alpine? I cannot connect to the docker daemon – iAmoric Mar 12 '20 at 07:45
  • 1
    the whole scheme of letting a docker container have access to the docker socket is really not a good one IMO. I'm not sure what you mean by "install docker completely" though. Inside the container, all that gets installed are the packaged binaries that let you control docker via the socket that is mounted insode the container, but the actual containerization is still happening at the host level, there is no way to run "docker in docker" the same as you might be able to run a thick VM inside of a thick VM (or docker inside of a thick VM). Docker doesn't work that way. – Z4-tier Mar 12 '20 at 07:48
  • put another way, the docker socket and the whole kernel exist exclusively at the host level. But you can expose access to the host's docker socket to a container, which can then use the docker binaries to control the docker instances running on the host system. But it's all really just the host kernel with namespace partitioning. Smoke and mirrors. – Z4-tier Mar 12 '20 at 07:53
  • Ok thanks, I knew the containerization was happening at the host level, but I wondered if there was a way to do it anyway. So although it's maybe a security issue, I'll go for mounting the socket `-v /var/run/docker.sock:/var/run/docker.sock` at runtime. Thanks anyway – iAmoric Mar 12 '20 at 07:56
  • FWIW I've done it too. It works; just keep it in the back of your head that it's not a good design, and maybe don't deploy it into production :) – Z4-tier Mar 12 '20 at 07:57
  • The core security issue is that anyone who can access the Docker socket can trivially root the whole host. The best answer to this is just to accept that containers can't directly manipulate other containers and rearchitect your application to not need this. – David Maze Mar 12 '20 at 10:10
  • 1
    @DavidMaze completely agreed, although I am opposed to saying "containers can't directly manipulate other containers" because clearly they can.... It's just that doing so destroys some of the main benefits of containerization. But doing this *does* make some testing scenarios easier to implement, and those valid use cases should be accompanied by dire warnings. – Z4-tier Mar 12 '20 at 16:48