
I have this pod definition:

apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 998 # Group ID of docker group on the node

  containers:
    - name: somecontainer
      image: someimage
      resources:
        requests:
          memory: "8G"
          cpu: "6"
        limits:
          memory: "10G"
          cpu: "8"
      imagePullPolicy: Always
      tty: true
      command:
        - cat
      volumeMounts: 
        - mountPath: /var/run 
          name: docker-sock 
  volumes:
    - name: docker-sock 
      hostPath:
        path: /var/run

As you can see, it mounts the Docker socket from the node into the pod.

Then, from within the pod, I run a docker run command like this:

docker run --rm -v ${helmChartFolder}:/chart:ro ubuntu ls -lah

The mounted /chart folder inside the container is completely empty, although on the pod itself it's not.

What could be the reason for this? I kept tweaking the -v arguments and even tried --mount, with no luck so far.

Wazery
    Kubernetes nodes aren't necessarily running Docker daemons, and since you can pretty trivially use Docker to root the host, it's a huge security exposure. Kubernetes's environment is also complex enough that mixing Docker and Kubernetes containers won't necessarily work well, if it's even possible. Can you use the Kubernetes API instead? Do you _need_ runtime access to a container system? – David Maze Sep 29 '21 at 14:50

1 Answer


When you do a bind mount on a docker container, the host side of the path is always interpreted from the point of view of the docker daemon. Assuming your ${helmChartFolder} is a path inside this pod, an empty directory in the container suggests that the daemon's host doesn't have anything at that path.
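To make the two viewpoints concrete, here's a local sketch (no docker needed; all paths are hypothetical) of why the container sees an empty directory:

```shell
# Simulate the pod's filesystem and the node's filesystem (hypothetical paths)
mkdir -p /tmp/pod-fs/workspace/chart
touch /tmp/pod-fs/workspace/chart/values.yaml
mkdir -p /tmp/node-fs

# Inside the pod, the chart directory clearly has content:
ls /tmp/pod-fs/workspace/chart

# But dockerd resolves '-v /workspace/chart:/chart' against *its* host
# filesystem (the node). If nothing exists there, dockerd creates the
# directory empty -- which is exactly what the container then sees:
mkdir -p /tmp/node-fs/workspace/chart
ls -A /tmp/node-fs/workspace/chart   # empty
```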

If you want to share the data in that folder to a docker container, then your best bet will be to copy it.

Here's one of many ways to achieve that:

docker create --name mycontainer -v /chart -w /chart ubuntu ls -lah
tar -C ${helmChartFolder} -c . | docker cp - mycontainer:/chart
docker start -a mycontainer
docker rm -v mycontainer

This creates a container with an anonymous volume at /chart (using the anonymous volume is optional). The 'docker cp' copies the contents of the directory in question to /chart (I use the tar pipe trick to be very explicit that all the contents of the directory should end up in /chart directly). Finally, I start the container attached and see the output showing the contents of /chart. The -v option for docker rm says to clean up anonymous volumes explicitly, so I end up with a clean state.
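The tar pipe on its own works like this (a local sketch with hypothetical paths, no docker involved):

```shell
# Stage a fake chart directory and a destination (hypothetical paths)
mkdir -p /tmp/helmchart /tmp/dest
echo "name: mychart" > /tmp/helmchart/Chart.yaml

# 'tar -C dir -cf - .' changes into the directory first, so the archive
# contains ./Chart.yaml rather than a nested helmchart/ prefix -- the
# contents land directly in the extraction target, just as they do when
# piped into 'docker cp - mycontainer:/chart':
tar -C /tmp/helmchart -cf - . | tar -C /tmp/dest -xf -

ls /tmp/dest   # Chart.yaml sits directly in the destination
```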

This approach isn't really specific to running a docker client in a Kubernetes pod; it has more to do with running a docker client against a dockerd that is effectively remote. The dockerd isn't running inside that pod, so it has its own filesystem and environment. The socket hostPath is just one way to get access to a dockerd that's somewhere else.
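Another way to see the split is that the docker CLI is only a client: it talks to whichever daemon DOCKER_HOST (or the default unix socket) points at. A sketch, assuming the hostPath mount from the question:

```shell
# The CLI sends API calls over this socket; in the pod above it is the
# *node's* socket, surfaced into the pod via the hostPath mount:
export DOCKER_HOST=unix:///var/run/docker.sock

# Every '-v host:container' path in a subsequent 'docker run' is resolved
# by that daemon on its own filesystem (the node), never by this shell.
# For example, 'docker info --format "{{.Name}}"' would report the node's
# hostname, not the pod's.
echo "$DOCKER_HOST"
```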

I use a similar 'docker cp' setup in an older CI/CD system where the job runs the docker CLI against a remote dockerd instance. I can copy files to or from the stopped container, which is handy for getting the container set up before it runs, or for grabbing build artifacts after it completes.

programmerq