
The AWS CLI v2 documentation presents an option and guide for installing and configuring the CLI via Docker. The guide is straightforward to follow, and the container works fine, with the key items being:

  • mounting the local .aws directory to provide credentials to the container
  • mounting $PWD for any I/O work required

I'm using it for S3 and realized that any files I copy from S3 to my local drive show up as owned by root.

>docker run --rm -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
download: s3://xxx/hello to ./hello
>ls -l
total 0
-rw-r--r-- 1 root root 0 Oct  2 09:43 hello

This makes sense, as the process runs as root inside the container, but it isn't ideal. There isn't any other user in the container, so I can't simply run "as" kirk.

>docker run --rm -u kirk -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
docker: Error response from daemon: unable to find user kirk: no matching entries in passwd file.

Is there a way to mount the volume "as" a user or by delegating user access to the container? I don't care (& not sure I can control) the user inside the container, but I would like the process to run in the context of a user on the host system. What's the right approach here?

Kirk Broadhurst

1 Answer


You can run a container as a user that doesn't exist inside the image by passing numeric IDs to -u, i.e. -u ${UID}:${GID}. For example:

docker run --rm \
    -u 1000:1000 \
    -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
    -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
    -v ${PWD}:/aws:rw \
    amazon/aws-cli s3 cp s3://devops-example/lolz.gif .

... will copy the file as UID 1000 and GID 1000.
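
If you'd rather not hard-code the IDs, a variant of the same command (assuming a Linux or macOS host where the id utility is available) substitutes the invoking host user's UID and GID at run time, so the downloaded file ends up owned by whoever ran the command:

# run the container as the invoking host user so downloaded files are owned by that user
docker run --rm \
    -u "$(id -u):$(id -g)" \
    -e AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} \
    -e AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
    -v ${PWD}:/aws:rw \
    amazon/aws-cli s3 cp s3://devops-example/lolz.gif .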

Note: this example uses the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables to pass credentials instead of mounting the credentials file. The full list of supported environment variables is available in the AWS CLI documentation.
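
If you'd prefer to keep mounting the credentials file from the question rather than exporting keys, a rough sketch is to mount ~/.aws read-only at some container path (the /aws-config path below is an arbitrary choice for illustration) and point the CLI at it with the AWS_SHARED_CREDENTIALS_FILE and AWS_CONFIG_FILE environment variables, since $HOME won't resolve to /root when running as an arbitrary UID:

# mount the host's ~/.aws read-only and tell the CLI where to find it
docker run --rm \
    -u "$(id -u):$(id -g)" \
    -v ${HOME}/.aws:/aws-config:ro \
    -e AWS_SHARED_CREDENTIALS_FILE=/aws-config/credentials \
    -e AWS_CONFIG_FILE=/aws-config/config \
    -v ${PWD}:/aws:rw \
    amazon/aws-cli s3 cp s3://devops-example/lolz.gif .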

(Screenshot: AWS CLI copy as UID:GID 1000)

masseyb