The AWS CLI v2 documentation presents an option, and a guide, for installing and configuring the CLI via Docker. The guide is straightforward enough to follow, and the container works fine, with the key items being:
- mounting the local `.aws` directory to provide credentials to the container
- mounting `$PWD` for any I/O work required
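For reference, the guide wraps the whole invocation in a shell alias; mine looks roughly like this (alias name and exact paths are my choice, following the guide's pattern):

```
# Wrapper alias per the install guide's pattern; $PWD expands at use time
# because the definition is single-quoted.
alias aws='docker run --rm -it -v "$HOME/.aws:/root/.aws" -v "$PWD:/aws" amazon/aws-cli'
```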
I'm using it for S3 and realized that any files I copy from S3 to my local drive show as owned by `root`.
```
>docker run --rm -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
download: s3://xxx/hello to ./hello
>ls -l
total 0
-rw-r--r-- 1 root root 0 Oct 2 09:43 hello
```
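The ownership isn't just cosmetic; as my regular user I can't modify the downloaded file afterwards (illustrative follow-up in the same directory):

```
>echo more >> hello
bash: hello: Permission denied
```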
This makes sense, as the process runs as `root` in the container, but it isn't ideal. There isn't any other user in the container, so I can't just run "as" `kirk`:
```
>docker run --rm -u kirk -v "$HOME/.aws:/root/.aws:rw" -v "$PWD:/aws:rw" amazon/aws-cli s3 cp s3://xxx/hello .
docker: Error response from daemon: unable to find user kirk: no matching entries in passwd file.
```
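That error matches the image contents: dumping its passwd file (overriding the entrypoint; assuming the image ships a standard `cat`) shows only `root` and the stock system accounts:

```
>docker run --rm --entrypoint cat amazon/aws-cli /etc/passwd
root:x:0:0:root:/root:/bin/bash
(...only the usual bin/daemon/nobody system accounts follow)
```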
Is there a way to mount the volume "as" a user, or to delegate user access to the container? I don't care about (and am not sure I can control) the user inside the container, but I would like the process to run in the context of a user on the host system. What's the right approach here?
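To make the goal concrete, this is the rough shape of what I'm imagining (untested sketch: I'm assuming a numeric `-u UID:GID` bypasses the passwd lookup, `/creds` is just a mount point I made up, and the `AWS_SHARED_CREDENTIALS_FILE`/`AWS_CONFIG_FILE` variables would presumably be needed because `$HOME` inside the container would no longer be `/root`):

```
# Untested sketch: run as my host UID/GID so downloads land owned by me.
# Numeric IDs shouldn't require an /etc/passwd entry; /creds is a made-up path.
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -e AWS_SHARED_CREDENTIALS_FILE=/creds/credentials \
  -e AWS_CONFIG_FILE=/creds/config \
  -v "$HOME/.aws:/creds:ro" \
  -v "$PWD:/aws:rw" \
  amazon/aws-cli s3 cp s3://xxx/hello .
```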