
When running code on an EC2 instance, the SDK I use to access AWS resources automatically talks to a link-local web server at 169.254.169.254 (the instance metadata service) and retrieves that instance's AWS credentials (access key, secret key) that are needed to talk to other AWS services.
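As a rough sketch of what the SDK does under the hood (the role name below is a hypothetical placeholder; on a real instance you would first list the available roles by requesting the `security-credentials/` path with no role name appended):

```shell
# Build the metadata-service URL the SDK queries for role credentials.
# "my-instance-role" is a placeholder role name.
IMDS="http://169.254.169.254/latest"
ROLE_NAME="my-instance-role"
CREDS_URL="$IMDS/meta-data/iam/security-credentials/$ROLE_NAME"

# On an actual EC2 instance you would then fetch the credentials as JSON:
#   curl -s "$CREDS_URL"
# The response contains AccessKeyId, SecretAccessKey, and Token fields,
# which is exactly what the SDK parses out.
echo "$CREDS_URL"
```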

There are also other options, such as setting the credentials in environment variables or passing them as command-line arguments.

What is the best practice here? I would really prefer to let the container access 169.254.169.254 (by routing the requests), or even better, run a proxy container that mimics the behavior of the real server at 169.254.169.254.

Is there already a solution out there?


2 Answers


The EC2 metadata service will usually be available from within Docker (unless you use a more custom networking setup; see this answer on a similar question).

If your Docker network setup prevents the metadata service from being accessed, you can use the ENV directive in your Dockerfile or pass credentials directly with `docker run -e`, but keep in mind that credentials from IAM roles are automatically rotated by AWS, so statically baked-in values will expire.
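A minimal sketch of the environment-variable route (the key values here are fake placeholders; real values would come from an IAM user or a prior assume-role call):

```shell
# Hypothetical static credentials; never commit real ones.
AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
AWS_SECRET_ACCESS_KEY="exampleSecret"

# Assemble the --env flags once, then reuse them for any container:
DOCKER_ENV="--env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY"

# With Docker installed you would then run, e.g.:
#   docker run --rm $DOCKER_ENV amazon/aws-cli sts get-caller-identity
echo "$DOCKER_ENV"
```

Because these values go stale when the role's credentials rotate, prefer the metadata service whenever it is reachable from the container.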

dcro
  • mmm, so I guess my misunderstanding comes from my experimentation with boot2docker (on Mac), where networking is awkward, or at least different. So basically things should just work. I need to try this. – Ali Sep 18 '14 at 14:38
  • 2
    The one issue I have with this architecture (let AWS SDK/CLI within the container hit the EC2 metadata endpoint for credentials) is that I want a fine-grain control over what permissions a container will have. One container might only be able to write to S3, whilst other I might want to not have any S3 permissions, and instead just let it publish to SNS. This design would mean that I need to add superset of permissions to EC2, and all my containers would have the same. – Daniel Gruszczyk Jan 12 '18 at 09:29
  • 1
    @DanielGruszczyk - that is where ECS comes in. You can assign a role to an ECS service – Sean Sep 06 '22 at 02:38
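To sketch the ECS task-role approach from the comment above (the family name, image, and role ARN are all hypothetical placeholders), a task definition pins an IAM role to the task, so each container gets only the permissions its own role grants:

```shell
# Write a minimal ECS task definition that attaches a narrowly scoped
# task role; every value below would be replaced with real ones.
cat > task-def.json <<'EOF'
{
  "family": "s3-writer-task",
  "taskRoleArn": "arn:aws:iam::123456789012:role/s3-write-only",
  "containerDefinitions": [
    { "name": "app", "image": "myimage", "memory": 128, "essential": true }
  ]
}
EOF

# With ECS permissions configured, you would then register it:
#   aws ecs register-task-definition --cli-input-json file://task-def.json
```

This sidesteps the "superset of permissions on the EC2 role" problem: two tasks on the same host can carry different task roles.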

Amazon does have mechanisms for allowing containers to access IAM roles via the SDK, either by routing/forwarding requests through the ECS agent container or through the host. There is way too much to copy and paste here, but note that using --net host is the LEAST recommended option, because without additional filters it allows your container full access to anything its host has permission to do.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

declare -a ENVVARS
declare AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
get_aws_creds_local () {
   # Use this to get credentials on a non-AWS host, assuming you've set them
   # via some mechanism in the past. Don't pass a profile to gitlab-runner,
   # because it doesn't see the ~/.aws/credentials file where it would look
   # up profiles.
   awsProfile=${AWS_PROFILE:-default}
   AWS_ACCESS_KEY_ID=$(aws --profile "$awsProfile" configure get aws_access_key_id)
   AWS_SECRET_ACCESS_KEY=$(aws --profile "$awsProfile" configure get aws_secret_access_key)
   AWS_SESSION_TOKEN=$(aws --profile "$awsProfile" configure get aws_session_token)
}

get_aws_creds_iam () {
  # The assume-role response nests the values under .Credentials with the
  # field names AccessKeyId, SecretAccessKey, and SessionToken.
  TEMP_ROLE=$(aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-Session)
  AWS_ACCESS_KEY_ID=$(echo "$TEMP_ROLE" | jq -r .Credentials.AccessKeyId)
  AWS_SECRET_ACCESS_KEY=$(echo "$TEMP_ROLE" | jq -r .Credentials.SecretAccessKey)
  AWS_SESSION_TOKEN=$(echo "$TEMP_ROLE" | jq -r .Credentials.SessionToken)
}

# Call whichever of these matches your environment (calling both means the
# second call overwrites the first):
get_aws_creds_local

get_aws_creds_iam

ENVVARS=("AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" "AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN")

# passing creds into GitLab runner
 gitlab-runner exec docker stepName $(printf " --env %s" "${ENVVARS[@]}")

# using creds with a docker container
docker run -it --rm $(printf " --env %s" "${ENVVARS[@]}") amazon/aws-cli sts get-caller-identity
dragon788
  • can you give a small example where I should put `--net host` part? – Jananath Banuka Jul 24 '20 at 08:39
  • Anywhere after `docker run` and before the container name you are running like `ubuntu` or `redis`. – dragon788 Jul 24 '20 at 11:36
  • I am using `docker-in-docker`, so in that case, is this going to work? – Jananath Banuka Jul 24 '20 at 12:14
  • These are the two links I used to set up the runner, I'll add the lines of code that enable passing in the credentials shortly. https://gist.github.com/adamstraube/b5c8eae3034f3d8d5561cfb143751d7e#gistcomment-3350289 https://adamstraube.github.io/using-gitlab-runner-locally-with-docker-in-docker-on-windows-10-and-wsl/ – dragon788 Jul 25 '20 at 18:13
  • I don't have `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` or anything. But I only have access to one of the EC2 instances in that AWS account, so I don't have to use any of those credentials. I am not authorized to create new users or new tokens or anything. When I use `gitlab runner` `shell` executor it works fine. But the problem is with the `docker` executor now. – Jananath Banuka Jul 26 '20 at 07:24
  • If you run `aws get-caller-identity` under the shell runner it may give you an identity that you can use as the `TEMP_ROLE`. If you run the docker command from the bottom without the printf block, does it show an identity? – dragon788 Jul 26 '20 at 15:39
  • 1
    @dragon788 `aws sts get-caller-identity` – Joe Bowbeer Sep 01 '21 at 06:44