
I'm running a Docker container in a Fargate ECS task. In my Docker container I have enabled an SSH server so that I can log in to the container directly if I have to debug something. This works fine: I can SSH to my task IP and investigate my issues.

But now I've noticed an issue when accessing any AWS service via SSH inside the container: when I log in to the container via SSH, configuration files such as ~/.aws/credentials and ~/.aws/config are missing, and I can't issue any CLI commands, e.g. checking the caller identity, which is supposed to be my task ARN.

The strange thing is that if I connect to this same task from the ECS instance, I don't have any such issues: I can see my task ARN and reach all the other services. So the ECS task agent is working fine.

So, coming back to SSH connectivity: I noticed I'm getting `404 page not found` from `curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI`. How can I give SSH access the same capabilities as access from the ECS instance? If I can reach `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` in my SSH session, I think everything will work.
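For reference, that 404 is consistent with the variable simply being unset in the SSH session. A quick sketch of the URL concatenation (the URI value below is made up for illustration; the real one is injected by the ECS agent):

```shell
# Hypothetical value for illustration only -- the ECS agent sets the real
# one in the environment of the container's main process (PID 1).
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI="/v2/credentials/example-id"

# The variable already starts with a slash, so it concatenates directly
# onto the endpoint address:
echo "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
# -> http://169.254.170.2/v2/credentials/example-id

# When the variable is unset (as in an SSH login shell), the same curl
# hits the bare endpoint root, http://169.254.170.2, whose root path
# answers "404 page not found".
```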

change198
  • I don't have a final answer for you, but until you figure out the correct solution you could use the following "hack": edit the entrypoint of the images and have them write the `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` environment variable to a file in the container, for example a file with all global variables. That way you will be able to access the credentials from your SSH session. – trallnag Jul 11 '20 at 13:35
  • That's exactly what I did and posted as an answer. Don't know why it was hidden. But thanks. – change198 Jul 11 '20 at 13:50

0 Answers