
The code below fails at the line `s3 = boto3.client('s3')`, returning the error `botocore.exceptions.InvalidConfigError: The source profile "default" must have credentials.`

import os
import boto3

def connect_s3_boto3():
    try:
        # Point boto3 at profile 'a', which assumes a role via source_profile
        os.environ["AWS_PROFILE"] = "a"
        s3 = boto3.client('s3')
        return s3
    except:
        raise

I have set up the key and secret using `aws configure`.

My `~/.aws/credentials` file looks like:

[default]
aws_access_key_id = XXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY

My `~/.aws/config` file looks like:

[default]
region = eu-west-1
output = json

[profile b]
region=eu-west-1
role_arn=arn:aws:iam::XX
source_profile=default

[profile a]
region=eu-west-1
role_arn=arn:aws:iam::YY
source_profile=default

[profile d]
region=eu-west-1
role_arn=arn:aws:iam::EE
source_profile=default

If I run `aws-vault exec --no-session --debug a` it returns:

aws-vault: error: exec: Failed to get credentials for a9e: InvalidClientTokenId: The security token included in the request is invalid. status code: 403, request id: 7087ea72-32c5-4b0a-a20e-fd2da9c3c747

mrc
  • What happens when you remove the line: os.environ["AWS_PROFILE"] = "a"? – James Shapiro May 26 '20 at 09:42
  • Does the AWS CLI work from the same system? – John Rotenstein May 26 '20 at 09:43
  • Shouldn't there be matching `[a]` in `~/.aws/credentials` as well? – Marcin May 26 '20 at 09:50
  • @JamesShapiro it returns botocore.exceptions.NoCredentialsError: Unable to locate credentials – mrc May 26 '20 at 10:04
  • Let's start with something simple. Can you use the AWS CLI with the `[default]` profile? For example: `aws s3 ls` (with nothing in the `AWS_PROFILE` environment variable). – John Rotenstein May 26 '20 at 10:58
  • @JohnRotenstein yes, I can do it, and I see the content – mrc May 26 '20 at 11:54
  • Excellent. This means that the credentials file is setup correctly. Next, from the same system that you just used with `aws s3 ls`, try running a Python program to list the buckets (_without_ changing the profile). So, it should use `s3 = boto3.client('s3')` and then `response = s3.list_buckets()`. That should return a list of buckets. – John Rotenstein May 26 '20 at 22:46
  • @JohnRotenstein yes it works. It returns a list of objects with name and creationDate – mrc May 27 '20 at 07:37
  • I'm a little confused about what you are trying to accomplish (eg I don't know why you included `aws-vault` output), but it looks like you are wanting to access S3 resources via role. You can follow advice from [Credentials — Boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html) and add lines like: `session = boto3.Session(profile_name='a')` and `s3_client = session.client('s3')`. This should then let you use that `s3_client` to access resources through the role (see the sketch after these comments). – John Rotenstein May 27 '20 at 07:56
  • I run this script in a Docker container. If I run it in a conda environment it works, because it uses the `aws configure` setup from my local machine. Now that I have moved it into Docker it doesn't work, even though I ran all the statements you suggested inside the container through bash. I also tried running `aws configure` and adding the same working key and secret that I have locally. – mrc May 27 '20 at 09:58
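
A minimal sketch of the `boto3.Session` approach suggested in the comments (profile `a` as defined in the config above; the bucket listing is just a smoke test):

import boto3

# Bind the session to the 'a' profile so boto3 resolves the
# role_arn / source_profile chain from ~/.aws/config.
session = boto3.Session(profile_name='a')
s3_client = session.client('s3')

# Smoke test: list the buckets visible to the assumed role.
response = s3_client.list_buckets()
print([bucket['Name'] for bucket in response['Buckets']])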

4 Answers


I ran into this problem while trying to assume a role in an ECS container. It turned out that in such cases, `credential_source` should be used instead of `source_profile`. It takes the value `EcsContainer` for an ECS container, `Ec2InstanceMetadata` for an EC2 instance, or `Environment` for other cases.
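
For example, profile `a` from the question's config would become something like this when running on ECS (role ARN elided as in the original):

[profile a]
region = eu-west-1
role_arn = arn:aws:iam::YY
credential_source = EcsContainer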

Since the solution is not very intuitive, I thought it might save someone the trouble despite the age of this question.

  • Just had the same problem on Fargate. `credential_source = EcsContainer` saved my day! – Zach Apr 20 '23 at 14:15

I noticed you tagged this question with "docker". Is it possible that you're running your code from a Docker container that does not have your AWS credentials in it?

James Shapiro
  • Yes, I run it from a Docker container, but I do `docker exec -it name bash`, so every statement I run is inside the container – mrc May 27 '20 at 07:38
  • Yeah, but it looks like your credentials are not accessible from your container. What happens when you run "aws s3 ls" from inside of your Docker container? – James Shapiro May 27 '20 at 11:12
  • It says the credentials are not there, but if I run `aws configure` then it works. If after that I retry it, it still fails... – mrc May 27 '20 at 11:49
  • It should automatically work as it does locally, as I understand from this: https://stackoverflow.com/questions/22409367/fetching-aws-instance-metadata-from-within-docker-container/22411611#22411611 – mrc May 27 '20 at 11:50
  • It sounds like when you "retry" it, you are clearing away your credentials, so every time you retry it it will fail. The question you linked to does not seem relevant here. – James Shapiro May 27 '20 at 17:22

Use a docker volume to pass your credential files into the container: https://docs.docker.com/storage/volumes/
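
For example (image and script names here are placeholders; this assumes the containerized process runs as root, so boto3 looks for credentials under /root/.aws):

# Mount the host's AWS config and credentials read-only into the container
docker run --rm -v ~/.aws:/root/.aws:ro my-image python my_script.py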

It is not a good idea to bake credentials into a container image, because anybody who uses that image will have and can use your credentials. This is considered bad practice.

For more information on how to properly deal with secrets, see https://docs.docker.com/engine/swarm/secrets/


In the end, the issue was that Docker didn't have the credentials. Even after connecting through bash and adding them, it didn't work.

So, in the Dockerfile I added:

ADD myfolder/aws/credentials /root/.aws/credentials

This copies my localhost credentials file, created through the AWS CLI with `aws configure`, into the container image. Then I rebuilt the image and it works.
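
For reference, the rebuild and re-run look something like this (image name hypothetical):

# Rebuild so the ADD line bakes the credentials file into the image
docker build -t my-image .
# boto3 inside the container now finds /root/.aws/credentials
docker run my-image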

mrc