49

The aws command is:

aws s3 ls --endpoint-url http://s3.amazonaws.com

Can I load the endpoint-url from a config file instead of passing it as a parameter?

David Parks

4 Answers

40

This is an open bug in the AWS CLI (the issue is linked in the comments below). That issue thread links to a CLI plugin which might do what you need.

It's worth pointing out that if you're just connecting to standard Amazon cloud services (like S3), you don't need to specify --endpoint-url at all. But I assume you're trying to connect to some other private service, and the URL in your example was just, well, an example...

Stephen
  • Yes I am trying to connect to a private service – Siddhivinayak Shanbhag Sep 25 '18 at 10:01
  • Thanks for linking to that bug report. I was happy to find that after 8 years, they finally fixed it....yesterday! Use `export AWS_ENDPOINT_URL=http://localhost:5000` with aws-cli >=1.23.0: https://github.com/aws/aws-cli/issues/1270#issuecomment-1626070761 and https://docs.aws.amazon.com/sdkref/latest/guide/feature-ss-endpoints.html – richardw Jul 08 '23 at 09:31
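
To make that last comment concrete, here is a minimal sketch of the environment-variable approach; the localhost URL is only a placeholder for whatever private endpoint you are targeting:

# Requires a CLI/SDK version with endpoint configuration support
# (see the links in the comment above); the URL below is a placeholder.
export AWS_ENDPOINT_URL=http://localhost:5000      # global override for all services
export AWS_ENDPOINT_URL_S3=http://localhost:5000   # or override a single service only
aws s3 ls                                          # no --endpoint-url flag needed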
33
alias aws='aws --endpoint-url http://website'
F.M
  • I appreciate your suggestion, but the alias hardcodes the aws CLI command; instead, I need the --endpoint-url option to be loaded from a configuration file. – Siddhivinayak Shanbhag Apr 08 '19 at 04:00
  • for some reason that didn't work for me, so I used a function: `function aws() { /usr/local/bin/aws --endpoint-url foo "${@}"; }` then later `unset -f aws` – Neil McGuigan Mar 07 '21 at 19:39
  • This command must be written in `~/.bash_aliases` or `~/.bashrc` – Jess Chen May 29 '21 at 11:45
  • That is actually a smooth solution for test setups inside containers since the only actor using this alias, is the test runner itself. – chronicc Oct 08 '21 at 09:01
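
Pulling those comments together, a sketch of how the alias might be wired up persistently; the endpoint URL is a placeholder:

# In ~/.bash_aliases (or ~/.bashrc) -- placeholder endpoint, adjust to your private service
alias aws='aws --endpoint-url http://localhost:4566'

# Or as a function that can be removed again when you are done:
aws() { command aws --endpoint-url http://localhost:4566 "$@"; }
unset -f aws    # drop the override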
3

Updated Answer

Here is an alternative alias to address the OP's specific need and the comments above:

alias aws='aws $([ -r "$SOME_CONFIG_FILE" ] && sed "s,^,--endpoint-url ," $SOME_CONFIG_FILE) '

The SOME_CONFIG_FILE environment variable could point to an aws-endpoint-override file containing

http://localhost:4566
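
A hypothetical end-to-end use of that alias, where the file path is just an example:

# Point SOME_CONFIG_FILE at any readable file containing only the endpoint URL
export SOME_CONFIG_FILE="$HOME/aws-endpoint-override"
echo "http://localhost:4566" > "$SOME_CONFIG_FILE"

aws s3 ls    # the alias expands this to: aws --endpoint-url http://localhost:4566 s3 ls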

Original Answer

Thought I'd share an alternative version of the alias:

alias aws='aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} '
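
For illustration, toggling the override then looks something like this (the LocalStack URL is just an example):

export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
aws s3 ls    # expands to: aws --endpoint-url http://localhost:4566 s3 ls

unset AWS_ENDPOINT_OVERRIDE
aws s3 ls    # expands to a plain: aws s3 ls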

I replicated this idea from another alias I use for Terraform:

alias terraform='terraform ${TF_DIR:+-chdir=$TF_DIR} '

I happen to use direnv, with a /Users/darren/Workspaces/current-client/.envrc containing:

source_up      # load the parent directory's .envrc first

PATH_add bin   # prepend ./bin (resolved to an absolute path) to PATH

export AWS_PROFILE=saml
export AWS_REGION=eu-west-1

export TF_DIR=/Users/darren/Workspaces/current-client/infrastructure-project

...

A possible workflow for AWS-endpoint overriding could entail cd'ing into a docker-env directory, where /Users/darren/Workspaces/current-client/app-project/docker-env/.envrc contains

source_up

...

export AWS_ENDPOINT_OVERRIDE=http://localhost:4566

where LocalStack is running in Docker, exposed on port 4566.
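
For completeness, a minimal way to get LocalStack listening on that port (standard LocalStack image and edge port, adjust as needed):

# Run LocalStack in Docker, exposing its edge port 4566 on localhost
docker run --rm -d -p 4566:4566 localstack/localstack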

You may not be using Docker or LocalStack, etc., so ultimately you will have to provide the AWS_ENDPOINT_OVERRIDE environment variable through whatever mechanism, and with whatever value, suits your use case.

Darren Bishop
  • Note that, within `.envrc`, you can define the value for `AWS_ENDPOINT_OVERRIDE` using other environment variables or any means supported by Bash; the benefit here is that the updates to the value are triggered by `cd`'ing between directories and can take values derived (or not) from the directory `cd`'ed to – Darren Bishop May 28 '22 at 11:08
1

I bumped into this issue; here are my findings.

Context

The team is writing Python code that will run in AWS Lambda.

Situation

The team wants to run the solution locally as deploying code changes in AWS means long feedback loops.

We decided to use LocalStack and Serverless to shorten the feedback loops.

But! Some automation is required to have a local development environment with all dependencies, including the ones above.

Problem

The LocalStack documentation suggests that the Lambda code contain the following:

import os

import boto3

...
# Inside LocalStack, LOCALSTACK_HOSTNAME is set, so point the resource at the local endpoint
if 'LOCALSTACK_HOSTNAME' in os.environ:
  dynamodb_endpoint = 'http://%s:4566' % os.environ['LOCALSTACK_HOSTNAME']
  dynamodb = boto3.resource('dynamodb', endpoint_url=dynamodb_endpoint)
else:
  dynamodb = boto3.resource('dynamodb')
...

There's a better way. Think about it: Lambda code should contain "business logic" only; it should be environment-agnostic and tooling-agnostic.

Solution

As boto3 is smart enough to load the awscli config and credentials files, AWS released a change that allows developers to define an endpoint_url in the ~/.aws/config file, as documented here.

In our case, the automation code will make sure the local environment has a config file with the following contents:

[profile localstack]
output = json
region = eu-west-2
endpoint_url = http://localhost:4566

This endpoint_url value is the first part of the queue URL returned by the command:

awslocal sqs create-queue --queue-name local-in-queue

So this is what my code using SQS looks like:

sqs_client = boto3.resource("sqs")

No need to define the endpoint_url argument. No need to set the AWS_PROFILE or AWS_REGION environment variables. When boto3 loads within my local environment, the "fake" Lambda has everything it needs to run fine.
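
As a quick sanity check from the shell (assuming a CLI version that also reads endpoint_url from the config file, per the comment on the top answer), the same profile works for the plain aws command:

# The CLI resolves endpoint_url from ~/.aws/config for this profile
aws --profile localstack sqs get-queue-url --queue-name local-in-queue
aws --profile localstack sqs list-queues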

raulra08