The aws command is
aws s3 ls --endpoint-url http://s3.amazonaws.com
Can I load the endpoint-url from a config file instead of passing it as a parameter?
This is an open bug in the AWS CLI. The bug report links to a CLI plugin which might do what you need.
It's worth pointing out that if you're just connecting to standard Amazon cloud services (like S3), you don't need to specify --endpoint-url at all. But I assume you're trying to connect to some other private service, and the URL in your example was just, well, an example...
alias aws='aws --endpoint-url http://website'
Here is an alternative alias to address the OP's specific need and the comments above:
alias aws='aws $([ -r "$SOME_CONFIG_FILE" ] && sed "s,^,--endpoint-url ," $SOME_CONFIG_FILE) '
The SOME_CONFIG_FILE environment variable could point to an aws-endpoint-override file containing:
http://localhost:4566
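To see what the alias actually does, here is a sketch of the command substitution it performs, using a hypothetical /tmp path for the override file (any readable path exported as SOME_CONFIG_FILE works the same way):

```shell
# Hypothetical location for the override file.
SOME_CONFIG_FILE=/tmp/aws-endpoint-override
echo 'http://localhost:4566' > "$SOME_CONFIG_FILE"

# This is the substitution the alias runs: if the file is readable,
# sed prefixes each line with "--endpoint-url ", producing the extra
# arguments that get spliced into the aws invocation.
extra_args=$([ -r "$SOME_CONFIG_FILE" ] && sed 's,^,--endpoint-url ,' "$SOME_CONFIG_FILE")
echo "$extra_args"
# → --endpoint-url http://localhost:4566
```

If the file is absent or unreadable, the substitution yields nothing and `aws` runs unmodified.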
Thought I'd share an alternative version of the alias:
alias aws='aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} '
I replicated this idea from another alias I use for Terraform:
alias terraform='terraform ${TF_DIR:+-chdir=$TF_DIR} '
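Both aliases rely on the standard `${VAR:+word}` shell expansion, which produces `word` only when `VAR` is set and non-empty; otherwise it produces nothing, so the flag vanishes entirely. A quick sketch of the behaviour:

```shell
# With the variable unset, the ${VAR:+...} expansion collapses to
# nothing, so no --endpoint-url flag is injected.
unset AWS_ENDPOINT_OVERRIDE
echo "aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} s3 ls"

# With the variable set, the whole "word" (flag plus value) appears.
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
echo "aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} s3 ls"
# → aws --endpoint-url http://localhost:4566 s3 ls
```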
I happen to use direnv with a /Users/darren/Workspaces/current-client/.envrc containing:
source_up
PATH_add bin
export AWS_PROFILE=saml
export AWS_REGION=eu-west-1
export TF_DIR=/Users/darren/Workspaces/current-client/infrastructure-project
...
A possible workflow for AWS-endpoint overriding could entail cd'ing into a docker-env directory, where /Users/darren/Workspaces/current-client/app-project/docker-env/.envrc contains:
source_up
...
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
where LocalStack is running in Docker, exposed on port 4566.
You may not be using Docker or LocalStack, etc., so ultimately you will have to provide the AWS_ENDPOINT_OVERRIDE environment variable via whatever mechanism, and with whatever value, suits your use case.
I bumped into this issue; here are my findings.
Context
The team is writing Python code that will run in AWS Lambda.
Situation
The team wants to run the solution locally as deploying code changes in AWS means long feedback loops.
We decided to use LocalStack and Serverless to shorten the feedback loops.
But! Some automation is required to have a local development environment with all dependencies, including the ones above.
Problem
The LocalStack documentation suggests that the Lambda code contain the following:
...
if 'LOCALSTACK_HOSTNAME' in os.environ:
    dynamodb_endpoint = 'http://%s:4566' % os.environ['LOCALSTACK_HOSTNAME']
    dynamodb = boto3.resource('dynamodb', endpoint_url=dynamodb_endpoint)
else:
    dynamodb = boto3.resource('dynamodb')
...
There's a better way. Think about it: Lambda code should contain "business logic" only; it should be environment-agnostic and tooling-agnostic.
Solution
As boto3 is smart enough to load the awscli config and credentials files, AWS released a change that allows developers to define an endpoint_url in the ~/.aws/config file, as documented here.
In our case, the automation code will make sure the local environment has a config file with the following contents:
[profile localstack]
output = json
region = eu-west-2
endpoint_url = http://localhost:4566
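To illustrate the file format boto3/awscli parses, here is a small sketch that writes the profile above to a temporary file and reads it back with Python's standard `configparser` (the temporary path is a stand-in for ~/.aws/config; this only demonstrates the INI structure, not boto3's own resolution logic):

```python
import configparser
import os
import tempfile

# The profile from the answer above, as it would appear in ~/.aws/config.
config_text = """\
[profile localstack]
output = json
region = eu-west-2
endpoint_url = http://localhost:4566
"""

# Write it to a throwaway file standing in for ~/.aws/config.
with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(config_text)
    path = f.name

# The config file is plain INI, so configparser can read it; boto3
# looks up the section named "profile <name>" for non-default profiles.
parser = configparser.ConfigParser()
parser.read(path)
print(parser["profile localstack"]["endpoint_url"])  # http://localhost:4566

os.unlink(path)
```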
This endpoint_url value is the first bit of the output from the command:
awslocal sqs create-queue --queue-name local-in-queue
So this is what my code using sqs looks like:
sqs_client = boto3.resource("sqs")
No need to define the endpoint_url argument, and no need to set the AWS_PROFILE or AWS_REGION OS environment variables. When boto3 loads within my local environment, the "fake" Lambda has everything it needs to run fine.