I'm trying to run an AWS Batch job, but it fails when calling aws-cli
to copy data from S3 into the container. The error message is:
fatal error: Unable to locate credentials
My job definition has an execution role with two managed policies: AmazonS3FullAccess
and AmazonECSTaskExecutionRolePolicy. The container image is built from the default
ubuntu:22.04 image and has an entry point file similar to:
#!/bin/bash
set -ex
aws s3 cp ...
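To make the problem visible before the copy runs, a check along these lines could go at the top of the entry point (a sketch; check_creds_env is a hypothetical helper of mine, not part of aws-cli — it only reports which of the two container-credential variables the CLI's credential chain could use):

```shell
#!/bin/bash
# Diagnostic sketch: report which, if any, of the container-credential
# variables are visible to the AWS CLI's credential provider chain.
check_creds_env() {
  local var found=no
  for var in AWS_CONTAINER_CREDENTIALS_RELATIVE_URI AWS_CONTAINER_CREDENTIALS_FULL_URI; do
    if [ -n "${!var:-}" ]; then
      echo "found: $var"
      found=yes
    fi
  done
  [ "$found" = yes ] || echo "no container credential endpoint set"
}
check_creds_env
```

In my case this would print the "no container credential endpoint set" line, matching the declare -x output below.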
I've also been reading the following question: ECS Fargate task not applying role, which states that the container should have a variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, but I don't have that. I've added a declare -x to my entry point, and this is its output when I execute the Batch job:
declare -x AWS_BATCH_CE_NAME="MyCluster"
declare -x AWS_BATCH_JOB_ATTEMPT="1"
declare -x AWS_BATCH_JOB_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
declare -x AWS_BATCH_JQ_NAME="MyQueue"
declare -x AWS_DEFAULT_REGION="us-west-2"
declare -x AWS_EXECUTION_ENV="AWS_ECS_FARGATE"
declare -x AWS_REGION="us-west-2"
declare -x DEBIAN_FRONTEND="noninteractive"
declare -x ECS_CONTAINER_METADATA_URI="http://111.111.111.1/v3/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxx"
declare -x ECS_CONTAINER_METADATA_URI_V4="http://111.111.111.1/v4/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxx"
declare -x HOME="/root"
declare -x HOSTNAME="ip-111-11-1-111.us-west-2.compute.internal"
declare -x OLDPWD
declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/"
declare -x SHLVL="1"
Also, when setting up a Fargate cluster, I can see that a Task Definition has a "Task Role" in addition to the execution role. My understanding is that the "Task Role" is the role assumed by code running inside the container, while the execution role is used to set up the container itself. In Batch, there is no such "Task Role". So my question is: how can I authorize my container to access my AWS resources with aws-cli inside the container?