I am receiving "Could not connect to the endpoint URL: https://s3.amazonaws.com/" from inside an EC2 instance running in a private subnet.
Note: We are using our corporate shared AWS account rather than a federated account for this exercise.
Here is the configuration:
Created one VPC with one private subnet (attached to VPC endpoints for S3 and DynamoDB) and one public subnet (attached to an Internet Gateway). There is no NAT gateway or NAT instance.
Launched one EC2 instance (Amazon Linux AMI) in each subnet and attached IAM roles granting DynamoDB and S3 access to both instances.
Connected to the EC2 instance from a terminal and configured my access keys using
aws configure
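For reference, a rough CLI equivalent for creating these gateway endpoints would be something like the following (the VPC ID and route table ID are placeholders for my real values):

# S3 gateway endpoint
aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-xxxxxxxx

# DynamoDB gateway endpoint
aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.dynamodb --route-table-ids rtb-xxxxxxxx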
Policy for S3 VPC endpoint:
"Statement": [
{
"Action": "*",
"Effect": "Allow",
"Resource": "*",
"Principal": "*"
}
]
}
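To confirm this is really the policy attached to the endpoint, it can be inspected from a machine that can reach the EC2 API (vpce-xxxxxxxx is a placeholder for the endpoint ID):

# Show the policy document attached to the S3 gateway endpoint
aws ec2 describe-vpc-endpoints --vpc-endpoint-ids vpce-xxxxxxxx \
    --query "VpcEndpoints[].PolicyDocument" --output text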
A route was automatically added to the VPC route table with destination pl-xxxxxxxx (com.amazonaws.us-east-1.s3) and the newly created endpoint as the target.
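The prefix-list route can be checked the same way (rtb-xxxxxxxx is a placeholder for the route table associated with the private subnet):

# List routes; the pl-xxxxxxxx entry should point at the vpce-... gateway endpoint
aws ec2 describe-route-tables --route-table-ids rtb-xxxxxxxx \
    --query "RouteTables[].Routes" --output table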
Opened all outbound traffic in the security group for the private instance to the S3 prefix list destination (pl-xxxxxxxx). I then entered the following command on the private EC2 instance:
aws s3 ls --debug --region us-west-2
I got the following error:
"ConnectionTimeout: Could not connect to the endpoint URL https://sts.us-west-2.amazonaws.com:443"
I have read almost all the resources I could find on Google; they follow the same steps I have been following, but it is not working for me. The only difference is that they use a federated AWS account, whereas I am using a shared AWS account. The same applies to DynamoDB access.
Similar Stack Overflow question: Connecting to S3 bucket thru S3 VPC Endpoint inside EC2 instance timing out
But it did not help me much. Thanks a lot in advance.
Update: I was able to resolve the STS endpoint issue by creating an STS interface endpoint in the private subnet and then accessing DynamoDB and S3 by assuming a role inside the EC2 instance.
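For anyone hitting the same thing, a rough sketch of that fix (the service name below assumes the VPC is in us-east-1; the subnet, security group, and role ARN are placeholders):

# Interface endpoint for STS inside the private subnet
aws ec2 create-vpc-endpoint --vpc-id vpc-xxxxxxxx --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.sts \
    --subnet-ids subnet-xxxxxxxx --security-group-ids sg-xxxxxxxx \
    --private-dns-enabled

# Assume the role from inside the instance, then use the returned temporary
# credentials for the S3 and DynamoDB calls
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/my-role \
    --role-session-name my-session --region us-east-1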