
Recently I started working on securing access to my S3 bucket. I have two sources which I want to grant access to, while denying access from any other source.

In this case, the sources that should get access are my local IP (or a VPC IP range, for example) and a Lambda function.

I created the following S3 bucket Policy:

{
    "Version": "2019-10-10",
    "Statement": [
        {
            "Effect": "Deny",
            "NotPrincipal": {
                "AWS": [
                    "arn:aws:iam::480311425080:role/<lambda role name>",
                    "arn:aws:sts::480311425080:assumed-role/<lambda role name>/<Lambda function name>"
                ],
                "Service": "lambda.amazonaws.com"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<bucket name>/*",
            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "<ip adress>/32",
                        "<ip adress>/32"
                    ]
                }
            }
        }
    ]
}

So as you see, I'm using NotPrincipal to exclude the corresponding role and the Lambda function from being denied, and I use NotIpAddress to exclude my valid IPs from being denied.

With this policy I can still connect to S3 from my Lambda function, but I can also still connect to it from "supposedly non-authorized" IPs. So the condition

            "Condition": {
                "NotIpAddress": {
                    "aws:SourceIp": [
                        "<ip adress>/32",
                        "<ip adress>/32"
                    ]
                }
            }

does not work as expected.

Maybe you would suggest using only the role ARN for NotPrincipal, but that does not work either.

Setting the principal to the role only (without specifying the assumed-role ARN with the Lambda function name)

            "NotPrincipal": {
                "AWS": "arn:aws:iam::880719415082:role/lambda_s3_access"
            },

applies the IP filtering condition, but makes it impossible for the Lambda function to connect.

Any idea?

Kind regards,

Rshad

Rshad Zhran
  • Can you elaborate on this: "but also still to connect to it from "supposedly none authorized IPs"? How are you connecting to it in this case? – Ashaman Kingpin Oct 11 '19 at 12:23
  • Hi @Ashaman, connecting to the bucket is done through a presigned URLs in case of talking about external IPs, and in case of Lambda, I'm connecting using `boto3` – Rshad Zhran Oct 11 '19 at 13:38
  • So what you are saying is that if you remove "arn:aws:sts::480311425080:assumed-role//" from the NotPrincipal list of ARNs and keep the NotIpAddress list, it behaves as you want, i.e. it only allows external access from those IP address but the problem is that the lambda function no longer works? And if you only add back the lambda function, then the IP filtering stops working? – Ashaman Kingpin Oct 11 '19 at 13:54
  • till now it never behaves as I want* – Rshad Zhran Oct 11 '19 at 13:58
  • And yes I mean exactly what you have just mentioned – Rshad Zhran Oct 11 '19 at 13:59

2 Answers

0

Amazon S3 buckets are private by default. Therefore, there should be no need to use a Deny statement unless you wish to override permissions granted in another policy. (For example, Admins can access all buckets, but this bucket is an exception.)

To grant the Lambda function access to the Amazon S3 bucket:

  • Create an IAM Role with permission to access the bucket
  • Assign the IAM Role to the Lambda function
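
A minimal boto3 sketch of these two steps, reusing the `lambda_s3_access` role name and `examplebucket` bucket that appear elsewhere on this page as placeholders (the listed actions are just an example):

import json
import boto3

iam = boto3.client('iam')

# Trust policy so the Lambda service can assume the role
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

# Permissions policy limited to the one bucket (placeholder actions)
s3_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::examplebucket/*"
    }]
}

iam.create_role(
    RoleName='lambda_s3_access',
    AssumeRolePolicyDocument=json.dumps(assume_role_policy)
)

iam.put_role_policy(
    RoleName='lambda_s3_access',
    PolicyName='examplebucket-access',
    PolicyDocument=json.dumps(s3_access_policy)
)

The role is then selected as the function's execution role (in the Lambda console, or via the `Role` parameter of `update_function_configuration`).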

If you want an Amazon EC2 instance to also access the bucket, assign a similar role to the instance. There should, in general, be no need to assign bucket access "to a VPC". Instead, grant permission to the resources that wish to access the bucket.

To grant access to the bucket from your own (personal) IP address, create a Bucket Policy on the desired S3 bucket:

{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
         "IpAddress": {"aws:SourceIp": "54.240.143.55/32"}
      } 
    } 
  ]
}
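
If you prefer to apply the bucket policy from code rather than from the console, a minimal boto3 sketch (same `examplebucket` name and example IP as above) would be:

import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [{
        "Sid": "IPAllow",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "54.240.143.55/32"}}
    }]
}

# Attach the policy to the bucket (this overwrites any existing bucket policy)
s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='examplebucket', Policy=json.dumps(bucket_policy))
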
John Rotenstein
  • Hi John! Actually I tried with both Allow and Deny approaches, but didn't reach what I want. Please note that when accessing the S3 bucket using external IPs, I use `pre-signed` URLs, which provide public access by default. In simple words, what I need here is to restrict access to S3 via `pre-signed` URLs to a specific IP or range of IPs. – Rshad Zhran Oct 14 '19 at 08:05
  • I already checked your answer https://stackoverflow.com/a/51412946/7611357 which seems logical, but in my case the pre-signed URL is generated in Lambda, so if I restrict the IAM role assigned to Lambda to a specific external IP range, then Lambda would not be able to access S3. So what I think would be a possible solution is to add Lambda to a VPC and then add that `VPC` IP range to the IAM role as a valid source. – Rshad Zhran Oct 14 '19 at 08:24
  • It is strange that you wish to restrict a pre-signed URL! One way of doing this is to realise that a pre-signed URL grants access based upon the credentials used to _generate_ the pre-signed URL. So, you could modify the permissions associated with those credentials to have a condition that restricts by IP address. Then, the pre-signed URL will also have the same restriction. (A pre-signed URL is simply a way for a set of credentials to say "I authorise this", but they can only authorise what they are allowed to do themselves.) Does that make any sense, or should I explain some more? – John Rotenstein Oct 14 '19 at 08:33
  • Hi John, I found it, check the answer, please. Thanks for helping. – Rshad Zhran Oct 14 '19 at 11:34
0

I found a solution to the issue discussed in the comments. In the following, I describe the details.

Having a VPC ...

1. Add Lambda to the VPC. To do so, follow [1].
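
This can also be done programmatically; a rough boto3 sketch, with placeholder function, subnet and security-group IDs (the function's execution role also needs VPC networking permissions, e.g. the AWSLambdaVPCAccessExecutionRole managed policy):

import boto3

lambda_client = boto3.client('lambda')

# Attach the function to the VPC by giving it subnets and a security group.
# All IDs below are placeholders.
lambda_client.update_function_configuration(
    FunctionName='my-presign-function',
    VpcConfig={
        'SubnetIds': ['subnet-0123456789abcdef0'],
        'SecurityGroupIds': ['sg-0123456789abcdef0']
    }
)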

2. Create VPC endpoints for AWS S3 (and any other AWS service your Lambda function needs to access), so that Lambda can reach them from inside the VPC.
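
For S3 this is a gateway endpoint; a rough boto3 sketch, with placeholder VPC and route table IDs and us-west-2 assumed as the region:

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

# Gateway endpoint so S3 traffic from the VPC stays on the AWS network.
# VPC and route table IDs are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.us-west-2.s3',
    RouteTableIds=['rtb-0123456789abcdef0']
)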

3. Adapt the way the boto3 S3 client is created so that it uses path-style addressing instead of the default virtual-hosted-style addressing when connecting to S3. This is done by:

import boto3
from botocore.client import Config

s3 = boto3.client('s3', 'us-west-2', config=Config(s3={'addressing_style': 'path'}))

For details, check: [2]
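
Since the presigned URLs mentioned in the comments are generated with this same client, that step might look roughly like this (bucket name, object key and expiry are placeholders):

import boto3
from botocore.client import Config

# Same path-style client as above
s3 = boto3.client('s3', 'us-west-2', config=Config(s3={'addressing_style': 'path'}))

# Presigned GET URL for one object, valid for one hour
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'examplebucket', 'Key': 'some/object.txt'},
    ExpiresIn=3600
)
print(url)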

EDIT 15/10/2019

We've just checked this: by creating a VPC endpoint for S3, Lambda is able to connect to S3 without changing the way the client is created with boto3. We continue with the old form:

s3 = boto3.resource('s3')

and then we discard:

s3 = boto3.client('s3', 'us-west-2', config=Config(s3={'addressing_style': 'path'}))

END EDIT 15/10/2019

4. Create a new policy for the IAM role that is used by Lambda and with which the presigned URLs are created, so that we restrict access to our IP ranges and to our VPC (to guarantee access for Lambda):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "******/32",
                        "******/32"
                    ]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpc": "vpc-******"
                }
            }
        }
    ]
}
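
One way to put this policy onto the Lambda role is a customer-managed policy attached to the role; a rough boto3 sketch, with placeholder policy name, role name, IP and VPC ID:

import json
import boto3

iam = boto3.client('iam')

restrict_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.10/32"]}}
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:sourceVpc": "vpc-0123456789abcdef0"}}
        }
    ]
}

# Create the policy and attach it to the role used by the Lambda function
response = iam.create_policy(
    PolicyName='s3-access-from-ips-and-vpc',
    PolicyDocument=json.dumps(restrict_policy)
)
iam.attach_role_policy(
    RoleName='lambda_s3_access',
    PolicyArn=response['Policy']['Arn']
)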

REFERENCES:

[1] https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html

[2] https://boto3.amazonaws.com/v1/documentation/api/1.9.42/guide/s3.html#changing-the-addressing-style

Rshad Zhran