
I am looking to deploy a Python Flask app on an AWS EC2 (Ubuntu 20.04) instance. The app fetches data from an S3 bucket (in the same region as the EC2 instance) and performs some data processing.

I would prefer to use s3fs to connect to my S3 bucket. However, I am unsure whether this still lets me leverage the 'free data transfer' from S3 to EC2 in the same region, or whether I must use boto directly for the transfer to qualify.
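
For reference, here is roughly how I read the data with s3fs, alongside the boto3 equivalent I was considering (the bucket and key names below are placeholders):

```python
import s3fs
import boto3

BUCKET = "my-bucket"    # placeholder
KEY = "data/input.csv"  # placeholder

# s3fs: filesystem-style access; picks up the instance's IAM role credentials
fs = s3fs.S3FileSystem(anon=False)
with fs.open(f"{BUCKET}/{KEY}", "rb") as f:
    data_via_s3fs = f.read()

# boto3: a direct GetObject call against the same regional S3 endpoint
s3 = boto3.client("s3")
data_via_boto3 = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
```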

My app works when deployed with s3fs, but I would have expected the data transfer to be much faster, so I am wondering whether EC2 is somehow unable to fetch data from S3 "correctly" via s3fs.
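
To rule the library in or out as the bottleneck, I was planning to time the two approaches against the same object, something like this (reusing `fs`, `s3`, `BUCKET`, and `KEY` from the snippet above):

```python
import time

def timed_read(label, read_fn):
    # Measure wall-clock time for a single full-object read
    start = time.perf_counter()
    n = len(read_fn())
    print(f"{label}: {n} bytes in {time.perf_counter() - start:.2f}s")

timed_read("s3fs", lambda: fs.open(f"{BUCKET}/{KEY}", "rb").read())
timed_read("boto3", lambda: s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read())
```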


1 Answer


Communication between Amazon EC2 and Amazon S3 in the same region does not incur a Data Transfer fee. It does not matter which library you use.

In fact, communication between any AWS services in the same region will not incur Data Transfer fees.
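
If you want to confirm this yourself, a quick sanity check (the bucket name is a placeholder) is to compare the bucket's region with the endpoint your client resolves to; s3fs is built on botocore underneath, so it talks to the same endpoint:

```python
import boto3

s3 = boto3.client("s3")

# Region the bucket lives in (an empty LocationConstraint means us-east-1)
location = s3.get_bucket_location(Bucket="my-bucket")["LocationConstraint"] or "us-east-1"
print("Bucket region:", location)

# Endpoint the boto3 client (and, via botocore, s3fs) will actually call
print("Client endpoint:", s3.meta.endpoint_url)
```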

– John Rotenstein
  • Unless your EC2 instance resides in a public subnet and retrieves the data from S3 through its public IP address; then you do get charged for data transfer even if your EC2 instance and S3 bucket are in the same region. You can find this cost item in [Cost Explorer](https://aws.amazon.com/de/aws-cost-management/aws-cost-explorer/): group by _API Operation_ and look for the cost item called _PublicIP-In_. But this is irrespective of the library, of course. – Henrik Koberg Jul 08 '22 at 08:45
  • That sounds like a fee related to traffic going in & out of the VPC, which is not specifically limited to traffic to/from S3. – John Rotenstein Jul 08 '22 at 09:58
  • I agree, but the answer mentions data transfer costs caused by crossing regions, which isn't specifically limited to S3 either. – Henrik Koberg Jul 08 '22 at 18:48