
I've been reading the CloudFront docs and I want to make sure that my plan is reasonable. I have a backend API structured as an EC2 HTTP server with frequently updating content (several changes per second). This is my understanding:

  • I shouldn't expose this HTTP server directly to clients, because doing so leaves the EC2 instance vulnerable to DDoS attacks.
  • Adding a layer of indirection with CloudFront edge locations helps defend against DDoS, because AWS can apply filtering at the outside of its network rather than right in front of my EC2 instance.
  • By setting Maximum TTL = 0, I ensure that CloudFront acts purely as an indirection layer and doesn't do any actual caching, so users always get up-to-date information.
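For concreteness, the third bullet can be expressed as a cache behavior with all TTLs set to 0. This is a minimal sketch of a CloudFormation fragment, assuming a hypothetical custom origin at api.example.com (not the asker's actual setup); forwarding all headers is consistent with caching being disabled:

```yaml
Distribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Origins:
        - Id: api-origin
          DomainName: api.example.com   # placeholder for the EC2 instance / load balancer
          CustomOriginConfig:
            OriginProtocolPolicy: https-only
      DefaultCacheBehavior:
        TargetOriginId: api-origin
        ViewerProtocolPolicy: redirect-to-https
        MinTTL: 0        # all three TTLs at 0: CloudFront never serves from cache
        DefaultTTL: 0
        MaxTTL: 0
        ForwardedValues:
          QueryString: true
          Headers: ['*']  # pass every request header through to the origin
```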

Are these assumptions correct / does my plan sound reasonable? It seems from reading online that this is a nonstandard use of Cloudfront.

rampatowl

1 Answer


This is a perfectly reasonable plan.

It isn't the primary use case for which AWS markets CloudFront (as a CDN), but it is well within the design scope of the product.

Amazon CloudFront accepts expiration periods as short as 0 seconds (in which case Amazon CloudFront will revalidate each viewer request with the origin). Amazon CloudFront also honors special cache control directives such as private, no-store, etc.; these are often useful when delivering dynamic content that may not be cached at the edge.

https://aws.amazon.com/cloudfront/dynamic-content/
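To illustrate the cache control directives mentioned above, here is a minimal sketch (a hypothetical stand-in for the asker's backend, using only the Python standard library) of an origin that marks every response as uncacheable, so CloudFront revalidates each viewer request instead of serving from the edge:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicHandler(BaseHTTPRequestHandler):
    """Origin handler that forbids edge caching on every response."""

    def do_GET(self):
        body = b'{"status": "fresh"}'
        self.send_response(200)
        # Tell CloudFront (and browsers) never to serve this from cache.
        self.send_header("Cache-Control", "no-store, no-cache, must-revalidate")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# To run locally:
# HTTPServer(("127.0.0.1", 8080), DynamicHandler).serve_forever()
```

With headers like these, Maximum TTL = 0 on the distribution and `no-store` from the origin say the same thing from both sides, which is a reasonable belt-and-suspenders setup for dynamic content.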

Of course, there is still some level of traffic that would overload your server, but, yes, this is a solid strategy.

Under the hood, API Gateway Edge-Optimized endpoints and the S3 Transfer Acceleration feature both use CloudFront with caching completely disabled. In both cases, you won't see CloudFront distributions in your console corresponding to these services, but this is how they work.

Michael - sqlbot