0

Background: I'm building a web application using Amazon API Gateway, Amazon S3, AWS Lambda and so on.

Note: Even if you are not familiar with AWS, any advice would be highly appreciated.

While searching for ways to protect API Gateway from DDoS attacks, I've found keywords like AWS Shield, AWS WAF, and so on. Aside from those, though, I've hit upon an idea of my own. Googling the idea turns up nothing, so I can't be sure whether it is sound.
The idea is something like this:

Authenticated users get endpoints dynamically, meaning there is an endpoint whose job is to return the endpoints used to access resources. If one of those endpoints goes down because of a DDoS attack and users get a 503 error, the frontend automatically fetches a backup endpoint from "the endpoint to get endpoints", because I write the frontend code that way and create duplicate backup endpoints in Amazon API Gateway.
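
The frontend logic I have in mind is roughly like the sketch below (the URLs and paths are just placeholders, not real endpoints):

```typescript
// Rough sketch of the idea: ask a "discovery" endpoint for the list of resource
// endpoints, then fall back to the next one whenever a call fails with 503.
async function getEndpoints(): Promise<string[]> {
  const res = await fetch("https://discovery.example.com/endpoints"); // hypothetical URL
  return res.json(); // e.g. ["https://api-1.example.com", "https://api-2.example.com"]
}

async function callWithFallback(path: string): Promise<Response> {
  const endpoints = await getEndpoints();
  for (const base of endpoints) {
    try {
      const res = await fetch(`${base}${path}`);
      if (res.status !== 503) return res; // succeeded, or failed for another reason
    } catch {
      // network error: try the next backup endpoint
    }
  }
  throw new Error("All endpoints unavailable");
}
```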

I would like to know if this would work fine.

Nigiri
  • 103
  • 2

2 Answers

1

If you are worried about the endpoints behind the API GW, then the GW can be configured with a per-user limit, so an authenticated user cannot run more requests than you allow. You can also add parameter checks so malformed requests won't hit your backend.
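
As a rough sketch, a per-key limit could be set up with the AWS SDK for JavaScript along these lines (the plan name, limits and region below are just example values):

```typescript
// Sketch: create a usage plan that throttles each API key, then attach an
// existing key to it. Uses @aws-sdk/client-api-gateway (SDK v3).
import {
  APIGatewayClient,
  CreateUsagePlanCommand,
  CreateUsagePlanKeyCommand,
} from "@aws-sdk/client-api-gateway";

const client = new APIGatewayClient({ region: "us-east-1" }); // assumed region

async function createPerUserLimit(restApiId: string, stage: string, apiKeyId: string) {
  // Limits apply per API key, so one abusive user cannot exhaust the whole stage.
  const plan = await client.send(
    new CreateUsagePlanCommand({
      name: "per-user-plan",                       // hypothetical name
      apiStages: [{ apiId: restApiId, stage }],
      throttle: { rateLimit: 10, burstLimit: 20 }, // requests/second per key (example values)
      quota: { limit: 10000, period: "DAY" },      // daily cap per key (example value)
    })
  );

  // Associate the caller's API key with the plan.
  await client.send(
    new CreateUsagePlanKeyCommand({
      usagePlanId: plan.id!,
      keyId: apiKeyId,
      keyType: "API_KEY",
    })
  );
}
```

With the API key required on your methods, over-limit requests are rejected by the gateway before they ever reach your backend.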

Also, API GW is a fault-tolerant, highly available service, so you cannot bring it down (but you can run over budget); thus the GW endpoint (as it is visible from the outside world, like d123456.cloudfront.net) won't go down.

Putnik
  • 2,217
  • 4
  • 27
  • 43
  • hi @Putnik, thank you for your answer. Does the per-user limit you mentioned mean per IP address or something similar? I can see "Default Method Throttling" in the API GW configuration. Is that what you meant? – Nigiri Sep 20 '17 at 02:16
  • per-user means per key. Since that is the authentication method, each user has to provide a key to be served. "Default Method Throttling" is per stage (all users in total, including unauthorized ones if you allow them). – Putnik Sep 20 '17 at 14:14
0

Sounds like you're describing a CDN (Content Delivery Network). This is essentially a read-only copy of your static site that serves as your new frontend. The important aspect is that a CDN frontend no longer has any server-side code, as it is generated by scraping the site with a browser and re-presenting the interpreted result; as such it can be hosted in S3 where an EC2 instance with a web server was previously required, allowing you to better scale and control conditions under DDoS attacks. This is effective for static sites, but clearly won't work for dynamic applications.

If you're running dynamic apps and need to prevent these sorts of attacks, WAF is highly effective and has rules flexible enough to limit just about any kind of traffic. As you see these attacks happen, WAF will allow you to adapt without having to spin up expensive solutions such as an F5 ASM, for example. While API Gateway provides a highly fault- and over-capacity-tolerant solution to your problem, an attack can really mess up a bill. API Gateway has rules that will keep this over-use from happening, and combined with WAF it will allow you to lock that business down.
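
For example, a rate-based WAF rule that blocks flooding IPs could be sketched with the AWS SDK for JavaScript roughly like this (the ACL name, limit and region are example values):

```typescript
// Sketch: a WAFv2 web ACL with a rate-based rule that blocks IPs exceeding
// 2000 requests per 5-minute window. Uses @aws-sdk/client-wafv2 (SDK v3).
import { WAFV2Client, CreateWebACLCommand } from "@aws-sdk/client-wafv2";

const wafClient = new WAFV2Client({ region: "us-east-1" }); // assumed region

async function createRateLimitAcl() {
  await wafClient.send(
    new CreateWebACLCommand({
      Name: "api-rate-limit",       // hypothetical name
      Scope: "REGIONAL",            // REGIONAL for API Gateway; CLOUDFRONT for a CDN distribution
      DefaultAction: { Allow: {} }, // allow everything that no rule blocks
      VisibilityConfig: {
        SampledRequestsEnabled: true,
        CloudWatchMetricsEnabled: true,
        MetricName: "apiRateLimit",
      },
      Rules: [
        {
          Name: "block-flooding-ips",
          Priority: 0,
          Action: { Block: {} },
          Statement: {
            RateBasedStatement: { Limit: 2000, AggregateKeyType: "IP" }, // example limit
          },
          VisibilityConfig: {
            SampledRequestsEnabled: true,
            CloudWatchMetricsEnabled: true,
            MetricName: "blockFloodingIps",
          },
        },
      ],
    })
  );
}
```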

Finally, you might consider edge network caching. Edge cache servers give you this "endpoint of endpoints" behavior via globally distributed frontend caching servers. When your origin server goes down, your caches keep your sites, and potentially your applications, alive as they were before the downtime window. There are a few products that will do this, most notably CloudFront or Akamai.
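
CloudFront origin groups are one way to get that kind of failover; the fragment below is a rough sketch of the relevant piece of a distribution config (the origin IDs are placeholders, and this is only the origin-group portion, not a full config):

```typescript
// Hypothetical fragment of a CloudFront DistributionConfig: requests go to the
// primary origin and fail over to a backup when it returns 502/503/504.
const originGroupFragment = {
  OriginGroups: {
    Quantity: 1,
    Items: [
      {
        Id: "primary-with-failover", // hypothetical origin group ID
        FailoverCriteria: {
          StatusCodes: { Quantity: 3, Items: [502, 503, 504] },
        },
        Members: {
          Quantity: 2,
          Items: [
            { OriginId: "primary-api-origin" }, // hypothetical origin IDs
            { OriginId: "backup-api-origin" },
          ],
        },
      },
    ],
  },
};

console.log(JSON.stringify(originGroupFragment, null, 2));
```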

Between all of these solutions, you should have a large amount of resistance to DOS attacks of most kinds, with the ability to adapt to new kinds later on.

Spooler
  • 7,046
  • 18
  • 29