
I've been wondering: is there a technology that delays (slows down) responses to a given IP according to the rate at which it makes requests?

E.g. I have an Apache server with a "heavy" API service that I want to limit to 1 request / 2 seconds / IP while the server is not 100% loaded, or to a "fair usage" policy when it is fully loaded. Optionally, I would like to promote calls with specific ids, e.g. /req.php?id=157, with a "bonus rate" of, say, 10 requests/second.

Also, if someone exceeds e.g. 100 requests/hour, he should get an error response prompting him to upgrade, etc.
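
In pseudocode, the policy I'm after would look roughly like this (just a sketch; the two helper functions are placeholders for whatever would actually track the counters and the server load):

```php
<?php
// Sketch only -- requests_in_window() and server_fully_loaded() are
// placeholders, not existing functions.
function requests_in_window($ip, $seconds) { return 0; }     // stub
function server_fully_loaded()             { return false; } // stub

$ip = $_SERVER['REMOTE_ADDR'];
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

// Hard quota: 100 requests/hour, then ask the user to upgrade
if (requests_in_window($ip, 3600) > 100) {
    header('HTTP/1.1 503 Service Unavailable');
    exit('You have exceeded 100 requests/hour - please upgrade.');
}

// Promoted ids (e.g. id=157) get a "bonus rate" of 10 req/second,
// everyone else gets 1 request per 2 seconds
$limit = ($id == 157) ? 20 : 1;   // allowed requests per 2-second window

if (requests_in_window($ip, 2) > $limit) {
    if (server_fully_loaded()) {
        // "fair usage" branch when the server is saturated
        header('HTTP/1.1 503 Service Unavailable');
        exit('Server busy - please try again later.');
    }
    sleep(2); // otherwise just slow this caller down
}

// ... serve the heavy API call ...
```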

This sounds to me like a common requirement in many systems, so I would expect there to be relevant frameworks. Are you aware of any in PHP, Python, Java, or even as an Apache module?

neverlastn
  • possible duplicate of [Apache rate limiting options](http://stackoverflow.com/questions/131681/apache-rate-limiting-options) – sarnold Jul 07 '11 at 08:44

3 Answers


Slowing responses down is exactly what a DDoS is meant to achieve.

Don't help DDoSers reach their aim.

zerkms
  • He is really asking how he can configure SLA management in Apache. –  Jul 07 '11 at 05:46
  • @Soren: it cannot be applied at the application level as protection from DDoS, since the slower he responds, the more effective the DDoS is. – zerkms Jul 07 '11 at 05:48
  • Not true -- the way he describes it, he wants to slow down traffic from IPs which issue lots of requests, leaving bandwidth for users from other IPs -- however, that may create other networking problems which have to be dealt with elsewhere. –  Jul 07 '11 at 05:52
  • @Soren: the slower the connection is, the longer the process runs. Memory is a limited resource, so only a limited number of processes can run simultaneously. – zerkms Jul 07 '11 at 06:04
  • @Soren, @zerkms "Memory is a limited resource" - I've been thinking about this and it is indeed true with the Apache architecture. If I show a twitter-like "server is busy - try later" message instead of sleep()ing, the memory problem gets solved. What I was thinking is memcaching the requests/IP (RIP(user)) and the total server requests (TSR) and applying a formula like DELAY(sec) = 3600 / ((LIM - TSR) + 1) * 0.3 / (1 - (RIP(user)/TT)), where TT is the user's quota and LIM is the maximum number of requests the system can serve (a numeric sketch of this follows below). This delay can be used either as a delay or as a probability of rejecting a request. – neverlastn Jul 07 '11 at 20:29
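
A quick numeric sketch of the delay formula from that last comment; the variable names and the 0.3 factor come from the comment, while the sample numbers are made up:

```php
<?php
// Delay formula from the comment above, with invented sample numbers.
$LIM = 10000; // maximum requests/hour the server can provide
$TSR = 6000;  // total server requests so far this hour
$TT  = 100;   // this user's hourly quota
$RIP = 40;    // this user's requests so far this hour

// (A real implementation would guard against RIP >= TT, which makes the
//  denominator zero or negative.)
$delay = 3600 / (($LIM - $TSR) + 1) * 0.3 / (1 - ($RIP / $TT));

printf("delay: %.3f seconds\n", $delay); // ~0.45 s for these numbers
```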

This question/answer has the information:

https://stackoverflow.com/questions/131681/apache-rate-limiting-options

Soren
  • Very nice post. Very relevant despite some dead links. I'm not sure if any of those modules really gives the solution with the granularity I need. – neverlastn Jul 07 '11 at 20:36
  • This also looks relevant: http://aicoder.blogspot.com/2009/07/hacking-apaches-modproxyhttp-to-enforce.html – neverlastn Jul 07 '11 at 20:40

Doing this at the Apache level is not only impractical but introduces a huge overhead. If you need to rate limit per IP, the best way is to do it within the OS's firewall (IPFW on FreeBSD, for example).

Not only will it be more flexible in the long run; since it runs at the system level, the filtering is done at lightning-fast speeds.

As for the actual API implementation, you should handle this within your API application, not Apache. Record requests to a fast medium like memcache and have cron retrieve and store the data in a database for processing. When user XYZ reaches the threshold, simply impose it during the handshake or on the next request.
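
A minimal sketch of the memcache side of this, assuming the PECL Memcached extension and a memcached server on localhost; the 100 requests/hour threshold is the one from the question:

```php
<?php
// Per-IP hourly counter in memcache; assumes the PECL "Memcached"
// extension and a memcached instance on 127.0.0.1:11211.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$ip   = $_SERVER['REMOTE_ADDR'];
$hour = floor(time() / 3600);
$key  = "api_hits:{$ip}:{$hour}";   // one counter per IP per hour

$mc->add($key, 0, 7200);            // create the counter if it does not exist yet
$hits = $mc->increment($key);

if ($hits !== false && $hits > 100) {
    header('HTTP/1.1 503 Service Unavailable');
    exit('Request limit reached - please upgrade your plan.');
}

// ... serve the API request; a cron job can later sweep these counters
// into the database for reporting, as described above ...
```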

Aleksey Korzun