
We have two load balancers running HAProxy which serve requests to an application server. We've recently been getting spikes of traffic from script kiddies running vulnerability checks against our infrastructure. It's just a bunch of GET requests for files that don't exist (/phpMyAdmin). They only result in 15-20 requests/sec for our application server, but CPU load spikes to 100%. What's interesting is that we normally have 10-15 requests/sec, which we have no problem with. So, I'm a bit confused as to how these requests are causing so much damage.

The load balancers happily pass the traffic along, but the application server chokes and load and CPU usage skyrocket. We're hoping for some advice -- should we start looking at the Apache config, or could there be something unique about these requests that we can block at the load balancer level? I find it odd that our normal requests come in at almost the same rate, but they don't cause any additional CPU load.
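If blocking at the load balancer turns out to be viable, one sketch of what that could look like in HAProxy (assuming a 1.4-era config; the ACL name and the list of probe paths are illustrative, not exhaustive) is to reject the scanner paths in the frontend before they ever reach Apache:

```
frontend www
    bind *:80
    default_backend app_servers

    # Match common vulnerability-scanner probe paths (illustrative list)
    acl is_scanner path_beg -i /phpmyadmin /pma /myadmin

    # Return a 403 directly from HAProxy instead of forwarding to Apache
    block if is_scanner
```

This keeps the cheap 403 on the load balancer and never invokes the CMS's 404 handling, though it only helps for paths you know about in advance.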

Any help is appreciated.

  • Is the server having to do anything extra in response to the requests? E.g. generate a dynamic 404 page? Is Apache actually the top CPU user? Does the CPU load appear to be I/O bound or is it indeed a CPU bottleneck? – Garrett Dec 14 '11 at 23:03
  • Thanks for the question. Actually, since we are running a CMS, the invalid requests are getting a 302 redirect to a custom 404 page served by the CMS. Apache is indeed the top CPU user during the spikes. – Christopher Armstrong Dec 14 '11 at 23:05
  • I'm thinking rate-limiting at the HAProxy level may not be a bad idea... – Christopher Armstrong Dec 14 '11 at 23:05
  • If your system can't handle 15-20 requests/sec rate limiting isn't your solution: You're choking on relatively low traffic, and you're only going to piss off users by slowing them down even more. You need to attack the root cause. – voretaq7 Dec 14 '11 at 23:26
  • Thanks for your input, everyone. After looking at the logs once again, it appears that our apache **connections** per second does indeed spike considerably, to over 100. However, apache **requests** only spikes to 15-20. – Christopher Armstrong Dec 15 '11 at 15:05

1 Answer


Sometimes quantity doesn't matter as much as quality. From what you describe, your custom 404 page is apparently a bear for your system to produce, to the point that even a relatively low number of those pages being generated kills your environment.

Your choices in a situation like this are to add enough servers to handle that load (expen$ive), or simplify the custom 404 page so generating it is less painful -- maybe even make it static instead of having the CMS generate it (much more cost-effective).
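A minimal sketch of the static-404 option in Apache config (assuming the CMS's 302-to-404 handling can be bypassed; the file path is illustrative):

```
# httpd.conf / vhost config: serve a plain static file for 404s
# instead of letting the CMS issue a 302 to a dynamically generated page
ErrorDocument 404 /errors/404.html
```

Note that the argument should be a local path so Apache serves the file directly; if you give `ErrorDocument` a full URL, Apache responds with a redirect instead, which reintroduces the extra round trip.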

voretaq7