
I have a VPS running Nginx that currently hosts a few websites. As you know, a VPS has limited resources, and security measures are the client's responsibility.

I just noticed there are many stress-testing tools out there that can crash a web server, or make it eat up all its resources until it hangs. I have LoadUI on my Windows PC, and there are similar online services too, like LoadImpact.com.

You don't even need to run ten or a thousand tools at the same time. Even a kid can enter a domain name into one of these tools, run the test with tons of concurrent connections, and exhaust the server's bandwidth, hardware resources, etc.

I want to know: how should I prevent these flooding attacks? Is it something that should be handled by iptables, or by Nginx?

xperator

2 Answers


That you are already running nginx is a good start - event-based servers are much more resilient against Slowloris-type attacks.

Still, it's a good idea to stop DoS attacks as far away from your application as possible. The next step is iptables.

You need to think about how you classify attacks and differentiate them from real traffic - the speed at which new connections are being created is a very good indicator - and you can configure iptables to limit new connections on a per-IP basis:

iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -i eth0 -m state --state NEW -m recent \
         --update --seconds 30 --hitcount 80  -j DROP

(drops new connection requests from an IP once it exceeds 80 within 30 seconds)

You can also limit the number of concurrent connections per IP address:

iptables -A INPUT -p tcp --syn --dport 80 -m connlimit \
      --connlimit-above 20 -j REJECT --reject-with tcp-reset

It's also a good idea to limit bandwidth - depending on your traffic profile, to, say, 10% of the available bandwidth - but this is done using tc rather than iptables.
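As a minimal tc sketch (assuming the interface is eth0 and you want to cap egress at roughly 10 Mbit/s - the rate, burst, and latency values are illustrative, tune them to your own uplink):

```shell
# Attach a token-bucket filter (tbf) qdisc to eth0, capping egress
# at 10 Mbit/s. Requires root; values here are illustrative only.
tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms

# Inspect or remove the qdisc later:
tc qdisc show dev eth0
tc qdisc del dev eth0 root
```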

Then, for the connections that get through, there may be characteristics in the HTTP request which identify an attack (referrer, URL requested, user-agent, accept-language, ...). It doesn't matter what specific values you pick for these just now - you just need to ensure you have the machinery in place so you can quickly change the parameters at the first sign of an attack. While you could handle the request on the webserver, a better solution is to block access from the remote IP address using iptables - fail2ban is the tool for bridging your log data to your iptables config.
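To sketch how fail2ban bridges logs to iptables (the jail name, log path, and thresholds below are illustrative assumptions, not values from this answer), a jail that bans IPs tripping nginx's request limit might look like:

```ini
# /etc/fail2ban/jail.local (illustrative values - tune to your traffic)
[nginx-req-limit]
enabled  = true
filter   = nginx-req-limit
action   = iptables-multiport[name=ReqLimit, port="http,https"]
logpath  = /var/log/nginx/error.log
findtime = 600
bantime  = 7200
maxretry = 10

# /etc/fail2ban/filter.d/nginx-req-limit.conf
# Matches the error-log line nginx emits when limit_req rejects a client.
[Definition]
failregex = limiting requests, excess:.* by zone.*client: <HOST>
```

The point is the shape of the setup, not these exact numbers: fail2ban tails the log, counts matches per IP, and adds/removes the iptables rule for you.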

Of course, for a large-scale DDoS this isn't going to solve the problem of the attackers stuffing your internet pipe with packets your server ignores - for that you need to speak to your upstream provider.

symcbean
  • Thanks. Very well explained. But I get this error when running your second rule : `xt_recent: hitcount (80) is larger than packets to be remembered (20)` – xperator May 10 '12 at 22:32
  • I thought changing `/sys/module/xt_recent/parameters/ip_pkt_list_tot` permission to 644 and its value to 80 would fix it, but then I lost the connection to the server a few seconds after doing that. Not sure why. I think I messed up this module. – xperator May 10 '12 at 23:03
  • Welcome to the joys of messing with IPTables while connected via SSH. I'm pretty sure we've all done this at least once. For me it was the discovery that `iptables -F` flushes the rules but leaves the policies set to `DROP`. Unless you have out-of-band access, call your hosting provider. – Ladadadada May 11 '12 at 05:31
  • Certainly it's a good idea to have the first rule in the chain to accept established/related packets (doesn't solve the flush problem, but does prevent a lot of other surprises) – symcbean May 11 '12 at 11:55

Two things I would recommend looking into are iptables rate limiting and fail2ban. Fail2ban gives you decent automatic blocking of IPs that hit your server too often, and lets you customize how long you want them banned. Iptables rate limiting allows you to throttle all types of traffic coming into your server. I found a decent article about it here; a basic Google search will turn up a lot more.

Edit: While I have no personal experience with nginx, I see that it has an HttpLimitReqModule that you should look into as well.
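For reference, a minimal HttpLimitReqModule configuration looks something like this (the zone name "perip" and the rate/burst values are assumptions to illustrate the syntax, not tuned recommendations):

```nginx
http {
    # Track clients by IP in a 10 MB shared zone; allow 5 requests/second.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

    server {
        listen 80;
        location / {
            # Queue up to 10 excess requests before rejecting with an error.
            limit_req zone=perip burst=10;
        }
    }
}
```

Rejected requests are logged to the error log, which is also what makes this module pair well with fail2ban.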

Eric
  • I used a few iptables rules a few months ago for a similar approach to preventing flooding, but it affected web browsing performance a lot. Not sure if I was using a wrong rule or misconfigured something. Regarding Nginx's LimitReqModule, I asked a question about that before; you can check my profile. I did a lot of tests but couldn't find the proper values - it slowed browsing again. – xperator May 10 '12 at 21:53