0

I am configuring a server to be used as an nginx server for a very heavy-traffic website. It is expected to receive traffic from a large number of IP addresses simultaneously: around 500 requests/second, with at least 20 million unique IPs connecting to it.

One of the problems I noticed on my previous server was related to iptables / ip_conntrack. I am not familiar with this behaviour and would be glad to know which parameters of an Ubuntu / Debian (32/64-bit) machine I should tweak to get maximum performance from the server. I can put a lot of RAM in the server, but the mission-critical requirement is response time. We ideally don't want any connection hanging, timing out, or waiting, and want the overall response times to be as low as possible.

Zypher
  • 37,405
  • 5
  • 53
  • 95
Sparsh Gupta
  • 1,127
  • 7
  • 21
  • 31

3 Answers

0

Do you actually need iptables? If you are looking to get that much performance out of a single box, I'd suggest just removing it entirely. If you carefully configure the machine by removing all services except for nginx, configuring SSH to listen on a non-public interface (VPN, lan, etc), then you might be able to get away without a firewall. That would at least get rid of your one issue.
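If removing iptables entirely isn't palatable, the conntrack table can at least be sized for the load instead of running with the defaults. A sketch of the relevant sysctls, assuming a reasonably modern kernel (older kernels use the `net.ipv4.netfilter.ip_conntrack_*` names instead); the values are illustrative only and should be sized to your RAM and traffic:

```
# /etc/sysctl.conf fragment -- illustrative values, not a recommendation
# Maximum number of tracked connections (default is often far too low
# for 20M unique client IPs):
net.netfilter.nf_conntrack_max = 1048576
# Expire TIME_WAIT entries quickly so short-lived HTTP connections
# don't fill the table:
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30
# Don't hold idle established entries for the default 5 days:
net.netfilter.nf_conntrack_tcp_timeout_established = 600
```

Apply with `sysctl -p`. Note that the conntrack hash table size (the `hashsize` module parameter) should generally be raised alongside `nf_conntrack_max`, or lookups degrade.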

Are you trying to do this on one webserver, or a few of them? Even a simple DNS round robin would help you spread the load to a few different machines. You would definitely want multiple servers for reliability as well.
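For reference, DNS round robin is nothing more than publishing multiple A records for the same name; resolvers and clients spread themselves across them. A hypothetical zone fragment (the hostname and the 192.0.2.x documentation addresses are placeholders):

```
; zone fragment -- three web servers behind one name
www    IN  A  192.0.2.10
www    IN  A  192.0.2.11
www    IN  A  192.0.2.12
```

Keep in mind this gives you load spreading, not health checking: a dead server keeps receiving its share of clients until you pull its record and the TTL expires.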

devicenull
  • 5,622
  • 1
  • 26
  • 31
0

500 requests per second really isn't all that much, if all you're doing is serving relatively small, static files. On the other hand, if they're large, or complex—session-based or DB-dependent, for example—then that's quite a workload.

Consider standing up a reverse proxy like Varnish in front of this solution, set to use a malloc pool as cache. Properly-tuned VCL would allow you to buffer most of the site in memory, meaning that nginx would only have to serve a few, select bits. Also be sure to set noatime on the filesystem.
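A sketch of what that might look like, assuming nginx is moved to a backend port and Varnish takes over port 80 (the 4G cache size and the paths are illustrative, not a recommendation):

```
# Varnish with an in-memory (malloc) storage backend in front of nginx:
#   -a  address Varnish listens on
#   -b  backend (nginx, moved off port 80)
#   -s  storage: malloc pool sized to the site's working set
varnishd -a :80 -b 127.0.0.1:8080 -s malloc,4G

# And noatime on the filesystem holding the document root,
# e.g. in /etc/fstab (device and mount point are hypothetical):
#   /dev/sda3  /var/www  ext4  defaults,noatime  0 2
```

With most of the hot content served from the malloc pool, nginx only sees cache misses, which is where the "few select bits" above come from.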

BMDan
  • 7,249
  • 2
  • 23
  • 34
0

This question is pretty broad. My best advice is to take a step back and think about how you are going to scale your application. Do you want to scale up (a few large servers), out (a lot of little servers), or possibly a combination? Once you figure out your scaling strategy, you can design an HA strategy around it as well.

I highly doubt you will be seeing 20MM uniques the second the site launches (just to give you some perspective, that would make it at least a top-200 website).

Have a good plan to scale with your traffic, and don't run at the edge of your servers' ability: allow yourself some headroom for spikes, and time to get new equipment in as your traffic grows.

We get these questions every now and again. It's good to be thinking of the future, but don't plan for an infrastructure that can handle 20MM/60MM/100MM uniques off the bat; you'll be wasting your money on infrastructure that largely sits there idle.

Now to answer your question: we at Stack Overflow (currently) use iptables and are running the conntrack modules on our front-end routers with no issue. I would suggest posting a new question with details on the exact problem you are seeing when running iptables/conntrack under load.

And finally, some good reading:

How we run S[OFU]/Stack Exchange
High Scalability Blog

Zypher
  • 37,405
  • 5
  • 53
  • 95
  • Thanks Zypher for the answer. We already have an architecture which is doing around 400-420req/second. But the current architecture is giving us loads of troubles. We have a Varnish setup but when we moved traffic to varnish directly, we started getting a lot of Waiting Connections. Our munin graph gave us an idea that something is going wrong with Ipconn settings. – Sparsh Gupta Dec 22 '10 at 14:50