
I'm working on scaling up a front-end proxy server, and up until yesterday I was using Squid as the reverse-proxy (even though basically nothing was being cached, i.e. Squid was proxying only). Today I tried changing to nginx and I've noticed that I'm hitting ip_conntrack limits a lot more quickly.

As a short-term workaround I'm just raising the ip_conntrack limits (as per http://rackerhacker.com/2008/01/24/ip_conntrack-table-full-dropping-packet), but I was wondering if anyone here knows why nginx hits these limits so much more quickly, and whether anything can be done to rectify it (i.e. have connections ejected from the tracking tables more quickly)?
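For reference, the workaround I applied amounts to raising the table size and shortening the established-connection timeout. These are sysctl names from the CentOS 5-era `ip_conntrack` module, and the values are just examples that should be tuned to your traffic:

```
# /etc/sysctl.conf — example values, not recommendations
net.ipv4.netfilter.ip_conntrack_max = 131072
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 3600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 30
```

Apply with `sysctl -p` after editing.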

Things in use are an up-to-date CentOS 5.5 box, nginx 0.8.53, and Squid 2.6. Everything is installed from RPMs (either core or EPEL).

Thanks in advance for any advice or enlightening discussion.

For my own reference, this other thread was useful on this topic: Determine nginx reverse-proxy load limits

glenc
2 Answers


Using ip_conntrack for port 80 is a waste of resources. Mark these packets as NOTRACK and use ip_conntrack for other ports.
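A minimal sketch of what this looks like in practice, using the `raw` table (which is consulted before conntrack). The port and direction here are assumptions; adjust them to match where your proxy actually listens:

```shell
# Skip connection tracking for HTTP traffic in both directions.
# Rules in the raw table run before conntrack gets a chance to
# create an entry, so these connections never enter the table.
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT     -p tcp --sport 80 -j NOTRACK
```

Note that untracked packets will also bypass any stateful (`-m state`) rules in your firewall, so make sure port 80 is covered by stateless rules if you need filtering there.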

Alexander Azarov
  • Thanks, man! I needed to avoid tracking the connections belonging to my NAT gateway, and you just provided the solution. – lvella Apr 09 '12 at 04:12

I can't find documentation directly tied to conntrack, but look at this snippet from Nginx documentation:

In a reverse proxy situation, max_clients becomes

max_clients = worker_processes * worker_connections/4

Since a browser opens 2 connections by default to a server and nginx uses the fds (file descriptors) from the same pool to connect to the upstream backend

Nginx's behavior with a default browser is to accept the two connections from it and open two connections to the backends (reverse proxying), generating four connections in total. That may be why conntrack is filling faster. Of course, this is just a semi-informed guess based on nginx worker behavior.
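To make the arithmetic concrete, here is the formula worked through with assumed (but typical) values; `worker_connections = 1024` is the common default and `worker_processes = 4` is just an example:

```python
# max_clients estimate for nginx as a reverse proxy, per the docs:
#   max_clients = worker_processes * worker_connections / 4
# The divisor 4 reflects 2 client-side connections (default browser
# behavior) plus 2 matching upstream connections, all drawn from the
# same fd pool -- and each of those 4 is a separate conntrack entry.
worker_processes = 4      # assumed example value
worker_connections = 1024 # common nginx default

max_clients = worker_processes * worker_connections // 4
print(max_clients)  # -> 1024
```

So each concurrently served browser can account for roughly four conntrack entries, which is consistent with the table filling about four times faster than the raw client count would suggest.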

coredump
  • That seems like a pretty reasonable explanation. I actually raised all the conntrack limits / hashsize by a 4x multiplier, so this should be about right if your theory is correct. – glenc Mar 29 '11 at 20:20