
This is my current configuration...

proxy_buffering on;                          # buffer upstream responses
proxy_buffer_size 32k;                       # buffer for the first part of the response (headers)
proxy_buffers 128 32k;                       # per-connection buffers: 128 x 32k = 4 MB
proxy_send_timeout 20;                       # seconds
proxy_read_timeout 20;                       # seconds
#proxy_max_temp_file_size 1m;
proxy_temp_path /dev/shm/nginx_proxy_buffer; # spool temp files to tmpfs
proxy_pass $url;

I previously did not have proxy_buffering enabled, but my new servers show a very high percentage of software interrupts (%si), so the CPU becomes a bottleneck when my reverse proxy handles about 300 Mbit/s.

With proxy buffering enabled, the software interrupt load drops and I get transfer rates of almost the full gigabit the servers are connected with.

However, the incoming bandwidth is almost double the outgoing bandwidth! The rates fluctuate, of course, but on average the incoming rate is almost double the outgoing one, which I don't understand. This is very bad because my 95th-percentile billing takes the maximum of in and out...

My understanding is that if a user cancels a download, the data that has already been transferred from the source server into the buffer is lost, which could produce this behaviour. Still, it seems absurd that this alone would cause a 100% overage...
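If that is what is happening, one idea (just a sketch, I have not verified it on this setup) would be to keep nginx from reading much further ahead than the client, so less upstream data is wasted when a download is cancelled. With buffering to temporary files disabled, nginx reads from the upstream only as fast as the client consumes the response once the in-memory buffers are full:

proxy_buffering on;
proxy_buffers 16 32k;           # fewer in-memory buffers than my current 128
proxy_max_temp_file_size 0;     # don't spool to temp files; read ahead only as far as the buffers allow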

(graph: incoming vs. outgoing bandwidth over time)

Any input is appreciated!

The Shurrican
  • Are you sure that your graph is correct, and that the cyan line does not represent the sum of in and out? – VBart Nov 17 '12 at 16:20
  • Absolutely. Iptraf and vnstat confirm the stats. I just ran vnstat over two hours and the results are 136.19 Mbit/s rx and 97.81 Mbit/s tx – The Shurrican Nov 17 '12 at 17:08
  • ... maybe the backends are inefficiently drip-feeding nginx (in), which buffers and efficiently sends to clients (out)... could half-full TCP packets cause this? – KCD Jan 26 '14 at 21:13

1 Answer


Do you have gzip enabled for clients? That could account for the difference, since nginx <-> backend connections aren't compressed by default (I can't remember whether the recent HTTP/1.1 backend support and the gunzip filter module let you safely enable gzip between nginx and the backend server or not).
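If that turns out to be the cause, a rough sketch of what I mean (assuming your backend can serve gzip and your nginx build includes the gunzip filter module) would be to let the backend compress the upstream leg as well, and only decompress for clients that don't accept gzip:

proxy_http_version 1.1;                  # the HTTP/1.1 backend support mentioned above
proxy_set_header Accept-Encoding gzip;   # ask the backend for compressed responses
gunzip on;                               # decompress only for clients that don't send Accept-Encoding: gzip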

EDIT: This doesn't explain why you don't see this behavior with proxy_buffering disabled, though. Maybe more clients disconnect if they have to wait?

kolbyjack