
I have several backends (one is nginx+passenger) to combine via ESI. Since I don't want to go without gzip/deflate and SSL, Varnish can't do the job out of the box. So I thought about the following setup:

http://img693.imageshack.us/img693/38/esinginx.png

What do you think? overkill?

Roland

5 Answers


Do you need varnish at all?

1. nginx can cache results on disk or in memcached
2. nginx has SSI
3. nginx has a fair load balancer (or ey-balancer)
4. Best practice says that HAProxy in front of nginx is a good move.

Don't forget about KISS: the more components your system has, the less stable it becomes.
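As a sketch of this Varnish-free route, the SSI and proxy-cache features mentioned above might be wired up roughly like this (directive names are standard nginx; the cache path, zone name, and backend port are assumptions for illustration):

```nginx
http {
    # On-disk cache zone for proxied responses (path and size are made up)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m;

    server {
        listen 80;

        location / {
            ssi on;                          # process <!--# include --> directives
            proxy_pass http://127.0.0.1:8080;  # hypothetical app backend
            proxy_cache app_cache;
            proxy_cache_valid 200 5m;        # cache successful responses for 5 minutes
        }
    }
}
```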

SaveTheRbtz

While I haven't personally used it, Nginx does have an ESI plugin:

http://github.com/taf2/nginx-esi


If ESI is an absolute must, I'd recommend the following setup:

User -> Nginx (gzip+proxy+SSL termination) -> Varnish (ESI) -> Nginx app server.

That way you don't have to delegate the SSL and gzip handling to one backend server and the ESI requests to another.

Have Varnish strip the Accept-Encoding header from incoming requests; that way your backends won't try to gzip (even if they're configured to do so), and Varnish can parse the backend response objects for ESI includes. Varnish then presents fully formed content to your Nginx proxy, which is left to handle compression and SSL delivery.
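A rough sketch of that Varnish part, in the Varnish 2.x-era VCL that was current when this was written (later versions express ESI via `beresp.do_esi` instead):

```vcl
sub vcl_recv {
    # Strip Accept-Encoding so backends return plain, uncompressed
    # responses that Varnish can parse for <esi:include> tags.
    remove req.http.Accept-Encoding;
}

sub vcl_fetch {
    # Process the backend response for ESI includes.
    esi;
}
```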

I've got a very similar setup running in production (without the SSL termination), and I've found it works quite gracefully.
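The front Nginx tier in this layout might look roughly like the following sketch (the certificate paths and the Varnish port are assumptions, not values from the thread):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/example.crt;  # hypothetical cert
    ssl_certificate_key /etc/nginx/example.key;

    # Compress responses coming back from Varnish before delivery.
    gzip on;
    gzip_types text/plain text/css application/javascript;

    location / {
        proxy_pass http://127.0.0.1:6081;  # Varnish (ESI) tier
    }
}
```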

flungabunga
  • Then your ESI pages won't be gzipped? – Joris Sep 17 '10 at 06:16
  • Yes they are: Nginx still receives the Accept-Encoding header, so it takes the response from the Varnish server (be it ESI, static, or dynamic) and gzips it. – flungabunga Oct 27 '10 at 00:53

Based on the diagram, I'm not sure exactly what you're trying to do (what is ESI?). However, there's a small, fast load-balancing front-end server called "pound" that will handle the SSL layer for you. It could sit alongside Varnish on the front end on port 443 (I assume you have Varnish on port 80?) and pass the SSL traffic directly to nginx (SSL can't be cached anyway, so there's no point in going through Varnish). Normal, unencrypted traffic would go to Varnish as expected.
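A minimal Pound configuration for that role might look like this sketch (the certificate path and the nginx back-end port are assumptions):

```
# Terminate SSL on 443 and hand decrypted traffic to nginx.
ListenHTTPS
    Address 0.0.0.0
    Port    443
    Cert    "/etc/pound/example.pem"

    Service
        BackEnd
            Address 127.0.0.1
            Port    8080
        End
    End
End
```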

Geoff Fritz
  • +1 for pointing out that SSL-encrypted traffic can't be cached properly, because it is encrypted using different keys per connection. Varnish should be placed between the nginx frontend server and the reverse proxy where SSL is terminated, but that architecture is more complicated. – sumar Dec 19 '09 at 01:11
  • Adding a forking proxy in front of Nginx would cut performance. Just configure Varnish to not answer requests on port 443 (it won't anyhow) and let Nginx handle SSL. –  Apr 01 '10 at 19:25
Server       Requests per second
--------------------------------
G-WAN Web server       142,000 
Lighttpd Web server     60,000
Nginx Web server        57,000
Varnish Cache server    28,000

Save yourself the hurdle (and the bloat) of another intermediate layer. Just using a better server seems to be more efficient.

  • That link is all about serving small static files, which is not really what the original post is about; caveat emptor, etc. – nickgrim May 31 '11 at 10:59