
I have a setup of a VPS with several websites. Some of the websites are WP websites and some of them are other dynamic websites.

I'm interested in adding a sort of reverse proxy/caching layer. However I don't want to cache all the websites...

I've seen that a lot of people recommend using Varnish. The problem I found with Varnish is that it takes over port 80 and caches everything.

When I was looking for solutions, or ways to avoid caching for some websites, I found out about fastcgi_cache. Apparently, you can cache PHP responses to disk directly from Nginx and then serve them statically. I've also seen somewhere that you can cache from Nginx into memcached, but I don't know how yet.
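For reference, this is roughly the kind of fastcgi_cache setup I mean (the zone name, cache path, domain and PHP-FPM socket below are just placeholders, not my actual config):

```nginx
# Shared cache zone, defined once at the http {} level.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=SITECACHE:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    listen 80;
    server_name www.cached-site.example;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;

        # Cache successful PHP responses for 10 minutes and serve them
        # from disk without hitting PHP again.
        fastcgi_cache SITECACHE;
        fastcgi_cache_valid 200 10m;
    }
}
```

Websites that shouldn't be cached would simply get a server block without the fastcgi_cache directives.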

Anyway, here are my options:

1. Using Varnish and tweaking its config to pass requests through based on the domain name.
2. Using fastcgi_cache on Nginx.
3. Using a sandwich: Nginx listening on port 80, serving the static files and sending all the PHP requests to Varnish on another port, which in turn passes all the uncached requests to another Nginx instance.

What do you think I should do?

Thanks.

tounano
3 Answers


You can have Varnish pass some requests through based on the domain name; it's very easy, and Varnish won't noticeably slow down the websites you aren't caching.

You just add a little VCL code in vcl_recv like:

if (req.http.Host == "www.pass_this_thru.com") {
    return(pass);
}
Pax
    Along with what @Pax has mentioned, an equivalent `return (hit_for_pass)` needs to be added in the `vcl_fetch` block so that nothing is cached for that domain. There is no reason to cache responses when Varnish is never going to look them up. – Pothi Kalimuthu Jul 13 '12 at 04:26
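Putting the answer and the comment together, a sketch of both subroutines (VCL 3.x syntax, using the example domain from the answer above):

```vcl
sub vcl_recv {
    # Never look up the cache for this domain.
    if (req.http.Host == "www.pass_this_thru.com") {
        return (pass);
    }
}

sub vcl_fetch {
    # Don't store its backend responses either.
    if (req.http.Host == "www.pass_this_thru.com") {
        return (hit_for_pass);
    }
}
```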

I think this depends on the load you anticipate, but the models chosen by the two systems are architecturally different.

Firstly, nginx uses an event-based model to handle requests, whereas Varnish uses a thread-based model.

Varnish places its cached content in a very efficient critbit tree. I couldn't find out what implementation nginx uses.

nginx should be more efficient because it uses a non-blocking, event-based model to spread load evenly with as little contention as possible; however, if its cache lookups are much slower, you could argue that cancels things out.

Varnish creates thread pools (normally of many threads, 500 or so) to handle multiprocessing. The cost here is in context switching, especially if you have many requests to process.
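The pool sizes are tunable at startup; a sketch of the relevant varnishd parameters (the values here are purely illustrative, not recommendations):

```shell
# Two pools (one per core is a common rule of thumb), each growing
# from 50 up to 500 worker threads on demand.
varnishd -a :6081 -b localhost:8080 \
    -p thread_pools=2 \
    -p thread_pool_min=50 \
    -p thread_pool_max=500
```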

The way I see it, Varnish will perform better as you scale up the number of cores to battle the contention, and its very good caching algorithm makes its lookups and responses very fast. Use Varnish if you have lots of cores and very high traffic/content to deliver.

Nginx, on the other hand, takes a much less sledgehammer approach to managing resources, and I reckon that for small/medium caches on low-powered systems it will probably work out better value in terms of efficiency and requests per second.

Overall, Varnish works best on a dedicated system with at least 2 CPUs/cores. It will scale linearly as you add CPUs.

Nginx probably works best on a smaller, multi-role system where the cache pressure is not as high. It will also scale linearly, but I suspect its caching algorithm and implementation are not as good as Varnish's, which could end up being a performance bottleneck as you reach high levels of traffic.

Matthew Ife
    Nginx uses a red-black tree for cache lookups. It is balanced and may perform better than crit-bit, especially for large caches. – VBart Jul 12 '12 at 10:23
    critbit will typically do better in large caches as it does bitwise walking, whereas red-black (typically) does integer comparison. critbit is very cache-line friendly, especially on SMP systems that can share L2 caches (like Intel CPUs). – Matthew Ife Jul 12 '12 at 16:56

I think this would be interesting to you: Test Results: Nginx & Varnish

VBart