
We have web servers running nginx 0.7.65 with PHP 5 over FastCGI, and we are looking into caching options to speed up content delivery and lower system load. The servers run different (custom) applications.

There are so many options for caching that I am not sure what a sane setup would be: memcached, APC, nginx's fastcgi_cache, proxy_cache...

I know memcached has distribution as a bonus, but we do not need that at this point. In my experience memcached performs slower than APC when installed on the same machine it serves, but that was some time ago.

I am not familiar with nginx's fastcgi_cache or even the regular proxy_cache module. Are they comparable to the above, or something completely different?

What would be a good, sane caching setup for nginx with FastCGI PHP 5?

Matt

4 Answers


If your box can handle the entire cache on its own, memcached will only slow you down. APC is shared memory; used right, it will blow away memcached. Nginx's FastCGI cache will make all dynamic PHP pages scream. Even if you set the cache validity to just 10 seconds, the maximum hit rate to any given PHP page becomes once every 10 seconds, which makes it practically impossible to crash a page with load. I run a bunch of websites on a single small box that gets millions of visitors per month, and all I need at this point is nginx caching and APC.
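
For illustration, here is a minimal sketch of such a short-lived FastCGI cache. The zone name, paths, backend address and the 10-second validity are assumptions, not a drop-in config:

```nginx
# http {} context: where cached responses are stored, plus the shared key zone
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=PHPCACHE:10m
                   max_size=256m inactive=60m;

server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;      # your PHP FastCGI backend

        fastcgi_cache PHPCACHE;           # use the zone defined above
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 10s;      # a page is regenerated at most once per 10 seconds
    }
}
```

With a validity that short the content still feels "live", but PHP only ever sees one request per URL per 10-second window.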

Memcache comes into the picture only when you have to scale your cache beyond a single box.

jake paine
  • Is this "Nginx's FastCGI cache will make all dynamic PHP pages scream" meant in a good or bad way? – Alix Axel Dec 02 '12 at 21:57
  • Meant in a good way. Effectively you are serving static files, because you cache the output and don't have to regenerate the page in PHP. – Jeff Widman May 17 '15 at 07:37

http://php-fpm.org/ is what we've used on a few recent installations rather than FastCGI itself.
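
For reference, hooking nginx up to PHP-FPM is mostly a matter of pointing fastcgi_pass at the FPM listen socket; a minimal sketch (the socket path is an example and depends on your pool config):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # the PHP-FPM pool's listen socket
}
```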

Nginx can serve pages right from memcached, so your application could write rendered pages directly to memcached. Otherwise, you'd need to make sure your code uses memcached for queries/objects. An opcode cache will also help, depending on your codebase: if you have a small set of scripts that are run repeatedly, APC or XCache (or eAccelerator in some cases) can provide a nice boost.
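
A hedged sketch of the "serve straight from memcached" idea using nginx's memcached module; the key scheme and the fallback location are assumptions, and your application has to store complete rendered pages under matching keys:

```nginx
location / {
    set $memcached_key "$uri";        # the app must have stored the rendered page under this key
    memcached_pass 127.0.0.1:11211;
    default_type text/html;
    # not in the cache (or memcached down): fall through to PHP
    error_page 404 502 504 = @php;
}

location @php {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root/index.php;
    fastcgi_pass 127.0.0.1:9000;
}
```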

Your caching method is determined by your code. Can you cache pages? Fragments? SQL results? Values? What is the lifetime of those entities, how much space do they need, and how large are the keys and result sets? As for memcached being slower than APC: since they don't really perform the same task, I'm not sure what you compared.

karmawhore
  • Our own code can be adapted to anything - which is why I ask. What would be good practice? My comparison was on Drupal with memcached vs. APC on the same box, with a caching module that can write to both (and to file). APC was noticeably quicker in this scenario. – Matt Jun 29 '10 at 20:44
  • APC and Memcache don't really handle the same thing, and I doubt that module used the true power of memcached/nginx, which would have blown away the performance of Drupal with APC. Caching is a very subjective issue and depends a lot on what data is presented that can be cached: certain data needs to be very 'live', whereas other data can be older. In some situations you end up having to use fragment caching to keep up. You really need to figure out what can be cached and work backwards from that. – karmawhore Jun 29 '10 at 21:10
  • Thanks - I understand my question seems somewhat backwards, but since there are so many caching options I am simply wondering which **in general** is a good setup for PHP powered apps. What I want to cache is mainly blocks of processed PHP that output html, so 'pages' basically. Not looking to cache SQL queries at this point. I take it memcached/nginx is very powerful as you eliminate the PHP layer completely to access the cached objects, but are the alternatives a lesser choice per se? Or should you go for opcache and memcached? Can it hurt to use both? – Matt Jun 30 '10 at 06:05
  • Yes, memcached and APC have quite different intents, but don't apc_store() and Memcache::set() basically do the same thing? So IMHO comparing their speed in this context is not nonsense (and has been done before). If there is any chance the site will span more than a single server in the future, memcached is the way to go. Nevertheless, you should still use an opcode cache (i.e. APC). – Halil Özgür Jun 08 '11 at 11:55

IMHO we sysadmins tend to approach this problem backwards, starting at the back end, because that's our turf. The most effective stuff is really at the front end. If you can get the browser cache (HTTP headers) and HTTP cache (CDN, headers again) parts right, you can do astonishingly sloppy things at the origin and be fine.
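
In nginx terms that mostly means sending long-lived Expires/Cache-Control headers on static assets so browsers and any CDN in front can keep them; a minimal sketch (the 30-day lifetime is illustrative):

```nginx
location ~* \.(css|js|png|jpg|jpeg|gif|ico)$ {
    expires 30d;                      # sets Expires and Cache-Control: max-age
    add_header Cache-Control "public";
}
```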

cagenut
  • True, but not all our apps can be customized at the code level. We also have some standard applications like forum software. In those scenarios we're looking for 'easy speed gains'. – Matt Jun 29 '10 at 20:46
  • Front-end caching is not code-level; it's usually web-server level. Setting proper expire and cache headers will help you a bit, though I disagree that it helps a lot when using nginx, as nginx is already awesome at handling static files. – Martin Fjordvald Jun 29 '10 at 20:53
  • @Matt, well if you can't customize the code then you can't do anything with APC or memcache either (unless it's built in already). @Martin F: it's extremely code-level when done "right" (via the app framework). Also, no server, no matter how fast, is faster at sending a file than *not* sending a file. More important than the individual user-experience aspect, though, is the capacity impact: HTTP requests never sent are HTTP requests your stack doesn't spend cycles on. – cagenut Jun 29 '10 at 21:02

Nowadays you can substitute PHP-FPM with HHVM, which combines the performance of PHP-FPM + APC and gives you amazing speeds: HHVM + nginx + fastcgi_cache.
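
A hedged sketch of that stack, assuming HHVM is running in FastCGI server mode on port 9000; the port and the cache details are assumptions, and the fastcgi_cache_path/zone setup is the same as in the earlier answer:

```nginx
location ~ \.(hh|php)$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;      # HHVM in FastCGI server mode
    fastcgi_keep_conn on;

    fastcgi_cache PHPCACHE;           # zone declared via fastcgi_cache_path, as shown above
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 60s;
}
```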

Tero Kilkanen
l33tcodes