The title of this question might be confusing but the problem is simple.
I'm using Zend_Cache with memcached as a backend. I have two modules called "Last articles" and "Popular articles". Both of these modules appear on every page and use a similar query, such as:
SELECT * FROM table WHERE status = 'published' AND category = '' ORDER BY dateCreated (or popularity, depending on the module)
My table has 1.5 million rows so far. I have indexes on every field used in the query above. I cache the recent articles for 1 hour and the popular ones for 4 hours. I have 4 web servers (PHP5/Apache2) and 1 database server (MySQL). The table engine is InnoDB.
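For reference, each module currently does the usual load/save dance. Simplified, it looks roughly like this ($db is the Zend_Db adapter; the real cache IDs, memcached host and query are a bit different):

```php
// Simplified: 1h lifetime for "Last articles" (4h for "Popular articles").
$cache = Zend_Cache::factory(
    'Core',
    'Memcached',
    array('lifetime' => 3600, 'automatic_serialization' => true),
    array('servers' => array(array('host' => 'memcached1', 'port' => 11211)))
);

if (($articles = $cache->load('last_articles')) === false) {
    // Cache miss: run the heavy query, then store the result for an hour.
    $articles = $db->fetchAll(
        "SELECT * FROM articles
         WHERE status = 'published' AND category = ?
         ORDER BY dateCreated DESC
         LIMIT 10",
        array($category)
    );
    $cache->save($articles, 'last_articles');
}
```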
The problem is that sometimes the cache expires right in the middle of heavy load, which makes my website unavailable until those modules are cached again. I could add a new MySQL server.
But is there a way to handle the caching more intelligently? For example, server 1 would refresh the cache while servers 2, 3 and 4 keep serving the old value from the cache.
I can write some code to do that myself, but I was wondering if there is a way to do it directly with Zend_Cache, or if there is a design pattern I could apply to my problem? The sketch below is roughly what I have in mind:
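A rough sketch built only on the public Zend_Cache_Core API (loadWithSoftExpiry() and the cache IDs are placeholder names of mine, not anything from Zend_Cache): the data is saved with a soft expiry timestamp inside the payload and a much longer real lifetime, and a short-lived lock entry makes sure only one server rebuilds it while the others keep serving the stale copy.

```php
function loadWithSoftExpiry(Zend_Cache_Core $cache, $id, $softTtl, $callback)
{
    $entry = $cache->load($id);
    $now   = time();

    // Fresh enough: serve it directly.
    if ($entry !== false && $entry['expires'] > $now) {
        return $entry['data'];
    }

    // Stale or missing: try to become the one server that rebuilds.
    // The "lock" is just another cache entry with a tiny lifetime.
    if ($cache->load($id . '_lock') === false) {
        $cache->save(true, $id . '_lock', array(), 30);        // 30s lock

        $data = call_user_func($callback);                     // heavy MySQL query
        $cache->save(
            array('data' => $data, 'expires' => $now + $softTtl),
            $id,
            array(),
            $softTtl * 10                                       // long hard lifetime
        );

        $cache->remove($id . '_lock');
        return $data;
    }

    // Another server is rebuilding: keep serving the stale copy if we have one.
    if ($entry !== false) {
        return $entry['data'];
    }

    // Cold cache and we did not win the lock: fall back to the database.
    return call_user_func($callback);
}
```

It would be used like `$popular = loadWithSoftExpiry($cache, 'popular_articles', 4 * 3600, $buildPopularArticles);`. The lock check isn't atomic (Zend_Cache doesn't expose Memcache::add()), so in the worst case a couple of servers could still rebuild at the same time, but that is a lot better than all of them hitting MySQL at once.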
[EDIT] I want something that would scale up to 100 servers.