Rails does not have a built-in mechanism to prevent cache stampedes.
According to the README for `atomic_mem_cache_store` (a replacement for `ActiveSupport::Cache::MemCacheStore` that mitigates cache stampedes):

> Rails (and any framework relying on active support cache store) does not offer any built-in solution to this problem
Unfortunately, I'm guessing that this gem won't solve your problem either. It supports fragment caching, but it only works with time-based expiration.
Read more about it here:
https://github.com/nel/atomic_mem_cache_store
**Update and possible solution:**
I thought about this a bit more and came up with what seems to me to be a plausible solution. I haven't verified that this works, and there are probably better ways to do it, but I was trying to think of the smallest change that would mitigate the majority of the problem.
I assume you're doing something like `cache model do` in your templates, as described by DHH (http://37signals.com/svn/posts/3113-how-key-based-cache-expiration-works). The problem is that when the model's `updated_at` column changes, its `cache_key` likewise changes, and all your servers try to re-create the template at the same time. To prevent the servers from stampeding, you would need to retain the old cache key for a brief time.
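To make the trigger concrete, here's a minimal sketch of how the key flips the instant the record changes (the `Product` model and the 111/222 timestamps are illustrative):

```ruby
product = Product.find(1)
product.cache_key  # => "/models/1-111" (suffix derived from updated_at)

product.touch      # bumps updated_at
product.cache_key  # => "/models/1-222" (new key, so every server misses the fragment cache at once)
```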
You might be able to do this by (dum da dum) caching the `cache_key` of the object with a short expiration (say, 1 second) and a `race_condition_ttl`.
You could create a module like this and include it in your models:
```ruby
module StampedeAvoider
  def cache_key
    orig_cache_key = super
    # Cache the cache_key itself for 1 second. race_condition_ttl lets the
    # first process past the expiry recompute it, while the others briefly
    # keep getting the stale value instead of stampeding.
    Rails.cache.fetch("/cache-keys/#{self.class.table_name}/#{id}",
                      expires_in: 1.second,
                      race_condition_ttl: 2) { orig_cache_key }
  end
end
```
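Including the module is the only wiring needed. A minimal sketch, assuming a hypothetical `Product` model:

```ruby
class Product < ActiveRecord::Base
  include StampedeAvoider
end

product = Product.find(1)
product.cache_key  # now looked up via Rails.cache, recomputed at most once per second
```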
Let's review what would happen. There are a bunch of servers calling `cache model`. If your model includes `StampedeAvoider`, its `cache_key` will now be fetched from `/cache-keys/models/1` and will return something like `/models/1-111` (where 111 is the timestamp), which `cache` will use to fetch the compiled template fragment.
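In steady state, each render therefore does two cache reads instead of one, roughly like this (the `views/` fragment prefix is an assumption about how your fragment store is keyed):

```ruby
key = Rails.cache.read("/cache-keys/models/1")  # first read: the cached cache_key, e.g. "/models/1-111"
fragment = Rails.cache.read("views/#{key}")     # second read: the compiled template fragment
```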
When you update the model, `model.cache_key` will begin returning `/models/1-222` (assuming 222 is the new timestamp), but for the first second after that, `cache` will keep seeing `/models/1-111`, since that is what is returned by `cache_key`. Once 1 second passes, all of the servers will get a cache miss on `/cache-keys/models/1` and will try to regenerate it. If they all recreated it immediately, it would defeat the point of overriding `cache_key`. But because we set `race_condition_ttl` to 2, all of the servers except the first will be delayed for 2 seconds, during which time they will continue to fetch the old cached template based on the old cache key. Once the 2 seconds have passed, `fetch` will begin returning the new cache key (which will have been updated by the first thread that tried to read/update `/cache-keys/models/1`), and they will get a cache hit, returning the template compiled by that first thread.
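If it helps to see that mechanism in isolation, here is the same `fetch` call on its own; the values are illustrative:

```ruby
Rails.cache.fetch("/cache-keys/models/1",
                  expires_in: 1.second,
                  race_condition_ttl: 2) do
  # After the 1-second expiry, exactly one process runs this block and writes
  # the fresh key; for up to 2 more seconds the others are handed the stale
  # "/models/1-111" instead of stampeding.
  "/models/1-222"
end
```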
Ta-da! Stampede averted.
Note that if you did this, you would be doing twice as many cache reads, but depending on how common stampedes are, it could be worth it.
I haven't tested this. If you try it, please let me know how it goes :)