
I'm trying to find the technical term for the following scenario (and potential solutions to it) in a distributed system with a shared cache:

  • request A comes in, cache miss, so we begin to generate the response for A
  • request B comes in with the same cache key; since A has not completed yet and hasn't written its result to the cache, B is also a cache miss and begins to generate a response as well
  • request A completes and stores value in cache
  • request B completes and stores its value in the cache (overwriting request A's cache value)

You can see how this becomes a problem at scale: instead of two requests, many requests all miss the cache and attempt to regenerate the value as soon as the cache entry expires. Ideally, there would be a way for request B to know that request A is already generating a value for the cache, wait until that completes, and use that value.
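
For concreteness, here is a minimal sketch of the naive read-through pattern that produces this race; the Cache interface and generate function are hypothetical placeholders, not a real API:

```go
package cache

// Cache and generate are hypothetical placeholders for a shared cache
// client and an expensive response-generation step.
type Cache interface {
	Lookup(key string) (value string, ok bool)
	Store(key, value string)
}

func generate(key string) (string, error) {
	// Stand-in for the expensive work of building the response.
	return "value for " + key, nil
}

// Get is the naive read-through pattern: every concurrent miss
// regenerates the value independently.
func Get(c Cache, key string) (string, error) {
	if v, ok := c.Lookup(key); ok {
		return v, nil // cache hit
	}
	// Cache miss: requests A and B can both reach this point before
	// either has stored a result, so both do the expensive work, and
	// the later Store silently overwrites the earlier one.
	v, err := generate(key)
	if err != nil {
		return "", err
	}
	c.Store(key, v)
	return v, nil
}
```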

I'd like to know the technical term for this phenomenon; it's a cache race of sorts.

Octodone

1 Answer


It's a kind of Thundering Herd problem.

Solution: when the first request A arrives, it sets a flag. If request B arrives and finds the flag set, it waits. After A has loaded the data into the cache, it removes the flag.
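
One in-process way to implement that flag is to coalesce concurrent lookups per key, which is roughly what Go's golang.org/x/sync/singleflight package does. Below is a minimal self-contained sketch of the idea; the names echo singleflight's Group and Do, but this is a simplified illustration, not the library itself:

```go
package cache

import "sync"

// call tracks one in-flight generation; waiters block on done.
type call struct {
	done chan struct{}
	val  string
	err  error
}

// Group coalesces concurrent lookups for the same key: the first
// caller runs fn, everyone else waits for its result.
type Group struct {
	mu     sync.Mutex
	flight map[string]*call
}

func (g *Group) Do(key string, fn func() (string, error)) (string, error) {
	g.mu.Lock()
	if g.flight == nil {
		g.flight = make(map[string]*call)
	}
	if c, ok := g.flight[key]; ok {
		// Flag already set: another request is generating this key.
		g.mu.Unlock()
		<-c.done // wait for request A to finish
		return c.val, c.err
	}
	c := &call{done: make(chan struct{})}
	g.flight[key] = c // set the flag
	g.mu.Unlock()

	c.val, c.err = fn() // generate the value (fn should also store it in the cache)

	g.mu.Lock()
	delete(g.flight, key) // remove the flag
	g.mu.Unlock()
	close(c.done) // hand the result to every waiter

	return c.val, c.err
}
```

Note this only coordinates requests within one process. Across processes in a distributed system, the flag has to live in the shared cache itself, for example a sentinel key written with an atomic set-if-absent (such as Redis's SET ... NX) and a short TTL, so that a crashed generator cannot leave everyone waiting forever.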

There is a follow-on problem: if all of the waiting requests are woken up at once by the cache-loaded event, that wake-up itself becomes a thundering herd of threads, so the solution also needs to take care with how waiters are woken.

For example, the Linux kernel supports exclusive wake-ups, where only one process is woken even when several processes are waiting on the same event.
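
The same wake-one idea can be expressed with user-space primitives. Here is a minimal Go sketch using sync.Cond, where Signal wakes a single waiter (Broadcast would wake them all) and each woken waiter passes the wake-up along; gate and its method names are illustrative:

```go
package cache

import "sync"

// gate releases waiters one at a time once a value is ready,
// instead of waking the whole crowd at once.
type gate struct {
	mu    sync.Mutex
	cond  *sync.Cond
	ready bool
}

func newGate() *gate {
	g := &gate{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// wait blocks until the gate opens, then passes the wake-up along to
// exactly one more waiter, so waiters proceed serially rather than
// stampeding together.
func (g *gate) wait() {
	g.mu.Lock()
	for !g.ready {
		g.cond.Wait()
	}
	g.mu.Unlock()
	g.cond.Signal() // chain the wake-up to the next waiter
}

// open wakes a single waiter, analogous to the kernel's exclusive
// wake-up; calling g.cond.Broadcast() here instead would wake every
// waiter at once and recreate the herd.
func (g *gate) open() {
	g.mu.Lock()
	g.ready = true
	g.mu.Unlock()
	g.cond.Signal()
}
```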

Tongxuan Liu