
How can I perform a get/compute lookup that is non-blocking and avoids a cache stampede?

Here is an example that will not stampede, but is blocking.

public static <KEY, VALUE> Mono<VALUE> lookupAndWrite(
    Map<KEY, Signal<? extends VALUE>> cacheMap, KEY key, Mono<VALUE> mono) {
    // computeIfAbsent guarantees a single computation per key (no stampede),
    // but block() holds the calling thread until the source Mono completes.
    return Mono.defer(() -> Mono.just(cacheMap.computeIfAbsent(key, k ->
        mono.materialize().block())).dematerialize());
}

Here is an example that will not block, but can stampede.

public static <KEY, VALUE> MonoCacheBuilderCacheMiss<KEY, VALUE> lookup(
        Function<KEY, Mono<Signal<? extends VALUE>>> reader, KEY key) {
    // Fully non-blocking, but nothing serializes concurrent misses: every
    // subscriber that finds the reader empty invokes otherSupplier, so
    // simultaneous cache misses each run the expensive computation.
    return otherSupplier -> writer -> Mono.defer(() ->
            reader.apply(key)
                    .switchIfEmpty(otherSupplier.get()
                            .materialize()
                            .flatMap(signal -> writer.apply(key, signal)))
                    .dematerialize());
}

Is there an approach that neither stampedes nor blocks? Would it make sense to just subscribe the blocking call on its own scheduler?
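For reference, here is a minimal sketch of what that scheduler idea might look like. The choice of `Schedulers.boundedElastic()` is just an assumption (older Reactor versions would use `Schedulers.elastic()`); it keeps the caller's thread free, but each cache miss still occupies a worker thread while `block()` waits.

import java.util.Map;
import reactor.core.publisher.Mono;
import reactor.core.publisher.Signal;
import reactor.core.scheduler.Schedulers;

public static <KEY, VALUE> Mono<VALUE> lookupAndWriteAsync(
    Map<KEY, Signal<? extends VALUE>> cacheMap, KEY key, Mono<VALUE> mono) {
    // Same stampede-free computeIfAbsent as above, but the blocking call is
    // shifted onto a scheduler intended for blocking work.
    return Mono.defer(() -> Mono.just(cacheMap.computeIfAbsent(key, k ->
        mono.materialize().block())).dematerialize())
        .subscribeOn(Schedulers.boundedElastic());
}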

  • Would using `AsyncLoadingCache` make sense here? Then you don't block during the computation, and the cache adds callbacks for when the future value completes (successfully or by error). (P.S. `AsyncCache` will be in the next release, but is easy to emulate) – Ben Manes Aug 14 '18 at 18:23
  • @BenManes I am not sure. The key does not contain enough information to know how to load the data (maybe that doesn't matter). – Dave Aug 14 '18 at 18:32
  • You can avoid the `get(key)` call and have the loader throw an exception, to degrade to an ad hoc `AsyncCache`. Then you can write your code in a similar manner as `cache.get(key, k -> mono.toFuture())` – Ben Manes Aug 14 '18 at 18:34
  • Since SO isn't great for discussions, we can bring this back to your [github issue](https://github.com/reactor/reactor-addons/issues/162) if you'd like. – Ben Manes Aug 14 '18 at 18:36
  • @BenManes If you think that is best. I brought it up here in hopes of getting some more eyes on it, as this seems like it could be a very common problem that doesn't appear to have been raised yet (for this framework). – Dave Aug 14 '18 at 18:42
  • Sure, either is fine. I'll write a small proposal as an answer for you to digest – Ben Manes Aug 14 '18 at 18:42
  • @BenManes Cool! – Dave Aug 14 '18 at 18:48

1 Answer


To rephrase your question: you want to avoid stampeding while allowing the computation to be performed asynchronously. Ideally this would be done using a `ConcurrentMap<K, Mono<V>>` with `computeIfAbsent`, discarding the entry if the computation fails.
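For illustration, a minimal hand-rolled sketch of that idea (my own example, not code from the original post): `cache()` makes the stored `Mono` replay its outcome to every subscriber, and the `doOnError` removal is one way to discard a failed computation so a later lookup can retry.

import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;
import reactor.core.publisher.Mono;

public static <K, V> Mono<V> lookupAndWrite(
        ConcurrentMap<K, Mono<V>> cacheMap, K key, Function<K, Mono<V>> loader) {
    // computeIfAbsent stores one Mono per key, so concurrent misses share a
    // single in-flight computation. Nothing blocks: the lambda only assembles
    // the Mono, and the actual work runs on subscription.
    return cacheMap.computeIfAbsent(key, k ->
            loader.apply(k)
                    .cache()                              // replay the result to later subscribers
                    .doOnError(e -> cacheMap.remove(k))); // evict failures so the next call retries
}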

Caffeine's `AsyncLoadingCache` provides this type of behavior by using `CompletableFuture<V>` as the value type. You could rewrite your blocking function as

public static <KEY, VALUE> Mono<VALUE> lookupAndWrite(
    AsyncLoadingCache<KEY, VALUE> cache, KEY key, Mono<VALUE> mono) {
  // The two-argument get computes at most once per key, so concurrent callers
  // share the in-flight future, and a failed future is automatically removed.
  // The cache supplies its executor so the Mono runs asynchronously.
  return Mono.defer(() -> Mono.fromFuture(cache.get(key, (k, e) ->
      mono.subscribeOn(Schedulers.fromExecutor(e)).toFuture())));
}

As of version 2.6.x there is no plain `AsyncCache`; it is planned for the 2.7 release (feedback on that proposal is welcome). That release will also include a `ConcurrentMap<K, CompletableFuture<V>>` view, which would let you generalize your method so it doesn't depend on a provider-specific interface. For now, you can mimic a non-loading cache by avoiding the loading methods and using `Caffeine.newBuilder().buildAsync(key -> null)`.
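A short usage sketch of that emulation (the sizing and the `expensiveLookup` helper are made up for illustration):

import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Caffeine;
import reactor.core.publisher.Mono;

// The default loader is deliberately never used; every read goes through the
// two-argument get inside lookupAndWrite above.
AsyncLoadingCache<String, String> cache = Caffeine.newBuilder()
    .maximumSize(10_000)        // illustrative sizing only
    .buildAsync(key -> null);

Mono<String> value = lookupAndWrite(cache, "some-key",
    Mono.fromCallable(() -> expensiveLookup("some-key")));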

  • Thank you sir, is there any way to control whether a `Mono` that is in error is cached? – Dave Aug 14 '18 at 19:25
  • No, just as a synchronous `ConcurrentMap` wouldn't cache an exception or a null value, the async cache does the same for the future. If you want a _negative cache_, then transform the failure into a successful result that you can query, e.g. `Mono<Optional<T>>`. That approach seems to cause the least confusion, keeps the interfaces consistent, and doesn't complicate the cache with different handling strategies. – Ben Manes Aug 14 '18 at 19:29
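To make that negative-cache suggestion concrete, a small sketch (my own illustration of the `Mono<Optional<T>>` shape; `findUser` is hypothetical, and folding errors into the empty case is a policy choice, not a requirement):

import java.util.Optional;
import reactor.core.publisher.Mono;

// Absence (and, if desired, failure) becomes a successful, cacheable value.
Mono<Optional<String>> cacheable = findUser("id-42")
    .map(Optional::of)
    .defaultIfEmpty(Optional.empty())     // empty source completes with a value
    .onErrorReturn(Optional.empty());     // optional: treat failures as misses too

Callers then branch on the `Optional` instead of relying on `onError`/`onComplete` signals, so the cache stores the entry like any other success.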