
I have an application that at some point has to perform REST requests towards another (non-reactive) system. A high number of these requests target exactly the same remote resource (the resulting HTTP request is identical), so I was thinking of avoiding flooding the other system by using a simple cache in my app.

I am in full control of the cache and I know the proper moments to invalidate it, so that is not an issue. Without this cache, however, I run into other problems, like connection or read timeouts, because the other system struggles under the high load.

Map<String, Future<Element>> cache = new ConcurrentHashMap<>();

Future<Element> lookupElement(String id) {
    String key = createKey(id);
    // lambda parameter renamed to k so it doesn't shadow the local variable
    return cache.computeIfAbsent(key, k ->
        performRESTRequest(id)
            .onSuccess(element -> {
                // some further processing
            }));
}

As I mentioned, lookupElement() is invoked from different worker threads with the same id. The first thread enters computeIfAbsent() and performs the remote request, while the other threads are blocked by the ConcurrentHashMap. When the first thread finishes, the waiting threads all receive the same Future object. Imagine 30 "clients" reacting to the same Future instance. In my case this works fine and fast up to a particular load, but when the processing input of the app increases, resulting in even more invocations of lookupElement(), my app becomes slower and slower (although it reports 300% CPU usage, it logs slowly) until it starts to throw OutOfMemoryError.
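To make the "30 clients" scenario concrete, here is a minimal sketch (the id "42" and the log line are made up for illustration): every caller receives the identical cached Future, and each onSuccess() call registers one more handler on that single instance (multiple handlers per Future are supported since Vert.x 4).

// Hypothetical demo: 30 callers looking up the same id all end up
// attaching their handlers to the one Future stored in the cache.
for (int i = 0; i < 30; i++) {
    int client = i;
    lookupElement("42").onSuccess(element ->
        System.out.println("client " + client + " received element"));
}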

My questions are: Do you see any Vertx specific issue with this approach? Is there a more Vertx friendly caching approach I could use when there is a high concurrency on the same cache key? Is it a good practice to cache the Future?

1 Answer


It is a bit unusual to answer my own question, but I managed to solve the problem.

I had two dilemmas:

  1. Are ConcurrentHashMap and computeIfAbsent() appropriate for Vert.x?
  2. Is it safe to cache a Future object?

I am using this caching approach in two places in my app: one protecting the REST endpoint, and one for a more complex database query. What was happening is that for the database query there were up to 1300 "clients" waiting for a response, in other words 1300 listeners waiting for the onSuccess() of the same Future. When that Future completed, strange things happened, some kind of thread strangulation. I did a bit of refactoring to eliminate this level of concurrency on the same resource/key, but I kept both caches, and things went back to normal.
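The refactoring itself is not shown above, but one Vert.x-friendly variant would be to confine the cache to a single event-loop context, so worker threads never block inside computeIfAbsent() while the mapping function runs. This is only a sketch under that assumption, reusing createKey() and performRESTRequest() from the question and assuming a vertx instance is in scope:

// Sketch, not the actual refactoring: all cache access is funneled onto
// one Vert.x context, so a plain HashMap is safe and no caller thread
// ever blocks; each caller gets its own Promise completed from the
// shared Future (Promise implements Handler<AsyncResult<T>>).
Map<String, Future<Element>> cache = new HashMap<>();
Context cacheContext = vertx.getOrCreateContext();

Future<Element> lookupElement(String id) {
    String key = createKey(id);
    Promise<Element> promise = Promise.promise();
    cacheContext.runOnContext(v ->
        cache.computeIfAbsent(key, k -> performRESTRequest(id))
             .onComplete(promise));
    return promise.future();
}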

In conclusion, I think my caching approach is safe as long as the load is spread widely enough, in other words, as long as there is no such high concurrency on the same resource/key. Having 20-30 listeners on the same Future works just fine.
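One caveat worth adding: with computeIfAbsent() a failed Future also stays cached, so every later caller receives the same failure until the cache is invalidated. A minimal sketch of evicting on failure (assuming, as in the question, that performRESTRequest() completes asynchronously, so the handler does not run inside computeIfAbsent() itself):

// Sketch: drop failed lookups from the cache so a transient REST error
// is retried on the next call instead of being served from the cache.
Future<Element> lookupElement(String id) {
    String key = createKey(id);
    return cache.computeIfAbsent(key, k ->
        performRESTRequest(id)
            .onFailure(err -> cache.remove(key)));
}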