
We have an issue with the Caffeine cache. We configured it with a maximum size of 60 and a TTL of 300 seconds, like this:

Cache<String, String> cache = Caffeine.newBuilder()
                .expireAfterWrite(300, TimeUnit.SECONDS)
                .maximumSize(60)
                .removalListener((String key, String value, RemovalCause cause) -> {
                    cacheListenerHandler(key, value, cause);
                })
                .build();

Now, the removalListener is defined like this:

private void cacheListenerHandler(String key, String value, RemovalCause cause) {
        if (RemovalCause.EXPIRED.equals(cause)) {
            if (value != null) {
                LOG.info("We got TTL expiry of key {} and value {}",
                        key, value);
            } else {
                LOG.warn("Value is null for TTL expiry! key: {}", key);
            }
        }

        if (RemovalCause.SIZE.equals(cause)) {
            if (value != null) {
                LOG.info("We got SIZE expiry of key {} and value {}",
                        key, value);
                //some logic
            } else {
                LOG.warn("Value is null for SIZE expiry! key: {}", key);
            }
        }
    }

With that being said, we insert into the cache this way:

public void registerValue(String key, String value) {
        cache.put(key, value);
        LOG.info("Key {} was added with value {}. Current estimated size of {} keys in cache",
                key, value, cache.estimatedSize());
}

The issue is that sometimes we get logs such as:

Key 'key1' was added with value 'value1'. Current estimated size of 250 keys in cache

And we constantly see the eviction logs (from the listener method):

We got SIZE expiry of key 'key1' and value 'value1'

And a second later the log:

Key 'key2' was added with value 'value2'. Current estimated size of 251 keys in cache

Now, I know about the 'estimatedSize' nuance - it still counts keys that are pending eviction - but the issue is that we run into Java heap memory issues, meaning the actual removal happens too late for us.
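
For reference, Cache.cleanUp() asks Caffeine to perform any pending maintenance (including size and expiration evictions) right away. A minimal sketch, assuming the same cache field and LOG as above, of how we can confirm that the backlog (and not the configuration) is what inflates the estimate:

// Sketch: force pending maintenance and compare the size estimate.
long before = cache.estimatedSize();
cache.cleanUp();                     // drain the pending eviction/expiration backlog now
long after = cache.estimatedSize();  // should drop back towards maximumSize once maintenance ran
LOG.info("Estimated size before cleanUp: {}, after cleanUp: {}", before, after);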

Is there a solution for this? Or do we need to switch to Guava instead?

Aladin
    Can you provide more details on your heap issues and removal happening too late? There are settings that might help but it’s not clear what you want to fix. – Ben Manes Apr 06 '20 at 15:50
    Without any clarifications, my best guess is to (1) Use `Caffeine.scheduler` to enable proactive removal of expired entries, rather than lazily when other operations occur. (2) Use `Caffeine.executor` to set a same-thread executor rather than evict asynchronously, if your `ForkJoinPool.commonPool` is already swamped by application tasks. – Ben Manes Apr 12 '20 at 04:57
    Hi, sorry for the late response, and thank you for yours! Do I need to use both solutions (scheduler and executor), or is just one enough? I went with (2), .executor(Runnable::run) (a sketch of that configuration follows below the comments), and the memory issues are gone. As for the scheduler, i.e. (1): we are currently stuck on Java 8, and as I understand it the scheduler is ideal for Java 9 and above. – Aladin Apr 12 '20 at 10:46
    That’s great, no need for the scheduler if the problems are resolved – Ben Manes Apr 12 '20 at 16:19
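
For completeness, here is a minimal sketch of the builder configuration the comments describe, assuming a Caffeine version that has the Scheduler API (2.8+). .executor(Runnable::run) makes maintenance work and removal-listener callbacks run on the thread performing the cache operation instead of the default ForkJoinPool.commonPool(), and the scheduler line is effectively a no-op on Java 8:

import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import com.github.benmanes.caffeine.cache.Scheduler;

Cache<String, String> cache = Caffeine.newBuilder()
        .expireAfterWrite(300, TimeUnit.SECONDS)
        .maximumSize(60)
        // (2) run eviction work and listener callbacks on the calling thread
        .executor(Runnable::run)
        // (1) proactively schedule expiration; only takes effect on Java 9+
        .scheduler(Scheduler.systemScheduler())
        .removalListener((String key, String value, RemovalCause cause) ->
                cacheListenerHandler(key, value, cause))
        .build();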

0 Answers