
I'm using something like

Cache<Integer, Item> cache;

where the Items are independent of each other and look like

private static class Item {
    private final int id;
    ... some mutable data

    synchronized void doSomething() {...}
    synchronized void doSomethingElse() {...}
}

The idea is to obtain the item from the cache and call a synchronized method on it. In case of a miss, the item can be recreated, that's fine.

A problem occurs when an item gets evicted from the cache and recreated while a thread runs a synchronized method. A new thread obtains a new item and synchronizes on it... so for a single id, there are two threads inside the synchronized method. FAIL.

Is there an easy way around it? It's Guava Cache, if it helps.
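The failure mode can be reproduced without any cache at all: two `Item` instances with the same id lock on different monitors, so both threads end up inside a `synchronized` method at once. A minimal sketch (the `Race` class and the latch choreography are illustrative, not part of the original code):

```java
import java.util.concurrent.CountDownLatch;

// Minimal repro: two Item instances for the same id (as after an
// eviction + reload) synchronize on different monitors.
public class Race {
    static class Item {
        final int id;
        Item(int id) { this.id = id; }

        synchronized void doSomething(CountDownLatch bothInside) throws InterruptedException {
            bothInside.countDown();
            // Returns only once BOTH threads are inside a "synchronized"
            // method for the same id -- impossible if the locking worked.
            bothInside.await();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch bothInside = new CountDownLatch(2);
        Item before = new Item(1); // the cached instance
        Item after = new Item(1);  // the instance recreated after eviction

        Thread t1 = new Thread(() -> {
            try { before.doSomething(bothInside); } catch (InterruptedException ignored) {}
        });
        Thread t2 = new Thread(() -> {
            try { after.doSomething(bothInside); } catch (InterruptedException ignored) {}
        });
        t1.start();
        t2.start();
        // Both joins succeed only because both threads were inside
        // doSomething simultaneously; with one shared monitor this would hang.
        t1.join();
        t2.join();
        System.out.println("two threads were inside doSomething for id=1");
    }
}
```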

maaartinus
    I might try using a separate `Striped` to acquire a per key lock? – Louis Wasserman Dec 12 '15 at 20:02
  • If the number of Items being updated is small compared to the total number of items in the cache, and the mutable data in an Item is not too big, how about synchronizing on a deep clone of the Item and then atomically updating the mutable data in the corresponding Item in the cache? – akki Dec 14 '15 at 05:08

2 Answers


I'm sure there are multiple solutions for your issue. Here is one of them, using a unique lock for each itemId:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockManager {

    private final Map<Integer, Lock> lockMap = new ConcurrentHashMap<>();

    // computeIfAbsent creates the lock atomically on first use, so no
    // additional synchronization is needed here.
    public Lock getOrCreateLockForId(Integer itemId) {
        return lockMap.computeIfAbsent(itemId, id -> new ReentrantLock());
    }

    public Lock getLockForId(Integer itemId) {
        Lock lock = lockMap.get(itemId);
        if (lock == null) {
            throw new IllegalStateException("First lock, then unlock");
        }
        return lock;
    }
}

So, instead of using synchronized methods in class Item, use LockManager to get the Lock for an itemId and call lock.lock() on it. Also note that LockManager should have singleton scope: the same instance must be shared across all usages.

Below you can see an example of using LockManager:

Lock lock = lockManager.getOrCreateLockForId(itemId);
lock.lock();
try {
    System.out.println("start doing something " + num);
    try {
        Thread.sleep(5000);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    System.out.println("completed doing something " + num);
} finally {
    lock.unlock();
}
evgeniy44
  • There's the problem of accumulating an unlimited number of locks. Probably acceptable in many cases.... – maaartinus Dec 15 '15 at 00:36
  • After unlocking, it makes sense to remove the lock from the map. I just wanted to show an approach; it may need some changes for a real application – evgeniy44 Dec 15 '15 at 22:45
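The lock-accumulation concern raised in these comments can be addressed by reference-counting each lock and removing it from the map once no thread holds or waits for it, relying on the atomicity of ConcurrentHashMap.compute per key. A sketch of that idea (the names KeyedLocks, withLock, and activeLocks are made up for illustration, not a library API):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

// Per-key locks that remove themselves once unused, so the map
// does not grow without bound.
public class KeyedLocks<K> {

    private static final class CountedLock {
        final ReentrantLock lock = new ReentrantLock();
        int users = 0; // only touched inside compute(), hence atomic per key
    }

    private final ConcurrentMap<K, CountedLock> map = new ConcurrentHashMap<>();

    public void withLock(K key, Runnable action) {
        // Register interest and create the lock in one atomic step.
        CountedLock cl = map.compute(key, (k, v) -> {
            if (v == null) {
                v = new CountedLock();
            }
            v.users++;
            return v;
        });
        cl.lock.lock();
        try {
            action.run();
        } finally {
            cl.lock.unlock();
            // Drop the map entry once the last user is gone.
            map.compute(key, (k, v) -> --v.users == 0 ? null : v);
        }
    }

    public int activeLocks() {
        return map.size();
    }
}
```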

I think the suggestion from Louis, using the keys for locking, is the simplest and most practical one. Here is a code snippet that, without the help of Guava libraries, illustrates the idea:

static final Lock[] locks = new Lock[ ... ];
static { /* initialize lock array with ReentrantLock instances */ }
int id;

void doSomething() {
  final Lock lock = locks[id % locks.length];
  lock.lock();
  try {
    /* protected code */
  } finally {
    lock.unlock();
  }
}

The size of the lock array limits the maximum amount of parallelism you get. If your code only uses CPU, you can initialize it to the number of available processors and this is the perfect solution. If your code waits for I/O, you might need an arbitrarily big array of locks, or you limit the number of threads that can run the critical section. In that case another approach might be better.

Comments on a more conceptual level:

If you want to prevent the item from being evicted, you need a mechanism called pinning. Internally this is used by most cache implementations, e.g. for blocking during I/O operations. Some caches may expose a way for applications to do it.

In a JCache compatible cache, there is the concept of an EntryProcessor. The EntryProcessor allows you to run a piece of code against an entry in an atomic way. This means the cache does all the locking for you. Depending on the scope of the problem, this may have an advantage, since it also works in clustered scenarios, which means the locking is cluster wide.
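JCache itself is not shown here, but the same atomic-per-entry idea exists in the JDK: ConcurrentHashMap.compute runs its remapping function atomically for the key, so per-entry mutations need no external lock. A stdlib analog of the EntryProcessor concept (not the JCache API itself):

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicPerEntry {

    static int run() throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> counters = new ConcurrentHashMap<>();
        counters.put(1, 0);

        // compute() runs the remapping function atomically for the key,
        // so concurrent updates to the same entry cannot interleave.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    counters.compute(1, (k, v) -> v + 1);
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return counters.get(1);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 4000: no increment was lost
    }
}
```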

Another idea that comes to mind is vetoable eviction, a concept EHCache 3 implements. By specifying a vetoable eviction policy you can implement a pinning mechanism on your own.

cruftex
  • Guava's version is to use a weight of zero, making the entry not eligible due to a size constraint. Ehcache's veto is only a hint and may not be honored, so its contract is a little surprising. – Ben Manes Dec 13 '15 at 20:05
  • @BenManes The problem is that "Weights are measured and recorded when entries are inserted into the cache, and are thus effectively static during the lifetime of a cache entry.", while I'd need a pinning for the duration of the lock. – maaartinus Dec 15 '15 at 00:26
    @maaartinus Sorry, I was merely responding to how veto works in Guava and not making a suggestion for solving your issue. Your issue feels like the problem users face when they use a cache as an object pool (exclusive access to the resource on read). You may want to look for a "keyed object pool" library instead. – Ben Manes Dec 15 '15 at 01:02