
I have implemented a caching bean as an EJB Singleton to cache data objects. I wonder if this is the correct way to do it in EJB:

@Singleton
public class MyCache {

    int DEFAULT_CACHE_SIZE = 30;
    int DEFAULT_EXPIRES_TIME = 60000;
    long expiresTime = 0;
    long lastReset = 0;
    Cache cache = null; 

    ....
    @PostConstruct
    void init() {
        resetCache();
    }

    public void resetCache() {
        cache = new Cache(DEFAULT_CACHE_SIZE);
        lastReset = System.currentTimeMillis();
    }

    public void put(String key, Object value) {
        cache.put(key, value);
    }

    public Object get(String key) {
        // test if cache is expired
        if (expiresTime > 0) {
            long now = System.currentTimeMillis();
            if ((now - lastReset) > expiresTime) {
                logger.finest("...... Cache expired!");
                resetCache();
            }
        }
        return cache.get(key);
    }



    class Cache extends LinkedHashMap<String, Object> implements Serializable {
        private static final long serialVersionUID = 1L;
        private final int capacity;

        public Cache(int capacity) {
            super(capacity + 1, 1.1f, true);
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
            return size() > capacity;
        }
    }
}

My question is: is this the right way to implement an application wide caching mechanism?

I have the impression that the contents of the cache change unexpectedly. Could this happen, for example, if the EJB is passivated? I am running on a Payara 4.1 server.

Or must I use:

cache = Collections.synchronizedMap(new Cache(DEFAULT_CACHE_SIZE));

instead of:

cache = new Cache(DEFAULT_CACHE_SIZE);
– Ralph
3 Answers


First of all, since no concurrency management type is specified for your bean, it falls back to the default, "container".

From EJB 3.1 spec:

When designing a Singleton session bean, the developer must decide whether the bean will use container managed or bean managed concurrency. Typically Singleton beans will be specified to have container managed concurrency demarcation. This is the default if no concurrency management type is specified.

With container-managed concurrency, the lock type is specified per method. Where it is absent, the default, "Write", applies:

By default, if a concurrency locking attribute annotation is not specified for a method of a Singleton bean with container managed concurrency demarcation, the value of the concurrency locking attribute for the method is defined to be Write.

The above means that access to your bean methods is synchronized by the container, probably more than you actually need. You can set the "Read" lock type on read-only methods (get) to allow concurrent read access.
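As a sketch of that suggestion (the bean name here is made up; the annotations are from the EJB 3.1 javax.ejb API, so this needs a container to actually run):

```java
import java.util.concurrent.ConcurrentHashMap;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
public class MyReadMostlyCache {

    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    @Lock(LockType.READ)   // readers may run concurrently
    public Object get(String key) {
        return cache.get(key);
    }

    @Lock(LockType.WRITE)  // the default: exclusive access while writing
    public void put(String key, Object value) {
        cache.put(key, value);
    }
}
```

Note that a READ-locked method must not mutate shared state that needs exclusion, so the expiry check in your `get` would have to be made thread-safe separately.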

– user3714601

The solution with a container-managed lock has one drawback: if a put operation holds a WRITE lock, the whole cache is blocked, so no get or put can proceed in parallel, no matter whether the keys are different. If your cache implementation is a concurrent Map, you can use it without container locking.
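A minimal illustration of that point in plain Java, outside any container (class and key names are made up): ConcurrentHashMap guarantees atomic per-key operations, so independent keys never contend on one coarse lock and no external synchronization is needed:

```java
import java.util.concurrent.ConcurrentHashMap;

public class CacheDemo {
    static final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // two threads writing different keys; no external synchronization needed
        Thread t1 = new Thread(() -> cache.put("a", 1));
        Thread t2 = new Thread(() -> cache.put("b", 2));
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // putIfAbsent is atomic: the first value stored for a key wins
        cache.putIfAbsent("a", 99);
        System.out.println(cache.get("a") + " " + cache.get("b")); // prints "1 2"
    }
}
```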

If you have more requirements for the cache, I would use Infinispan, which provides better performance. Such a cache can be local or distributed across a cluster.

– wfink
  • Thanks for your reply. I think the missing part is the use of a concurrent map. I will improve my code in this way. – Ralph Feb 06 '19 at 22:59

After reading this blog post from Adam Bien, I improved my code in the following way:

  • I added the @ConcurrencyManagement annotation.
  • I changed my LinkedHashMap into a ConcurrentHashMap.

Example:

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN) // added concurrency management
public class MyCache {

    int DEFAULT_CACHE_SIZE = 30;
    int DEFAULT_EXPIRES_TIME = 60000;
    long expiresTime = 0;
    long lastReset = 0;
    Cache cache = null; 

    ....
    @PostConstruct
    void init() {
        resetCache();
    }

    public void resetCache() {
        cache = new Cache(DEFAULT_CACHE_SIZE);
        lastReset = System.currentTimeMillis();
    }

    public void put(String key, Object value) {
        cache.put(key, value);
    }

    public Object get(String key) {
        // test if cache is expired
        if (expiresTime > 0) {
            long now = System.currentTimeMillis();
            if ((now - lastReset) > expiresTime) {
                logger.finest("...... Cache expired!");
                resetCache();
            }
        }
        return cache.get(key);
    }

    // changed from LinkedHashMap to ConcurrentHashMap
    class Cache extends ConcurrentHashMap<String, Object> implements Serializable {
        private static final long serialVersionUID = 1L;
        private final int capacity;

        public Cache(int capacity) {
            super(capacity + 1, 1.1f);
            this.capacity = capacity;
        }

        // Note: removeEldestEntry is a LinkedHashMap callback; ConcurrentHashMap
        // never calls it, so this size bound is no longer enforced here.
        protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
            return size() > capacity;
        }
    }
}

I think this is now a more correct implementation.

– Ralph
  • I don't think this is a correct implementation. Your lastReset value is not volatile, so different threads may see different values, since no memory synchronization semantics are defined. Another difference from Adam's code is that Adam creates the synchronized map once, during @PostConstruct, whereas you are creating new instances on the fly. The cache field is not volatile either, and I think the creation of the cache is neither atomic nor explicitly synchronized at all. – Frito Feb 11 '19 at 12:46
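One way to address the points in that comment, sketched in plain Java outside the container (the class name and constructor parameter are made up for illustration; this is not Adam Bien's code): make lastReset volatile and hold the map in an AtomicReference, so the expiry swap is race-free:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

public class ExpiringCache {

    private final long expiresTime;                      // in ms; 0 disables expiry
    private volatile long lastReset = System.currentTimeMillis();
    private final AtomicReference<Map<String, Object>> cache =
            new AtomicReference<>(new ConcurrentHashMap<>());

    public ExpiringCache(long expiresTimeMillis) {
        this.expiresTime = expiresTimeMillis;
    }

    public void put(String key, Object value) {
        cache.get().put(key, value);
    }

    public Object get(String key) {
        if (expiresTime > 0) {
            long now = System.currentTimeMillis();
            if (now - lastReset > expiresTime) {
                // swap in a fresh map atomically; only one racing thread wins
                Map<String, Object> old = cache.get();
                if (cache.compareAndSet(old, new ConcurrentHashMap<>())) {
                    lastReset = now;
                }
            }
        }
        return cache.get().get(key);
    }
}
```

With bean-managed concurrency, the singleton could delegate to an instance like this; the volatile field and the compareAndSet replace the memory-visibility and atomicity guarantees the container lock would otherwise provide.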