I have an application that needs to acquire certain key objects at various points by calling this method, which assigns a unique persistent object to every given natural ID:

public AddressKeyEntity getAddressKeyEntity(AddressKeyEntity addressKey) {
    AddressKeyEntity item = sessionFactory.getCurrentSession().byNaturalId(AddressKeyEntity.class)
            .using("address", addressKey.getAddress())
            .using("city", addressKey.getCity())
            .using("state", addressKey.getState())
            .using("zip", addressKey.getZip())
            .with(LockOptions.UPGRADE.setTimeOut(LockOptions.WAIT_FOREVER))
            .load();
    if(item != null) {
        return item;
    } else {
        sessionFactory.getCurrentSession().persist(addressKey);
        return addressKey;
    }
}

This works great when the application runs in a single thread, or is just not very busy. Unfortunately, as soon as I push a heavy load through multiple threads, it fails miserably with some combination of unique constraint violations and org.hibernate.PessimisticLockException. Sometimes I also see org.h2.jdbc.JdbcSQLException: Timeout trying to lock table, which is strange given LockOptions.WAIT_FOREVER. (Maybe it's an H2 bug.)

Can this be solved through the Hibernate APIs, without resorting to low-level database features?

All major databases support some variant of atomic insert-if-not-exists returning a unique ID, so I imagine there must be an idiomatic equivalent in Hibernate.

Alex R
  • Let us know what transaction isolation level you are using; see https://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html – Duloren Apr 16 '18 at 22:12
  • I have not set any transaction isolation level. – Alex R Apr 16 '18 at 23:18

2 Answers

First of all, consider using (i) an explicit transaction mechanism or (ii) container-managed transactions (if you are running your application inside a container such as an application server). Autocommit mode shouldn't be used in production environments due to performance issues. So, define your unit of work (a sketch follows). See more here: https://docs.jboss.org/hibernate/orm/3.6/reference/en-US/html/transactions.html
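
For illustration, here is a minimal sketch of explicit transaction demarcation through the Hibernate API. It assumes hibernate.current_session_context_class is set to thread (so getCurrentSession() returns a thread-bound session), and a hypothetical AddressKeyDao holding the getAddressKeyEntity method from the question; both names are assumptions, not part of the original code:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class AddressKeyService {
    private final SessionFactory sessionFactory;
    private final AddressKeyDao dao; // hypothetical holder of getAddressKeyEntity

    public AddressKeyService(SessionFactory sessionFactory, AddressKeyDao dao) {
        this.sessionFactory = sessionFactory;
        this.dao = dao;
    }

    public AddressKeyEntity getOrCreate(AddressKeyEntity addressKey) {
        Session session = sessionFactory.getCurrentSession();
        Transaction tx = session.beginTransaction();
        try {
            // The read, the pessimistic lock, and the possible insert all
            // happen inside one transaction instead of in autocommit mode.
            AddressKeyEntity result = dao.getAddressKeyEntity(addressKey);
            tx.commit(); // with the "thread" session context, commit also closes the session
            return result;
        } catch (RuntimeException e) {
            tx.rollback(); // never leave a transaction open after a failure
            throw e;
        }
    }
}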

Secondly, test your app against other database vendors if you have a heavy load. Some people, myself included, consider H2 unsuitable for production use; it is better suited to development and test environments. See this question: Are there any reasons why h2 database shouldn't be used in production?

Thirdly, according to the H2 docs, READ COMMITTED is the default isolation level. Check whether it suits your application's requirements and change it if necessary. In addition, consider using LOCK_MODE=3 (see the sketch below). See more here: http://www.h2database.com/html/advanced.html#transaction_isolation
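
For example, the lock mode can be set either in the JDBC URL or at runtime. This is only a sketch; the database path and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class H2LockModeExample {
    public static void main(String[] args) throws Exception {
        // LOCK_MODE=3 selects H2's READ COMMITTED mode with table-level locking.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:~/test;LOCK_MODE=3", "sa", "")) {
            // The same setting can also be changed at runtime for the database:
            conn.createStatement().execute("SET LOCK_MODE 3");
        }
    }
}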

Finally, regardless of the DB vendor you choose, ask yourself whether this many DB reads and locks for inserting new entities are actually required. For example, depending on how many distinct keys you have, maintaining an in-memory cache of persisted IDs can reduce the number of reads and improve performance.
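
As a sketch of that idea (the class name, key construction, and Long ID type here are assumptions, not part of the question's code), a concurrent map from natural key to persisted ID lets repeated lookups of the same key skip the database entirely:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class AddressKeyIdCache {
    private final ConcurrentMap<String, Long> idsByNaturalKey = new ConcurrentHashMap<>();

    // Returns the cached ID, or null if this key has not been seen yet.
    public Long cachedId(AddressKeyEntity key) {
        return idsByNaturalKey.get(naturalKey(key));
    }

    // Record the ID once the entity has been persisted.
    public void remember(AddressKeyEntity key, Long id) {
        idsByNaturalKey.putIfAbsent(naturalKey(key), id);
    }

    private String naturalKey(AddressKeyEntity k) {
        return k.getAddress() + "|" + k.getCity() + "|" + k.getState() + "|" + k.getZip();
    }
}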

Duloren

Can this be solved through the Hibernate APIs, without resorting to low-level database features?

Hibernate doesn't provide any such feature.

All major databases support some variant of insert if not exists atomically returning a unique ID, so I imagine there must be an idiomatic equivalent in Hibernate.

There is no Hibernate equivalent. If the DB supports the feature, you can use @SQLInsert to specify a DB-specific insert statement; this tells Hibernate to use that statement instead of its generated insert. But it ties you to a DB-specific feature and may not be portable. A sketch follows.
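
For instance, on PostgreSQL @SQLInsert could be combined with INSERT ... ON CONFLICT DO NOTHING. This is a sketch only: the table name, columns, and assigned (non-generated) ID are assumptions, and the placeholder order must match the order in which Hibernate binds properties, so verify it against the insert Hibernate generates (hibernate.show_sql) before relying on it:

import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.SQLInsert;

@Entity
@SQLInsert(sql = "insert into address_key (address, city, state, zip, id) "
        + "values (?, ?, ?, ?, ?) on conflict do nothing")
public class AddressKeyEntity {
    @Id
    private Long id;
    private String address;
    private String city;
    private String state;
    private String zip;
    // getters and setters omitted for brevity
}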

On a different note, we can improve this process by detecting duplicate records early in the processing stage. To this end, we could maintain a ConcurrentHashMap that tells us whether the same record has already been inserted. To prevent OOM errors, the map could be bounded by an LRU policy or backed by a weak hash map.
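
A sketch of that idea follows. Since ConcurrentHashMap has no built-in eviction, this version substitutes a synchronized, access-ordered LinkedHashMap for the LRU bound; the capacity and key construction are assumptions:

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertedKeyCache {
    private static final int MAX_ENTRIES = 100_000;

    // Access-ordered LinkedHashMap evicts the eldest entry once full;
    // synchronizedMap makes it safe to share across threads.
    private final Map<String, Boolean> seen = Collections.synchronizedMap(
            new LinkedHashMap<String, Boolean>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

    // Returns true the first time a key is seen, false on later calls.
    public boolean firstTimeSeen(String address, String city, String state, String zip) {
        String key = address + "|" + city + "|" + state + "|" + zip;
        return seen.putIfAbsent(key, Boolean.TRUE) == null;
    }
}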