
I have a requirement in my Java application to execute a set of DB statements in an atomic and isolated fashion. For example, the application needs to read rows from one table and update a row in another table.

QueryRunner queryRunner = new QueryRunner(); // Apache Commons DbUtils query runner

Object[] params = new Object[] { param };

Connection conn = null;
try {
    conn = ...; // get connection
    conn.setAutoCommit(false);
    result = queryRunner.query(conn, "select x, y, z from table1 where column1 = ?", new BeanHandler<SomeBean>(SomeBean.class), params);

    // logic to get value for update

    queryRunner.update(conn, "update table2 set p = ? where q = ?", some_value, some_id);
    conn.commit();
} catch (SQLException e) {
    DbUtils.rollbackAndCloseQuietly(conn); // undo the partial transaction on failure
    // handle / log the error
} finally {
    DbUtils.closeQuietly(conn); // no-op if the connection is already closed
}

Transaction management is achieved by setting auto-commit to false on the connection and committing explicitly later, as shown above. But the above code can also be executed in a multi-threaded environment, and I want the two DB statements (select & update) to run as a whole, mutually exclusively.

I have some idea to use a shared Java Lock object in that method, depicted below.

In the class,

private Lock lock = new ReentrantLock(); // member variable

In the method,

lock.lock();
try {
    conn = ...; // get connection
    conn.setAutoCommit(false);

    result = queryRunner.query(conn, "select x, y, z from table1 where column1 = ?", new BeanHandler<SomeBean>(SomeBean.class), params);

    // logic to get value for update

    queryRunner.update(conn, "update table2 set p = ? where q = ?", some_value, some_id);
    conn.commit();
} catch (SQLException e) {
    DbUtils.rollbackAndCloseQuietly(conn); // undo the partial transaction on failure
} finally {
    DbUtils.closeQuietly(conn);
    lock.unlock(); // always release the lock, even when an exception occurs
}

It seems capable of solving the issue. However, I am wondering whether this is best practice, and whether there is a better alternative (such as a framework) for this?

garlicbulb

4 Answers


My suggestion is to have the database manage those locks for you instead of your application. This handles the case where multiple JVMs are running the code; the locking mechanism you mentioned can only be effective within a single JVM.

The way to accomplish this is to do a `SELECT ... FOR UPDATE`. This places a lock on the selected rows, and the lock is released when your transaction commits or rolls back. This is better than a table-level lock because other transactions can still read those rows if they only want the current value and don't intend to update it. If another transaction tries to obtain a `FOR UPDATE` lock on the same rows, it will block until the first one finishes.
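A sketch of how this could look in plain JDBC, reusing the tables from the question; the extra `SELECT ... FOR UPDATE` on the `table2` row is the only addition (assuming the row identified by `q = ?` is the one being contended for):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class RowLockExample {

    // Locking the table2 row up front serializes concurrent runs of this
    // method on the same row; plain SELECTs by other transactions still proceed.
    static final String LOCK_SQL   = "select p from table2 where q = ? for update";
    static final String SELECT_SQL = "select x, y, z from table1 where column1 = ?";
    static final String UPDATE_SQL = "update table2 set p = ? where q = ?";

    public static void selectThenUpdate(Connection conn, Object param,
                                        Object someValue, Object someId) throws SQLException {
        conn.setAutoCommit(false);
        try {
            try (PreparedStatement lock = conn.prepareStatement(LOCK_SQL)) {
                lock.setObject(1, someId);
                try (ResultSet rs = lock.executeQuery()) {
                    // the row is now locked until commit or rollback
                }
            }
            try (PreparedStatement select = conn.prepareStatement(SELECT_SQL)) {
                select.setObject(1, param);
                try (ResultSet rs = select.executeQuery()) {
                    // ... logic to derive the value for the update ...
                }
            }
            try (PreparedStatement update = conn.prepareStatement(UPDATE_SQL)) {
                update.setObject(1, someValue);
                update.setObject(2, someId);
                update.executeUpdate();
            }
            conn.commit();   // releases the row lock
        } catch (SQLException e) {
            conn.rollback(); // also releases the row lock
            throw e;
        }
    }
}
```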

Hiro2k
  • Thanks, Hiro2k. Our application is running in a servlet container and will be in a single JVM, I believe. The `SELECT ... FOR UPDATE` might not work in my case, as I am updating a row in a table other than the one the select statement operates on. – garlicbulb Apr 04 '12 at 17:36
  • Right, but if you ever decide to add another instance of the server for load balancing or scaling then you'll run into this problem. You could add the `SELECT ... FOR UPDATE` on the row in the second table at the very beginning of your method, so that others will have to wait until it finishes. – Hiro2k Apr 04 '12 at 18:05
  • Good point, @Hiro2k. I will evaluate our use cases on my end to see if all of them can be covered. – garlicbulb Apr 04 '12 at 18:09
  • By using `SELECT ... FOR UPDATE`, it helps keeping multiple updates from operating on the same piece of data at the same time. Does it also prevent current reads on the same piece of data? – garlicbulb Apr 04 '12 at 18:46
  • Like I said in the answer it only prevents other `SELECT ... FOR UPDATES` on that data. Usually you want to allow simple `SELECT` statements to happen to allow for better concurrency. Imagine trying to get a report on the current table but having to wait for all of the rows to release their locks. If you want to block all reads from the table while you are working with it then you need a Table Level lock, but maybe your database has finer grained controls. Everything I've said up until now is true for Postgres. http://www.postgresql.org/docs/9.1/static/explicit-locking.html – Hiro2k Apr 05 '12 at 01:00

The only way you will achieve the atomicity you require is to use a stored procedure in the database to isolate the data and lock it all at once. Locking at the Java level can't do what locking in the database can, at least not as easily.
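A minimal sketch of what calling such a stored procedure from JDBC could look like; the procedure name `read_and_update` is hypothetical, and the actual select-then-update logic (and its locking) would live inside the procedure body:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class StoredProcExample {

    // Hypothetical procedure name: the real procedure would contain the
    // select-then-update logic, so the locking happens inside the database.
    static final String CALL_SQL = "{call read_and_update(?, ?, ?)}";

    public static void runAtomically(Connection conn, Object param,
                                     Object someValue, Object someId) throws SQLException {
        try (CallableStatement cs = conn.prepareCall(CALL_SQL)) {
            cs.setObject(1, param);
            cs.setObject(2, someValue);
            cs.setObject(3, someId);
            cs.execute();
        }
    }
}
```

The `{call ...}` form is the standard JDBC escape syntax for invoking stored procedures.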

  • My proposition works only if the relevant data is updated through method calls on the same object in a single JVM; it does not take a lock at the database level per se. That being said, it does not prevent other unexpected updates to the same piece of data triggered elsewhere (such as other method calls or other applications). Can this be mitigated by using a stored procedure? – garlicbulb Apr 04 '12 at 17:51

Another way to handle problems like this is to use the serializable transaction isolation level for all of your database transactions. This causes any set of transactions to behave as though they were run one at a time, without actually running them one at a time. If you do this you should use a framework which catches serialization failures (SQLState 40001) and retries the transactions. The big upside is that you don't need to worry about particular interactions among transactions -- if a transaction does the right thing when it is the only thing running, it will do the right thing in any transaction mix.

Note that all transactions must be serializable for this to work so simply.
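The retry loop this answer describes could be sketched as a small helper, not tied to any framework; the transaction body is passed in as a `Callable`, and a real body would also roll back the connection before retrying:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class SerializableRetry {

    static final String SERIALIZATION_FAILURE = "40001";

    // Runs the transaction body, retrying when the database reports a
    // serialization failure (SQLState 40001), up to maxAttempts times.
    // Any other SQLException is rethrown immediately.
    public static <T> T withRetry(Callable<T> txBody, int maxAttempts) throws Exception {
        SQLException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return txBody.call();
            } catch (SQLException e) {
                if (!SERIALIZATION_FAILURE.equals(e.getSQLState())) {
                    throw e; // not a serialization failure: do not retry
                }
                last = e;    // conflict detected: loop and try again
            }
        }
        throw last; // retries exhausted
    }
}
```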

kgrittn

From my understanding, you just want to make the block of code containing the select and update statements thread-safe? That is what the `synchronized` keyword is for. Even though this question was asked long back, I just want to make a note of it here: put those lines of code inside a `synchronized` block.
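A minimal sketch of the shape this answer suggests; the monitor object plays the same role as the `ReentrantLock` in the question, with the database calls stood in here by a shared counter:

```java
public class SynchronizedExample {

    private final Object dbLock = new Object();
    private int sharedState = 0; // stands in for the data read and updated in the DB

    public void selectThenUpdate() {
        synchronized (dbLock) {
            // In the real code: get connection, setAutoCommit(false),
            // run the select, compute the new value, run the update, commit.
            int read = sharedState;  // "select"
            sharedState = read + 1;  // "update"
        }
    }

    public int state() {
        return sharedState;
    }
}
```

As with the `ReentrantLock` approach, this only provides mutual exclusion within a single JVM.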

Mohamed Iqzas