
I'm having concurrency issues with my web application: a write to the DB can happen while simultaneous reads are in progress. The write first deletes all the rows and then inserts new ones, so there is a chance that a read will run while the table is empty, resulting in an error. I'm using a ReentrantReadWriteLock, but I want to make sure I'm using it correctly. Any fixes would be appreciated.

private static final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

public static Schedule lookupSchedule()
{
    lock.readLock().lock();
    try
    {
        // read data from the database, manipulate it, return it
    }
    finally
    {
       lock.readLock().unlock();
    }
}

public static void setSchedule()
{
    Connection conn = DB.getConnection();
    lock.writeLock().lock();
    try
    {
       conn.openTransaction();
       // delete all the rows from the table
       // insert all the new rows into the table
       conn.commitTransaction();
    }
    catch (Exception ex)
    {
       conn.rollbackTransaction();
    }
    finally
    {
        lock.writeLock().unlock();
    }
}
bluedevil2k
  • This looks right to me. How do you recycle your connection? – Peter Lawrey May 21 '14 at 14:30
  • The connection gets closed in either the commitTransaction() or rollbackTransaction() methods, and is coming from a DB pool. – bluedevil2k May 21 '14 at 14:31
  • You might want to move getting the database connection inside your write lock, since it looks like you have it set up that way for the lookup method. Otherwise you could have `lookupSchedule()` obtain a read lock, then before it gets the db connection, a bunch of writers could grab all of the db connections and then block on the write lock, causing deadlock (see the sketch after these comments). – azurefrog May 21 '14 at 14:39
  • Why do you even use the locks? The ACID properties of the database should ensure that readers use whatever data is in the database before the writer transaction, even while the writer is working... – Alexandros May 21 '14 at 14:58
  • Also, please see: http://stackoverflow.com/questions/11043712/what-is-difference-between-non-repeatable-read-and-phantom-read In theory you could be seeing phantom read behavior... You may just need to adjust the isolation level. – Alexandros May 21 '14 at 15:06
  • The write lock is exclusive which means there are no reads going on while `setSchedule` is executing which in turn means `lookupSchedule` should never see an empty table (unless `setSchedule` does not insert rows). Something other than the lock is not quite right ... – vanOekel May 21 '14 at 21:06
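Following azurefrog's suggestion, here is one way to reorder setSchedule() so the write lock is acquired before a connection is borrowed from the pool. This is only a sketch, reusing the question's own hypothetical DB.getConnection()/openTransaction() helpers:

public static void setSchedule()
{
    lock.writeLock().lock();
    try
    {
        // Borrow the pooled connection only after the lock is held, so
        // writers queued on the lock cannot exhaust the connection pool
        // while a reader still needs one.
        Connection conn = DB.getConnection();
        try
        {
            conn.openTransaction();
            // delete all the rows from the table
            // insert all the new rows into the table
            conn.commitTransaction();
        }
        catch (Exception ex)
        {
            conn.rollbackTransaction();
        }
    }
    finally
    {
        lock.writeLock().unlock();
    }
}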

1 Answer

As far as the question is concerned: your usage of ReentrantReadWriteLock is correct.

With regard to your problem: I think you need to look toward the transaction isolation level in effect (and possibly adjust it).

Hope this helps...

Isolation levels are explained in another SO thread: What is the difference between non-repeatable read and phantom read? (http://stackoverflow.com/questions/11043712/what-is-difference-between-non-repeatable-read-and-phantom-read)

Java tutorial chapter on isolation levels: http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html#transactions_data_integrity
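For illustration, a minimal sketch of raising the isolation level on a plain JDBC connection before running the delete-and-insert transaction, assuming READ_COMMITTED is enough to hide the uncommitted delete from readers (as vanOekel argues in the comments). The schedule table name and its columns are assumptions, and a standard javax.sql.DataSource stands in for the question's DB helper:

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class ScheduleWriter
{
    public static void setSchedule(DataSource pool) throws Exception
    {
        try (Connection conn = pool.getConnection())
        {
            // READ_COMMITTED hides the uncommitted delete from concurrent
            // readers, so they keep seeing the old rows until commit().
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement())
            {
                st.executeUpdate("DELETE FROM schedule");
                st.executeUpdate("INSERT INTO schedule (id, name) VALUES (1, 'example')"); // columns assumed
                conn.commit();
            }
            catch (Exception ex)
            {
                conn.rollback();
                throw ex;
            }
        }
    }
}

With isolation handled by the database like this, the ReentrantReadWriteLock may not be needed at all, as Alexandros notes in the comments below.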

Alexandros
  • I would have thought the transactional nature of the update would protect it as well, but that's not what I was seeing in practice. The isolation level for these transactions is set to TRANSACTION_READ_UNCOMMITTED, which, judging from your comments, would need to change to TRANSACTION_READ_COMMITTED to get this to work correctly without the locks. – bluedevil2k May 21 '14 at 16:17
  • Exactly; please try to set the isolation level and let us know if it worked for you. Also, like I mentioned before, with proper isolation you can just execute the transactions and not even bother with a ReentrantReadWriteLock lock. The DB should take care of it for you (even if you have multiple writers). – Alexandros May 21 '14 at 17:18
  • According to the java tutorial, it seems that only TRANSACTION_SERIALIZABLE will prevent phantom reads (see the table in the chapter referenced in the answer). I hope you can live with that... It is the most expensive option, but I think it's the only one that will achieve what you want. You may want to create the new data in another table and then move them into the original table on the server side (stored proc) to reduce the span of the transaction, or some other trick like that (a sketch follows these comments). – Alexandros May 21 '14 at 18:03
  • With isolation level TRANSACTION_READ_COMMITTED the `lookupSchedule` method should never see an empty table since delete&insert is in one transaction, i.e. there is no moment in time that `lookupSchedule` would see an empty table (because the delete is never committed without an insert and uncommitted changes are never seen by `lookupSchedule`). I do not think TRANSACTION_SERIALIZABLE will help here. – vanOekel May 21 '14 at 21:21
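As a rough illustration of Alexandros's staging-table idea above: bulk-load the new rows into a separate table outside the transaction, then swap them in with two quick statements. The schedule_staging table and the generic SQL are assumptions, and a stored procedure could do the same work server-side:

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class ScheduleSwap
{
    // Assumes the new rows were already loaded into schedule_staging,
    // so the transaction itself only spans the delete and the copy.
    public static void swapIn(DataSource pool) throws Exception
    {
        try (Connection conn = pool.getConnection())
        {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement())
            {
                st.executeUpdate("DELETE FROM schedule");
                st.executeUpdate("INSERT INTO schedule SELECT * FROM schedule_staging");
                conn.commit();
            }
            catch (Exception ex)
            {
                conn.rollback();
                throw ex;
            }
        }
    }
}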