Well, I got an assignment [mini-project] in which one of the most important issues is database consistency. The project is a web application that multiple users can access and work with concurrently, so I can expect concurrent querying and updating requests against a small set of tables, some of them linked to one another (via FOREIGN KEYs).

In order to keep the database as consistent as possible, we were advised to use isolation levels. After reading a bit about them (maybe not enough?), I figured the most useful ones for me are READ COMMITTED and SERIALIZABLE.

I can divide the queries into three kinds:

  1. Fetching query
  2. Updating query
  3. Combo

For the first kind, I of course need the data to be consistent: I don't want to present dirty or uncommitted data. Therefore, I thought to use READ COMMITTED for these queries. For the updating queries, I thought SERIALIZABLE would be the best option, but after reading a bit I found myself lost. In the combo, I'll probably have to read from the DB and then decide whether I need/can update or not, and these 2-3 calls will run under the same transaction.
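
For instance, the combo case would look roughly like this (just a sketch; the `stock` table and its columns are placeholders, not my real schema):

START TRANSACTION;
  -- read the current state
  SELECT quantity FROM stock WHERE item_id = 5;
  -- application code decides whether the update is needed/allowed...
  UPDATE stock SET quantity = quantity - 1 WHERE item_id = 5;
COMMIT;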

I wanted to ask for advice on which isolation level to use for each of these query types. Should I even consider different isolation levels for each type, or just stick to one?
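
From what I've read, MySQL lets you set the level per session or even per transaction, so mixing levels is at least possible. A sketch of what I mean (untested on my setup):

-- applies only to the next transaction in this session:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
  SELECT ...;
COMMIT;

-- or for the whole session:
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;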

I'm using MySQL 5.1.53, along with the MySQL JDBC 3.1.14 driver (requirements... I didn't choose the JDBC version).

Your insights are much appreciated!

Edit:

I've decided I'll be using REPEATABLE READ, which seems to be the default level. I'm not sure it's the right way to go, but I guess REPEATABLE READ along with LOCK IN SHARE MODE and FOR UPDATE on the queries should work fine...
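
Something like this is what I have in mind (again just a sketch; `orders` and its columns are placeholders):

START TRANSACTION;
  -- shared lock: other sessions can still read these rows,
  -- but can't modify them until we commit
  SELECT * FROM orders WHERE customer_id = 7 LOCK IN SHARE MODE;

  -- exclusive lock on the row we intend to change
  SELECT status FROM orders WHERE id = 3 FOR UPDATE;
  UPDATE orders SET status = 'shipped' WHERE id = 3;
COMMIT;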

What do you guys think?

fashasha
  • Just use `READ COMMITTED`, which is the default. Most people never even think about db isolation level, and it works just fine. – Bohemian Sep 17 '11 at 21:04
  • Yeah, but I have some transactions which should update multiple tables, and in parallel there might be transactions which need to read the entire tables [which are currently being updated]... So I might be needing locks. Is **READ_COMMITTED** enough for issues like these? Because I think they're going to test every pitfall... – fashasha Sep 18 '11 at 14:26
  • Repeatable read will work, but is slower than read_committed. There's a reason read committed is the default. Make sure you use `start transaction + commit` to bracket your transactions because otherwise all your queries will run in their own little transactions, defeating the purpose. – Johan Oct 14 '11 at 20:56
  • 2
    Oops, repeatable read is the default, sorry 'bout that. – Johan Oct 14 '11 at 21:01

2 Answers


I would suggest READ COMMITTED. It seems natural to be able to see other sessions' committed data as soon as it's committed.

It's unclear why MySQL has a default of REPEATABLE READ.

vharihar

I think you worry too much about the isolation level.

If you have multiple tables to update, you need to do:

START TRANSACTION;
  UPDATE table1 ...;
  UPDATE table2 ...;
  UPDATE table3 ...;
COMMIT;

This is the important part; the isolation level is just gravy.

The default level of REPEATABLE READ will do just fine for you.

Note that SELECT ... FOR UPDATE locks the rows it selects (and, without a usable index, can effectively lock the whole table); this can result in deadlocks, which are worse than the problem you may be trying to solve.
Only use it if you are deleting rows in your DB.
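
For example, two sessions that take FOR UPDATE locks on the same rows in opposite order will deadlock. A sketch with a made-up table `t` and ids:

-- session A:
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;  -- A locks row 1

-- session B:
START TRANSACTION;
SELECT * FROM t WHERE id = 2 FOR UPDATE;  -- B locks row 2

-- session A:
SELECT * FROM t WHERE id = 2 FOR UPDATE;  -- A now waits for B

-- session B:
SELECT * FROM t WHERE id = 1 FOR UPDATE;  -- deadlock; InnoDB rolls one back (error 1213)

Locking rows in a consistent order (say, always ascending by id) avoids this.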

To be honest, I rarely see rows being deleted in a DB; if you are just doing updates, then just use normal selects.

Anyway, see: http://dev.mysql.com/doc/refman/5.0/en/innodb-transaction-model.html

Johan