
I have two tables A and B.

My transactions are like this:

  • Read -> read from table A
  • Write -> write to table B, then write to table A

I want to avoid dirty/phantom reads, since I have multiple nodes making requests to the same database.

Here is an example:

  1. Transaction 1 - update on table B
  2. Transaction 2 - read from table A
  3. Transaction 1 - update on table A
  4. Transaction 2 - completed
  5. Transaction 1 - rollback

Now Transaction 2's client has dirty data. How should I avoid this?
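Roughly, in SQL terms, the problematic interleaving looks like the sketch below (table and column names are placeholders, and the read only comes back dirty if it overlaps Transaction 1's uncommitted update):

```
-- Session 1 (Transaction 1):
BEGIN WORK;
UPDATE table_b SET val = val + 1 WHERE id = 1;   -- update on table B
UPDATE table_a SET val = val + 1 WHERE id = 1;   -- update on table A

-- Session 2 (Transaction 2), reading at DIRTY READ isolation:
SELECT val FROM table_a WHERE id = 1;            -- sees the uncommitted update

-- Session 1:
ROLLBACK WORK;   -- the value Transaction 2 read never became committed data
```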

Raghunandan J
  • which RDBMS are you using? SQL Server, Oracle, MySQL? – Rich Benner Nov 13 '18 at 08:26
  • Informix now; planning to move to SQL Server. – Raghunandan J Nov 13 '18 at 08:37
  • Are you using the `DIRTY READ` isolation level in Informix? In Informix it is the only way to read uncommitted data. – Luís Marques Nov 13 '18 at 13:52
  • Dirty read is not recommended; I wanted to know if there are any other solutions. – Raghunandan J Nov 13 '18 at 14:45
  • Informix has other isolation levels (there are restrictions depending on the type of database). Have you checked the documentation? What you want is probably `COMMITTED READ` or `COMMITTED READ LAST COMMITTED`. – Luís Marques Nov 13 '18 at 16:06
  • Your transaction outline isn't entirely clear; at least, it isn't clear how it yields problems. Step 1 isn't a problem as long as it is complete before Step 2 starts. Step 2 isn't a problem if it is complete before Step 3 starts; or rather, it isn't clear that it's a problem, because it isn't clear what TX1 (transaction 1) is doing. Step 3 is a potential problem if in fact Step 2 isn't finished before Step 3 starts; it depends on which rows in table A are updated by TX1 and which are read by TX2, and when. Transaction 2 completes without altering the database. A lot hinges on the meaning of Step 2. – Jonathan Leffler Nov 14 '18 at 01:37
  • I will describe my business use case: there are multiple clients GETting and UPDATing data spread across two tables. I don't want to give dirty data to client-2 when client-1 hasn't yet committed its UPDATE. I'm trying to achieve this with two tables, or else I will denormalize and do it in one table. – Raghunandan J Nov 14 '18 at 06:29

1 Answer


If your database is not logged, there is nothing you can do: an unlogged Informix database does not support transactions at all, so those who set it up decided this sort of issue was not a problem. The only way to fix the problem is to change the database mode to logged, but that is not something you do casually on a whim; there are lots of ramifications to the change.
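If you are not sure whether the database is logged, you can check from SQL via the sysmaster interface (the database name here is a placeholder):

```
-- Check the logging flags for a database:
SELECT name, is_logging, is_buff_log, is_ansi
  FROM sysmaster:sysdatabases
 WHERE name = 'mydb';   -- replace 'mydb' with your database name
```

Switching an unlogged database to a logged mode is a DBA operation rather than a plain SQL statement; depending on your setup it typically involves the ondblog utility and/or a level-0 archive with ontape.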

Assuming your database is logged (it doesn't matter here whether it uses buffered or unbuffered logging, or is a MODE ANSI database), then unless you set DIRTY READ isolation, you are running with at least COMMITTED READ isolation. (If the database is MODE ANSI, the default is Informix's REPEATABLE READ level, which corresponds to standard SQL's SERIALIZABLE level.)
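For example, a session can set the level explicitly; recent Informix versions also offer a variant of committed read that returns the last committed version of a row instead of contending with a writer's lock:

```
-- Never returns uncommitted data; may block or fail (depending on lock
-- mode) when a wanted row is locked by another transaction:
SET ISOLATION TO COMMITTED READ;

-- On newer Informix versions: read the last committed version of a
-- locked row rather than waiting for the writer:
SET ISOLATION TO COMMITTED READ LAST COMMITTED;
```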

If you want to ensure that data rows do not change after a transaction has read them, you need to run at a higher isolation level: REPEATABLE READ. (See SET ISOLATION in the manual for the details, and beware of the nomenclature for SET TRANSACTION; there's a section of the manual comparing SET ISOLATION and SET TRANSACTION, plus related sections.) The downside to using SET ISOLATION TO REPEATABLE READ (or SET TRANSACTION ISOLATION LEVEL SERIALIZABLE) is that the extra locks needed reduce concurrency, but they give you the best guarantees about the state of the database.
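As a minimal sketch (Informix spelling first, then the standard SQL spelling; table and column names are placeholders as before), the reading transaction would look something like this:

```
SET ISOLATION TO REPEATABLE READ;
SET LOCK MODE TO WAIT;    -- optional: wait for conflicting locks instead of erroring out

BEGIN WORK;
SELECT val FROM table_a WHERE id = 1;   -- rows read stay locked until the transaction ends
COMMIT WORK;

-- Standard SQL spelling of the same level (the form used with MODE ANSI databases):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```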

Jonathan Leffler