
I have some doubts with respect to transactions and isolation levels:

1) If the DB isolation level is set to Serializable / Repeatable Read and two concurrent transactions try to modify the same data, one of the transactions will fail.

In such cases, why doesn't the DB retry the failed operation? Is it good practice to retry the transaction at the application level (hoping the other transaction will have finished in the meantime)?

2) If the DB isolation level is set to READ_COMMITTED / READ_UNCOMMITTED (dirty read) and two concurrent transactions try to modify the same data, why don't the transactions fail?

Ideally, since we are only controlling the read behaviour here, concurrent writes should still not be allowed.

3) My application has two parts: one part uses a Spring-managed datasource, and the other part creates its own datasource explicitly by passing the connection properties (this part does not use Spring).

My assumption is that the isolation level is unaffected by which datasource a connection comes from: two concurrent transactions will behave the same under a given isolation level whether their connections come from different datasources or the same one.

Do you see any issue with this setup? Should we strive for a single datasource across the application?

Kumar

1 Answer


I'll also wait for others to give their feedback, but for now I'd like to add my two cents to this post. As you explained, each isolation level works differently.

I'll use the following sample data set:

-- Rebuild the sample Employees table
IF OBJECT_ID('Employees') IS NOT NULL DROP TABLE Employees
GO

CREATE TABLE Employees (
    Emp_id INT IDENTITY,
    Emp_name VARCHAR(20),
    Emp_Telephone VARCHAR(15)
)

ALTER TABLE Employees
ADD CONSTRAINT PK_Employees PRIMARY KEY (Emp_id)

INSERT INTO Employees (Emp_name, Emp_Telephone)
    SELECT 'Satsara', '07436743439'
INSERT INTO Employees (Emp_name, Emp_Telephone)
    SELECT 'Udhara', '045672903'
INSERT INTO Employees (Emp_name, Emp_Telephone)
    SELECT 'Sithara', '58745874859'

REPEATABLE READ and SERIALIZABLE are very close to each other, but SERIALIZABLE is the highest isolation level. Both options are provided to avoid dirty and non-repeatable reads, and both need to be managed very carefully because, due to the way they hold locks, they often cause deadlocks. If a deadlock occurs, the server will definitely wipe one transaction out of the picture (the deadlock victim). The server will never rerun that transaction by itself, because it keeps no record of the removed transaction beyond a log entry; retrying is up to the application.
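As a minimal sketch of such an application-side retry, assuming SQL Server (where the deadlock victim's session receives error 1205); the retry count, target row, and phone value below are my own illustrative assumptions:

DECLARE @retry INT = 3;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- the competing write; row and value are only for illustration
        UPDATE Employees SET Emp_Telephone = '0111111111' WHERE Emp_id = 1;
        COMMIT TRAN;
        SET @retry = 0;                  -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN;
        IF ERROR_NUMBER() = 1205 AND @retry > 1
            SET @retry = @retry - 1;     -- deadlock victim: try again
        ELSE
            THROW;                       -- other errors, or retries used up
    END CATCH
END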

REPEATABLE READ - Records already read by one transaction cannot be modified (they stay locked) by another process (another query) until that transaction ends. But it still allows new records to be inserted (without a lock), which can affect your system while querying (phantom reads).

SERIALIZABLE - The difference with SERIALIZABLE is that it does not allow such inserts either: under "SET TRANSACTION ISOLATION LEVEL SERIALIZABLE", INSERT processes wait until the previous transaction commits.

In general, the REPEATABLE READ and SERIALIZABLE isolation levels hold data locks longer than the other two options.

Example [REPEATABLE READ and SERIALIZABLE]:

In the Employees table you have 3 records.


  1. Open a query window and run (QUERY 1):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM Employees;

  2. Now try to run an INSERT in a different window (QUERY 2):

INSERT INTO Employees (Emp_name, Emp_Telephone)
    SELECT 'JANAKA', '3333333'

  3. The system allows QUERY 2 to insert the new record. Now run QUERY 1's SELECT again and you can see 4 records (a phantom read).

  4. Now replace QUERY 1 with the following code and repeat the same process to test SERIALIZABLE:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM Employees;

  5. This time you can see that QUERY 2's INSERT is not allowed to execute; it waits until QUERY 1 commits.

  6. Only once QUERY 1 has committed is QUERY 2 allowed to execute the INSERT.
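If you want to see why QUERY 2 is blocked, one way (a diagnostic sketch, assuming SQL Server) is to query the lock DMV from a third window while QUERY 1's transaction is still open; the session id 55 is a placeholder you'd replace with QUERY 1's actual @@SPID:

-- Under SERIALIZABLE you should see key-range locks (request_mode RangeS-S)
-- held by QUERY 1's session; these are what block the INSERT.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = 55;  -- placeholder: run SELECT @@SPID in QUERY 1's window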

Comparing READ COMMITTED and READ UNCOMMITTED:

READ COMMITTED - Changes to data are not visible to other processes until the transaction commits the records. READ COMMITTED puts shared locks on all the records it reads; if another process holds an exclusive lock on a record, it waits until that lock is released.
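A small sketch of that blocking behaviour, assuming SQL Server's default locking READ COMMITTED (i.e. READ_COMMITTED_SNAPSHOT off); the phone value is just an illustrative assumption:

-- Window 1: update a row but leave the transaction open (uncommitted).
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRAN
UPDATE Employees SET Emp_Telephone = '0999999999' WHERE Emp_id = 1

-- Window 2: this SELECT blocks on row 1 until Window 1 commits or rolls
-- back, because READ COMMITTED refuses to read the uncommitted change.
SELECT * FROM Employees WHERE Emp_id = 1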

READ UNCOMMITTED - Not recommended, because the system can read garbage (uncommitted) data under it (in SQL Server, the NOLOCK hint). It will return uncommitted data: "SELECT * FROM Employees (NOLOCK)".
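Continuing the sketch above, this is what the dirty read looks like while Window 1's UPDATE is still uncommitted:

-- Window 2: read the row anyway, ignoring Window 1's exclusive lock.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Employees WHERE Emp_id = 1   -- returns the uncommitted value

-- Equivalent per-table form using the hint:
SELECT * FROM Employees WITH (NOLOCK) WHERE Emp_id = 1

-- If Window 1 now runs ROLLBACK, the value Window 2 just read never
-- officially existed: that is the dirty read.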

**DEADLOCKS** - Whether it's REPEATABLE READ, SERIALIZABLE, READ COMMITTED or READ UNCOMMITTED, any of them can create deadlocks. The only thing is, as we discussed, REPEATABLE READ and SERIALIZABLE are more prone to deadlocks than the other two options.
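As a minimal two-window sketch of how such a deadlock arises (the names written are arbitrary; what matters is that the two sessions lock the same rows in opposite order):

-- Window 1:
BEGIN TRAN
UPDATE Employees SET Emp_name = 'A1' WHERE Emp_id = 1

-- Window 2:
BEGIN TRAN
UPDATE Employees SET Emp_name = 'B2' WHERE Emp_id = 2

-- Window 1 (blocks, waiting for Window 2's lock on row 2):
UPDATE Employees SET Emp_name = 'A2' WHERE Emp_id = 2

-- Window 2 (now waits for row 1 -> deadlock; SQL Server rolls back
-- one session as the victim with error 1205):
UPDATE Employees SET Emp_name = 'B1' WHERE Emp_id = 1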

Note: If you need more samples for READ COMMITTED and READ UNCOMMITTED, please let me know in the comments and we can discuss.

This is actually a very large topic that really needs lots of samples to cover properly. I don't know whether this explanation is enough, but I gave it a small try; it's hard to know where to start and when to stop.

At the same time, you asked: "Is it a good practice to retry the transaction on application level?"

In my opinion that's fine. I personally also retry in certain situations, using different techniques:

  1. Keeping a flag field to identify whether a record has already been updated, and retrying based on it (see the sketch below).
  2. Using an event-driven solution such as RabbitMQ or Kafka.
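A hypothetical sketch of option 1 (the Is_Processed column is my own assumption, not part of the original schema): claim the row atomically, so a retry that lost the race simply finds nothing left to do:

-- One-time schema change (hypothetical flag column).
ALTER TABLE Employees ADD Is_Processed BIT NOT NULL DEFAULT 0

-- The update claims the row only if nobody else has processed it yet.
UPDATE Employees
SET Emp_Telephone = '0100000000',
    Is_Processed = 1
WHERE Emp_id = 1
  AND Is_Processed = 0

-- @@ROWCOUNT = 0 means a concurrent transaction already handled the row,
-- so the retry can stop instead of overwriting its work.
IF @@ROWCOUNT = 0
    PRINT 'Row already processed by a concurrent transaction'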
isatsara