
I need to solve a resource reservation pattern with Spring and MariaDB. The problem is very simple: I have a guest table where I store the guest names for events, and I have to be sure that the guest count for an event is always less than or equal to the maximum capacity.

This is the table:

create table guest(
    event int,
    name varchar(50)
);
create index event on guest (event);

What is the right locking procedure and isolation level for the DB? Please consider that this code will run in a multi-threaded container. I chose to lock with a "SELECT ... FOR UPDATE" to limit the lock to the rows of a single event.

// START TRANSACTION
@Transactional
public void reserve(int event, String name){
    // Lock the rows of this event
    getJdbc().query("SELECT * FROM guest WHERE event=? FOR UPDATE", event);
    Integer count = getJdbc().queryForObject("SELECT COUNT(*) FROM guest WHERE event=?", Integer.class, event);
    if (count >= MAX_CAPACITY)
        throw new ApplicationException("No room left");
    getJdbc().update("INSERT INTO guest VALUES (?,?)", event, name);
}
// COMMIT

I made some tests and it seems that I need the READ_COMMITTED isolation level, am I right? This is what I found: [timeline diagram of the two concurrent transactions]

This is the first time I have had to change the isolation level and I'm a bit surprised by this need. Can you confirm that the standard MariaDB isolation level REPEATABLE_READ fails with this pattern?
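For reference, if READ_COMMITTED is indeed required, it can be requested on the annotation itself. A fragment only (it assumes Spring's standard `@Transactional` and a configured transaction manager; it is not runnable on its own):

```java
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Ask Spring to open the transaction with READ COMMITTED
// instead of MariaDB's default REPEATABLE READ.
@Transactional(isolation = Isolation.READ_COMMITTED)
public void reserve(int event, String name) {
    // ... same body as above ...
}
```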

Tobia
  • Does Spring add the `START` and `COMMIT`? Does it do `ROLLBACK` when it `throws`? – Rick James Oct 26 '18 at 18:12
  • 2
    Nice graphic and explanation of the problem. – Rick James Oct 26 '18 at 18:22
  • @RickJames, yes if the transaction manager is enabled with Transactional annotation (here you could also specify the isolation level and the rollback policies for exceptions) – Tobia Oct 27 '18 at 06:03
  • What is "transaction manager" part of? – Rick James Oct 27 '18 at 16:46
  • It is a Spring component, it uses cglib (or other alternatives) to intercept the transactional method entrance and exit managing db transactions https://docs.spring.io/spring/docs/4.2.x/spring-framework-reference/html/transaction.html – Tobia Oct 30 '18 at 07:35

2 Answers


The problem is that during the transaction in Thread 2, REPEATABLE_READ guarantees that you see the DB in the state it was in at the start of the transaction. So the effects of transaction 1, which had not yet been committed at that time, are hidden. Therefore you will always see the same number of records, independent of what other transactions did in the meantime, and both transactions will insert a record.
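This failure mode is a classic check-then-act race, and it can be reproduced in plain Java with no database at all. A minimal sketch (all names are illustrative; a barrier makes the interleaving deterministic, standing in for the snapshot that hides the other transaction's insert):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CyclicBarrier;

public class ReservationRace {
    static final int MAX_CAPACITY = 10;
    static final List<String> guests = Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) throws Exception {
        // Seed the event with MAX_CAPACITY - 1 guests: exactly one seat left.
        for (int i = 0; i < MAX_CAPACITY - 1; i++) guests.add("guest" + i);

        // The barrier forces both threads to finish their "count" check
        // before either one performs its "insert".
        CyclicBarrier bothChecked = new CyclicBarrier(2);

        Runnable reserve = () -> {
            int count = guests.size();                        // SELECT COUNT(*) ...
            try {
                bothChecked.await();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            if (count < MAX_CAPACITY) {                       // capacity check passes in both
                guests.add(Thread.currentThread().getName()); // INSERT INTO guest ...
            }
        };

        Thread t1 = new Thread(reserve, "alice");
        Thread t2 = new Thread(reserve, "bob");
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Both threads saw count == 9 < 10, so both inserted: 11 guests, over capacity.
        System.out.println(guests.size()); // prints 11
    }
}
```

Both threads read the stale count before either insert becomes visible, which is exactly what the REPEATABLE_READ snapshot does to the `SELECT COUNT(*)` in the question.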

READ_COMMITTED means, according to the documentation: "Each consistent read, even within the same transaction, sets and reads its own fresh snapshot". A fresh snapshot means that the results of committed concurrent transactions will be included.

aschoerk
  • I'm a little surprised about the default REPEATABLE_READ setting; this means that most locking patterns fail. Consider the same scenario with invoice numbering (get the MAX value from the issued invoice numbers)... I really cannot think of any concurrency/locking pattern suitable with REPEATABLE_READ isolation. – Tobia Oct 26 '18 at 17:34
  • Normally you want to avoid locks when using MVCC. Optimistic locking is also a good pattern to use in that case. If fewer records get updated than expected, you know that something happened in parallel. – aschoerk Oct 26 '18 at 18:08

A suggestion for working around the problem. This involves keeping a counter instead of doing COUNT(*). (Yes, this violates the principle of not having redundant info.)

CREATE TABLE EventCount ( event ..., ct INT ..., PRIMARY KEY(event) ) ENGINE=InnoDB;

START TRANSACTION;
    INSERT ...;
    UPDATE EventCount
        SET ct = ct + 1
        WHERE event = ?;
    ct = SELECT ct FROM EventCount WHERE event = ?;
    if (ct > max)
    {
        ROLLBACK;
        exit;
    }
COMMIT;

(Caveat: I have not verified that this works for your situation.)
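The shape of that transaction (increment under exclusive access, check, roll back on overflow) can be sketched in plain Java; here an `AtomicInteger` stands in for InnoDB's exclusive row lock on the `EventCount` row, and all names are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class EventCounter {
    private final int max;
    private final AtomicInteger ct = new AtomicInteger(0); // EventCount.ct

    EventCounter(int max) { this.max = max; }

    boolean reserve() {
        int after = ct.incrementAndGet(); // UPDATE EventCount SET ct = ct + 1
        if (after > max) {                // if (ct > max)
            ct.decrementAndGet();         //     ROLLBACK
            return false;
        }
        return true;                      // COMMIT
    }

    int count() { return ct.get(); }

    public static void main(String[] args) {
        EventCounter ec = new EventCounter(2);
        System.out.println(ec.reserve()); // true
        System.out.println(ec.reserve()); // true
        System.out.println(ec.reserve()); // false (capacity reached, rolled back)
        System.out.println(ec.count());   // 2
    }
}
```

The key point is that the check runs on the value produced by the exclusive increment, never on a stale snapshot, so the counter can never stay above the max.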

Rick James
  • I think that we have the same problem here. If two transactions start together with the default REPEATABLE_READ isolation, they each take something like a snapshot of the DB data and do not interfere with each other. Then they will not detect a count overflow, because they do not count the records of the other transaction, and if there is concurrency on the last seat, at the end of both transactions I will find a counter greater than the max. I guess the default REPEATABLE_READ is not suitable at all for this common concurrency problem. – Tobia Oct 27 '18 at 06:11
  • @Tobia - Note that my suggestion will (I think) block on an exclusive lock on `EventCount` regardless of isolation level. The block may be resolved by delaying one transaction. But then the `if` forces a `ROLLBACK` on the delayed transaction. – Rick James Oct 28 '18 at 18:13
  • If the lock is inside the transaction, the problem is that the "SELECT count" will count only data from this transaction, and not from other committed transactions, due to the isolation level. It is the same problem as in my question: with REPEATABLE_READ the waiting thread cannot see data committed by the other thread that previously got the lock. – Tobia Oct 29 '18 at 12:02