I have a case where I need to limit the number of rows per user in a table. Currently I do this with a `SELECT COUNT(*) FROM table` check before the insert, and if the count is equal to or greater than the allowed limit, I throw an error. The COUNT and the INSERT run in a single transaction.
But with 5,000 online users and 50K requests per minute, I end up with extra records (more than the limit) in the table. It looks like a race condition on parallel inserts. How can I avoid this? Can anyone suggest some best practices?
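For context, a minimal sketch of the check-then-insert pattern described above. The `items` table, `user_id` column, and limit of 100 are hypothetical placeholders, not the real schema; the point is only to show where the race window sits:

```sql
START TRANSACTION;

-- Application checks: is the count still below the limit (e.g. 100)?
SELECT COUNT(*) FROM items WHERE user_id = 42;

-- Race window: two sessions can both read 99 here before either commits,
-- both pass the check, and the user ends up with 101 rows.
INSERT INTO items (user_id, payload) VALUES (42, 'something');

COMMIT;
```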

LONGMAN
-
2"How can I avoid this? Can anyone suggest some best practices?" So you are doing `SELECT COUNT(*)...` and then a `INSERT ...` there are two options 1 rewrite SELECT and INSERT into one query like `INSERT INTO table (
) SELECT ( – Raymond Nijland Sep 24 '18 at 20:11) FROM table WHERE .. [GROUP BY ...] [HAVING COUNT(*) < 100]` 2 Use a INSERT BEFORE trigger with a SIGNAL . -
Or do a locking read (`SELECT ... FOR UPDATE`). – Shadow Sep 24 '18 at 20:37
-
@RaymondNijland I didn't get the first option: how can I limit inserts depending on COUNT in one query? The second option is better. But, if you know, is a race condition possible between the trigger and the insert? – LONGMAN Sep 25 '18 at 09:35
-
@Shadow locking on reads is not an option. The project is high-load. – LONGMAN Sep 25 '18 at 09:36
-
"But, if you know, is it possible race condition bug between the trigger and insert?" With a trigger you can't get a race condition. – Raymond Nijland Sep 25 '18 at 10:32
-
"first option I did not get," if you use a non-matching select you can avoid a insert like `INSERT INTO table (
) SELECT ( – Raymond Nijland Sep 25 '18 at 10:35) FROM table WHERE 0` notice the `WHERE 0` that can also work on a HAVING clause with `COUNT(*) < 100` if COUNT(*) is bigger then 100 if is the same as false or 0 making it `HAVING 0` so nothing is inserted. -
@LONGMAN you need to decide what the bigger problem is: performance or race conditions. Locking is the generally available solution in most RDBMS to handle concurrent access to data. – Shadow Sep 25 '18 at 12:50
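A rough sketch of the two approaches Raymond Nijland suggests in the comments, reusing the hypothetical `items` table, `user_id` column, and limit of 100 from above (none of these names come from the thread):

```sql
-- Option 1: fold the check into the INSERT itself.
-- With an aggregate and no GROUP BY, the SELECT yields one row while the
-- user is under the limit; once COUNT(*) reaches 100 the HAVING clause is
-- false and nothing is inserted.
INSERT INTO items (user_id, payload)
SELECT 42, 'something'
FROM items
WHERE user_id = 42
HAVING COUNT(*) < 100;

-- Option 2: reject over-limit rows in a BEFORE INSERT trigger with SIGNAL.
DELIMITER $$
CREATE TRIGGER items_limit_per_user
BEFORE INSERT ON items
FOR EACH ROW
BEGIN
  IF (SELECT COUNT(*) FROM items WHERE user_id = NEW.user_id) >= 100 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Row limit per user exceeded';
  END IF;
END$$
DELIMITER ;
```

Note that the trigger still performs a COUNT at insert time, so how it behaves under heavily concurrent inserts depends on the isolation level and locking discussed in the answers below.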
2 Answers
Use a separate table that maintains, per user, the count of rows inserted, with the user id as a foreign key to the main table. If you have a session-based application, you can load this data into the session or memory, keep fetching and updating the count after every insert (both in the session/memory and in the database), and only then actually insert into the main table.
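One possible concrete shape for this, with hypothetical table and column names; the `FOR UPDATE` on the counter row follows Shadow's comment below rather than the answer itself:

```sql
-- Hypothetical per-user counter table: one row per user,
-- created when the user account is created.
CREATE TABLE user_item_counts (
  user_id    INT UNSIGNED NOT NULL PRIMARY KEY,
  item_count INT UNSIGNED NOT NULL DEFAULT 0
  -- user_id can also be declared as a foreign key, as the answer suggests
);

START TRANSACTION;

-- Lock only this user's counter row, not the rows in the main table.
SELECT item_count FROM user_item_counts WHERE user_id = 42 FOR UPDATE;

-- If the application sees item_count < 100:
UPDATE user_item_counts SET item_count = item_count + 1 WHERE user_id = 42;
INSERT INTO items (user_id, payload) VALUES (42, 'something');

COMMIT;
-- Otherwise ROLLBACK and report that the limit was reached.
```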

urs_ng
-
What is the difference between your solution and my current one? It's pretty much the same, and there is still a possibility of a race condition between the check and the insert. – LONGMAN Sep 25 '18 at 09:40
-
Again, you would have to lock the records being read to enable concurrency control. The advantage of such a solution could be that you do not have to lock records in the transactional table, only in this derived table. – Shadow Sep 25 '18 at 12:53
The issue is called a phantom read. Typically it can be resolved by using the SERIALIZABLE transaction isolation level:
https://en.wikipedia.org/wiki/Isolation_(database_systems)
But it can decrease performance, so if you have a lot of inserts, try the other options from the comments too.
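For MySQL/InnoDB, applying this to the question's existing check-then-insert logic might look like the sketch below (the `items` table, `user_id`, and limit of 100 are the same hypothetical names used earlier, not from the answer). Under SERIALIZABLE, InnoDB implicitly converts plain SELECTs into locking reads, so two concurrent transactions serialize on the counted rows; one of them typically fails with a lock wait timeout or deadlock and has to be retried, which is the performance cost this answer mentions.

```sql
-- Same check-then-insert as in the question, but under SERIALIZABLE.
-- SET TRANSACTION (without GLOBAL/SESSION) affects only the next transaction.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;

SELECT COUNT(*) FROM items WHERE user_id = 42;  -- now an implicit locking read
-- Application checks count < 100, then:
INSERT INTO items (user_id, payload) VALUES (42, 'something');

COMMIT;
```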

mroman