I’m really new to relational databases. I’m working on a project that involves finances, so I want any actions that affect a balance not to occur at the same time, and I want to achieve that using locks. However, I’m not sure how to use them.

The vision I have now: a separate table for each kind of action, and a balance field in the users table whose value would be derived from all the relevant tables. That being said, I’m never actually going to update existing records — only add new ones. I want to make sure that only one record per user is being inserted at a time into these tables.

For instance: 3 transactions occur at the same time, so 3 records are about to be added to the relevant tables. Two of the records have the same userid (a foreign key to my users table) and the third has a different one. I want the records with the same foreign key to be pipelined; the other one can be done whenever.

How do I achieve this? Are there better ways to approach this?
2 Answers
I want any actions that affect balance not to occur at the same time
Why?
I want to achieve that using locks
Why?
To give you a counter-example: let's say you want to avoid negative account balances. When a user withdraws $500, how can you model that without locks?
UPDATE accounts
SET balance = balance - 500
WHERE accountholderid = 42
AND balance >= 500
This works without any explicit locks and is safe for concurrent access. You will have to check the update count, if it is 0 you would have overdrawn the account.
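To make the point concrete, here is a minimal runnable sketch of that conditional UPDATE; sqlite3 stands in for MySQL, and the table layout and starting balance are assumptions:

```python
import sqlite3

# In-memory database standing in for MySQL; names follow the answer's example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (accountholderid INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (42, 700)")

def withdraw(conn, holder, amount):
    # The balance check is part of the UPDATE itself, so the row can never
    # be driven negative, even under concurrent access.
    cur = conn.execute(
        "UPDATE accounts SET balance = balance - ? "
        "WHERE accountholderid = ? AND balance >= ?",
        (amount, holder, amount))
    conn.commit()
    return cur.rowcount == 1   # update count 0 means the withdrawal was refused

print(withdraw(conn, 42, 500))  # True: balance drops to 200
print(withdraw(conn, 42, 500))  # False: only 200 left, nothing changed
```

Checking `rowcount` is the "check the update count" step from the answer: no explicit lock is taken in application code, yet an overdraft is impossible.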
(I'm aware MySQL will still acquire a row lock)
It still makes sense to have a ledger but even there the need for locks is not obvious to me.

- I’ve been told that storing the balance as an ordinary number is bad practice. Why store it when it can be derived from the data I record anyway? I need to make sure no transactions involving 51%+ of the balance occur at the same time; otherwise the derived balance for that user would end up negative. – zabbir Apr 20 '19 at 15:59
- Have you been given a reason as to why that is? What is the maximum number of records that you will have to aggregate to determine the account balance? What is your definition of "at the same time"? How would the balance become negative in the example code I gave? – Philippe Marschall Apr 20 '19 at 19:08
- No, I haven’t been given a reason, but it makes sense to me. The maximum number of records is unlimited. "At the same time" for me means my node app starts 2 distinct tasks roughly at once, where one hasn’t affected the database before the start of the other, so both use the same derived value for the user’s balance and each can potentially create a new record that uses up 51%+ of the balance. – zabbir Apr 21 '19 at 01:58
- If the maximum number of records is unlimited, then you need to sum up an unlimited number of records to derive the account balance. You see how this is a problem? Again, the query shown avoids this issue and also avoids the issue with three "concurrent" transactions. – Philippe Marschall Apr 21 '19 at 07:42
- I’m aware of the concurrency and performance decrease. However, isn’t security more important? I would rather exclude the possibility of an error in a query that permanently increases someone’s balance by some enormous amount and lets them bankrupt your business. – zabbir Apr 21 '19 at 08:34
- What does this have to do with security? How is an `UPDATE` more likely to go wrong than an `INSERT`? Don't get me wrong, I believe it is a good idea to have a ledger, but I am not convinced it is a good idea to sum up the whole ledger every time you want to know the account balance. In my experience you want to keep the amount of data in an OLTP system as small as possible and move data to a record-keeping system as quickly as possible. So you may end up with a hybrid approach that is based on a ledger but has an opening and closing balance. – Philippe Marschall Apr 21 '19 at 22:04
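The hybrid approach mentioned in the last comment can be sketched in a few lines; sqlite3 stands in for MySQL, and the `ledger` / `opening_balance` table names and signed-amount convention are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ledger (userid INTEGER, amount INTEGER);  -- signed: deposits > 0, withdrawals < 0
INSERT INTO ledger VALUES (42, 1000), (42, -300), (42, -200);
""")

# Deriving the balance means summing the whole history every time:
balance = conn.execute(
    "SELECT COALESCE(SUM(amount), 0) FROM ledger WHERE userid = ?",
    (42,)).fetchone()[0]
print(balance)  # 500

# The hybrid idea: periodically roll old entries into an opening balance,
# so the OLTP ledger stays small while the full history moves elsewhere.
conn.execute("CREATE TABLE opening_balance (userid INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO opening_balance "
             "SELECT userid, SUM(amount) FROM ledger GROUP BY userid")
conn.execute("DELETE FROM ledger")
```

The current balance is then the opening balance plus the sum of the (short) remaining ledger, instead of an unbounded aggregation.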
Use ENGINE=InnoDB for all your tables. Use transactions:
BEGIN;
do all the work for a single action
COMMIT;
The classic example of a single action is to remove money from one account and add it to another account. The removing would include a check for overdraft, in which case you would have code to ROLLBACK instead of COMMIT.
The locks you get assure that everything for the single action is either completely done, or nothing at all is done. This applies even if the system crashes between the BEGIN and the COMMIT.
Without BEGIN and COMMIT, but with autocommit=ON, each statement is implicitly surrounded by BEGIN and COMMIT. That is, the UPDATE example in the previous answer is 'atomic'. However, if the money deducted from one account needs to be added to another account, what happens if a crash occurs just after the UPDATE? The money vanishes. So, you really need
BEGIN;
if not enough funds, ROLLBACK and exit
UPDATE to take money from one account
UPDATE to add that money to another account
INSERT into some log or audit trail to track all transactions
COMMIT;
Check after each step -- ROLLBACK and take evasive action on any unexpected error.
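The steps above can be turned into a runnable sketch; sqlite3 stands in for MySQL/InnoDB here, and the table layout, account ids, and `audit_log` name are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (accountholderid INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("CREATE TABLE audit_log (from_id INTEGER, to_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 800), (2, 100)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        # Conditional debit: refuses to overdraw, like the earlier UPDATE.
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE accountholderid = ? AND balance >= ?", (amount, src, amount))
        if cur.rowcount == 0:
            conn.rollback()          # not enough funds: undo and exit
            return False
        conn.execute("UPDATE accounts SET balance = balance + ? "
                     "WHERE accountholderid = ?", (amount, dst))
        conn.execute("INSERT INTO audit_log VALUES (?, ?, ?)", (src, dst, amount))
        conn.commit()                # all three statements become visible at once
        return True
    except sqlite3.Error:
        conn.rollback()              # unexpected error: take evasive action
        raise

print(transfer(conn, 1, 2, 500))  # True: 1 now has 300, 2 has 600, one audit row
print(transfer(conn, 1, 2, 500))  # False: only 300 left, nothing changed at all
```

If the process crashes between the two UPDATEs, the uncommitted debit is rolled back automatically, so no money vanishes.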
What happens if 2 (or more) actions happen at the "same time"?
- One waits for the other.
- There is a deadlock and a ROLLBACK is forced on it.
But, in no case, will the data be messed up.
A further note... In some cases you need FOR UPDATE:
BEGIN;
SELECT some stuff from a row FOR UPDATE;
test the stuff, such as account balance
UPDATE that same row;
COMMIT;
The FOR UPDATE says to other threads: "Keep your hands off this row, I'm likely to change it; please wait until I am finished." Without FOR UPDATE, another thread could sneak in and drain the account of the money you thought was there.
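The read-test-write pattern above can be sketched as follows. sqlite3 has no `SELECT ... FOR UPDATE`, so in this sketch `BEGIN IMMEDIATE` plays the same role of keeping other writers out while the balance is tested; in MySQL you would keep the `FOR UPDATE` clause instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions manually
conn.execute("CREATE TABLE accounts (accountholderid INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (42, 700)")

def withdraw_checked(conn, holder, amount):
    # MySQL equivalent: BEGIN; SELECT balance ... FOR UPDATE; -- lock the row
    conn.execute("BEGIN IMMEDIATE")            # sqlite3's stand-in for the row lock
    balance = conn.execute(
        "SELECT balance FROM accounts WHERE accountholderid = ?",
        (holder,)).fetchone()[0]
    if balance < amount:                       # test the stuff (account balance)
        conn.execute("ROLLBACK")
        return False
    conn.execute("UPDATE accounts SET balance = ? WHERE accountholderid = ?",
                 (balance - amount, holder))
    conn.execute("COMMIT")                     # lock released; others may proceed
    return True

print(withdraw_checked(conn, 42, 500))  # True: balance drops to 200
print(withdraw_checked(conn, 42, 500))  # False: only 200 left
```

Between the SELECT and the UPDATE no other thread can change the row, which is exactly the gap FOR UPDATE closes.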
Comments on some of your thoughts:
- One table is usually sufficient for many users and their accounts. It would contain the "current" balance for each account. I mentioned a "log"; that would be a separate table; it would contain a "history" (as opposed to just the "current" info).
- FOREIGN KEYs are mostly irrelevant in this discussion. They serve 2 purposes: verify that another table has a row that should be there, and implicitly create an INDEX to make that check faster.
- Pipelining? If you are not doing more than a hundred 'transactions' per second, the BEGIN..COMMIT logic is all you need to worry about.
- "Same time" and "simultaneously" are misused terms. It is very unlikely that two users will hit the database at the "same time" -- consider browser delays, network delays, OS delays, etc. Plus the fact that most of those steps force activity to go single-file. The network forces one message to get there before another. Meanwhile, if one of your 'transactions' takes 0.01 second, who cares if the "simultaneous" request has to wait for it to finish? The point is that what I described will force the "wait" if needed to avoid messing up the data.
All that said, there still can be some "at the same time" -- if transactions don't touch the same rows, then the few milliseconds it takes from BEGIN to COMMIT could overlap. Consider this timeline of two transactions that came in almost simultaneously:
BEGIN; -- A
pull money from Alice -- A
BEGIN; -- B
pull money from Bobby -- B
give Alice's money to Alan -- A
give Bobby's money to Betty -- B
COMMIT; -- A
COMMIT; -- B
