
I have an online shop that I built a while ago, and I've been using LOCK TABLES on the product stock tables to make sure stock doesn't go below 0 and isn't updated incorrectly.

Lately there's been a lot of traffic and many concurrent operations (some using LOCK TABLES too) on the database, so I've been asking myself whether there is a faster and safer way to keep the data consistent.

I'm trying to understand transactions and how they work and I'm wondering if the following logic would successfully apply to my checkout process:

1) START TRANSACTION (so that any following query can be rolled back if one of them fails, or if some other condition isn't met, such as stock being lower than needed)

2) insert customer in the 'customers' table

3) insert order info in the 'orders' table

4) similar queries to different tables

5) for each product in the cart:

5.1) UPDATE products SET stock = stock - x WHERE id = [product id] AND stock - x >= 0 (x is however many units the customer wants to buy)

5.2) check affected rows, and if affected rows == 0 then ROLLBACK and exit (not enough stock for the current product, so cancel the order / throw an error)

5.3) some other queries... etc..

6) COMMIT
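For what it's worth, the steps above can be sketched as a self-contained script. This uses Python's built-in sqlite3 module purely as a runnable demo (the table names and columns are simplified guesses at the schema); with MySQL the same logic applies, as long as the tables use InnoDB, since MyISAM ignores transactions:

```python
import sqlite3

def checkout(conn, customer_name, cart):
    """cart maps product id -> quantity. Returns the order id, or None on failure."""
    cur = conn.cursor()
    try:
        # Step 1: Python's sqlite3 starts a transaction implicitly at the
        # first write statement (the equivalent of START TRANSACTION).
        # Step 2: insert the customer (the answer below suggests doing this
        # outside the transaction so a failed order still keeps the customer).
        cur.execute("INSERT INTO customers (name) VALUES (?)", (customer_name,))
        customer_id = cur.lastrowid
        # Step 3: insert the order header.
        cur.execute("INSERT INTO orders (customer_id) VALUES (?)", (customer_id,))
        order_id = cur.lastrowid
        # Step 5: decrement stock, refusing to go below zero.
        for product_id, qty in cart.items():
            cur.execute(
                "UPDATE products SET stock = stock - ? "
                "WHERE id = ? AND stock - ? >= 0",
                (qty, product_id, qty),
            )
            if cur.rowcount == 0:     # Step 5.2: not enough stock -> ROLLBACK
                conn.rollback()
                return None
        conn.commit()                  # Step 6
        return order_id
    except sqlite3.Error:
        conn.rollback()
        raise

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    CREATE TABLE products (id INTEGER PRIMARY KEY, stock INTEGER);
    INSERT INTO products VALUES (1, 5), (2, 0);
""")
print(checkout(conn, "alice", {1: 3}))       # succeeds
print(checkout(conn, "bob", {1: 1, 2: 1}))   # None: product 2 is out of stock
```

Note that when bob's checkout fails on product 2, the rollback also undoes the decrement already applied to product 1 within the same transaction, which is exactly the consistency the workflow is after.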

Does that sound correct?

What I don't get (and don't know whether I should be concerned about in the first place) is what happens to the 'products' table and its stock figures if a concurrent session (another customer) tries to place the same order, or an order containing some of the same products.

Would the second transaction wait for the first one to finish and then use the latest stock figures, or would they both run concurrently and perhaps both fail in some cases?

teomor

1 Answer


Your workflow is correct, although I would take step 2 (saving the customer's details) out of the transaction: you probably want to remember your customer even if the order couldn't be placed.

When a row is updated within a transaction, the row (or, in some unfortunate cases, the whole table) becomes locked in exclusive mode until the end of the transaction. It means that a simultaneous attempt to place an order for the same product would be put on hold when trying to update the stock.

When the first transaction is committed (or rolled back), the lock is released, and the concurrent update then operates on the new value.
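To see this blocking behaviour concretely, here is a small two-session experiment. It uses SQLite rather than MySQL (a SQLite writer locks the whole database, not a single row, so it is a coarser illustration than InnoDB's row locks), with a zero timeout so the blocked writer errors out immediately instead of waiting:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shop.db")
a = sqlite3.connect(path, timeout=0)   # session A: first customer
b = sqlite3.connect(path, timeout=0)   # session B: concurrent customer

a.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, stock INTEGER)")
a.execute("INSERT INTO products VALUES (1, 10)")
a.commit()

# Session A starts a transaction and updates the stock: the lock is acquired.
a.execute("UPDATE products SET stock = stock - 2 WHERE id = 1 AND stock - 2 >= 0")

# Session B now tries to update the same data before A commits. With a real
# timeout it would wait; with timeout=0 it fails at once instead.
try:
    b.execute("UPDATE products SET stock = stock - 3 WHERE id = 1 AND stock - 3 >= 0")
    blocked = False
except sqlite3.OperationalError:   # "database is locked"
    blocked = True

a.commit()   # A's lock is released here

# B retries and now operates on the value A committed (10 - 2 = 8).
b.execute("UPDATE products SET stock = stock - 3 WHERE id = 1 AND stock - 3 >= 0")
b.commit()

final = b.execute("SELECT stock FROM products WHERE id = 1").fetchone()[0]
print(blocked, final)   # True 5
```

With InnoDB the second session would simply wait (up to innodb_lock_wait_timeout) instead of erroring out, then proceed against the committed value, which is the "one after another" behaviour the question asks about.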

Recommended reading: this manual chapter in full, and this page in particular. Yes, it's a lot, but don't worry if you don't understand everything at the beginning -- the rabbit hole is very, very deep.

RandomSeed
  • Right. So, all the necessary rows (from all the necessary tables) will be locked until the end of the transaction, but the transactions themselves will run one after another, just like simple atomic queries? So there is no chance of a deadlock, like, for instance, the 1st transaction locks some product rows and the 2nd transaction locks different product rows, and they both wait for the other to release the lock? – teomor Aug 05 '13 at 12:48
  • Yes, there always is a risk of deadlock *unless resources are always locked in the same order* (and it is typically **not** the case). However, the end result, *if no deadlock occurs*, is the same as if the transactions were sequential (not simultaneous). – RandomSeed Aug 05 '13 at 13:17
  • So I should modify my workflow to update the product rows in a certain order, as simple as ordering by the primary ID, just to make sure? – teomor Aug 05 '13 at 13:57
  • This is an interesting attempt to (try to) avoid deadlocks, really. However, deadlocks sometimes happen at really unexpected places, so your application should be prepared to handle them when they occur. The typical approach is retrying a couple of times before rolling back the transaction and returning the error to the user. – RandomSeed Aug 05 '13 at 14:50
  • Are you trying to tell me that there is no way to avoid deadlocks? – teomor Aug 05 '13 at 16:53
  • Just trying to say that it is extremely difficult to avoid deadlocks with 100% certainty, even in seemingly simple situations. Locking resources always in the same order is the formal solution to avoid deadlocks in theory, but it is very tricky to achieve in practice. – RandomSeed Aug 05 '13 at 22:44
  • As the [MySQL manual puts it](http://dev.mysql.com/doc/refman/5.6/en/innodb-deadlocks.html) (and I concur): "Deadlocks (...) are not dangerous unless they are so frequent that you cannot run certain transactions at all. Normally, you must write your applications so that they are always prepared to re-issue a transaction if it gets rolled back because of a deadlock." – RandomSeed Aug 05 '13 at 22:45
  • Right. I just thought of a scenario where a deadlock may occur even if I lock the product records in the same order.. I think.. Or maybe not.. I'm not sure I get this. How it really works, step by step I mean... – teomor Aug 08 '13 at 10:37
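The two ideas from the comment thread -- touching rows in a fixed order so deadlocks become unlikely, and retrying the transaction when one happens anyway -- can be combined in application code. A sketch, again with SQLite for a self-contained demo (the retry limit is illustrative, and with MySQL you would catch its deadlock error, code 1213, instead of SQLite's OperationalError):

```python
import sqlite3
import time

MAX_RETRIES = 3

def reserve_stock(conn, cart):
    """Decrement stock for each cart item; cart maps product id -> quantity.
    Returns True on success, False if out of stock or retries exhausted."""
    # Always touch rows in ascending primary-key order, so two concurrent
    # checkouts never hold locks in opposite orders.
    items = sorted(cart.items())
    for attempt in range(MAX_RETRIES):
        try:
            cur = conn.cursor()
            for product_id, qty in items:
                cur.execute(
                    "UPDATE products SET stock = stock - ? "
                    "WHERE id = ? AND stock - ? >= 0",
                    (qty, product_id, qty),
                )
                if cur.rowcount == 0:    # not enough stock: abort for good
                    conn.rollback()
                    return False
            conn.commit()
            return True
        except sqlite3.OperationalError:
            # Deadlock victim (or lock timeout): roll back, back off, retry.
            conn.rollback()
            time.sleep(0.05 * (attempt + 1))
    return False

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, stock INTEGER);
    INSERT INTO products VALUES (7, 4), (19, 1), (42, 2);
""")
print(reserve_stock(conn, {42: 1, 7: 2}))   # True (updates id 7 first, then 42)
print(reserve_stock(conn, {19: 5}))         # False (only 1 in stock)
```

As the comments point out, the ordering only reduces the odds; the retry loop is what actually makes the application resilient when a deadlock slips through.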