I have a situation involving a large point-of-sale / reservations system I run. Customers have running balances which are updated by a trigger when a purchase transaction occurs. That is to say, there are triggers on insert, update, and delete on a table `units`, which stores purchased units, and each of these triggers sums the customer's purchases and payments and updates a table called `customer_balance`. Keeping these balances updated without having to recalculate them through joins constantly is essential. The business logic also dictates that multiple inserts or updates to `units` pertaining to a single customer are sometimes sent asynchronously. When this happens, it can lead to a deadlock where two triggers try to act on the same row in `customer_balance`.
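Schematically, the insert-side trigger looks something like the sketch below. The `customer_id`, `amount`, and `balance` columns are stand-ins, since the real schema is more involved (payments are folded into the sum as well), and the update and delete triggers follow the same pattern.

```sql
-- Recompute the affected customer's balance from scratch whenever a
-- row is inserted into units. (Simplified: column names here are
-- illustrative, and the real trigger also sums in payments.)
CREATE TRIGGER units_after_insert
AFTER INSERT ON units
FOR EACH ROW
    UPDATE customer_balance AS cb
    SET cb.balance = (SELECT COALESCE(SUM(u.amount), 0)
                      FROM units AS u
                      WHERE u.customer_id = NEW.customer_id)
    WHERE cb.customer_id = NEW.customer_id;
```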
To prevent this, and since you can't put transactions in a trigger, there are a handful of very quick updates for which I call `LOCK TABLES units WRITE, customer_balance WRITE`, then do the purchase and let the trigger run, and then immediately `UNLOCK TABLES`. This occurs quickly and sporadically enough that it doesn't affect performance even during busy periods.
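For concreteness, the checkout path that needs serializing looks roughly like this (the inserted values are placeholders):

```sql
-- Serialize concurrent purchases for the same customer: both tables
-- are write-locked, the trigger fires inside the INSERT, and the
-- locks are released immediately afterwards.
LOCK TABLES units WRITE, customer_balance WRITE;

INSERT INTO units (customer_id, amount)
VALUES (42, 19.95);  -- illustrative purchase row

UNLOCK TABLES;
```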
However. There are certain quarterly and annual reports that need to run extensive, very long `SELECT`s on these same tables, some taking as long as 30 seconds. When running the reports, I `SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED`. This seems to have mitigated a lot of the other issues caused by the reports taking implicit or row-level locks. But they still block the explicit `LOCK TABLES` calls, presumably because a running `SELECT` holds a shared metadata lock on each table it reads for the duration of the statement, and `LOCK TABLES ... WRITE` has to wait for an exclusive lock regardless of isolation level. So when these reports are run during peak hours, checkout transactions can freeze for the entire time the report is running.
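The report sessions are set up like this; the aggregate shown is only a stand-in for the real 30-second queries:

```sql
-- Per-session setting: dirty reads are acceptable for these reports.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Representative long-running report query (placeholder aggregate).
SELECT u.customer_id,
       SUM(u.amount) AS total_purchased
FROM units AS u
GROUP BY u.customer_id;
```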
My question is: Beyond setting the isolation level to `READ UNCOMMITTED`, is there any way to prevent a long-running `SELECT` from blocking an explicit write lock?