
THE PROBLEM

I'm working with PostgreSQL v10 + golang and have what I believe to be a very common SQL problem:

  • I have a table 'counters' with current_value and max_value integer columns.
  • Once current_value >= max_value, I want to drop the request.
  • I have several Kubernetes pods, each of which may, per API call, increment current_value of the same row in the 'counters' table by 1 (in the worst case). This can be thought of as concurrent updates to the same DB row from distributed hosts.

In my current and naive implementation, multiple UPDATEs to the same row naturally block each other (the isolation level is 'read committed', if that matters). In the worst case, I have about 10+ requests per second that would update the same row. That creates a bottleneck and hurts performance, which I cannot afford.
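For reference, a minimal sketch of what this naive, in-transaction increment looks like in Go with database/sql (function and id column names are illustrative, not from my actual code):

    package counter

    import (
        "context"
        "database/sql"
    )

    // incrementCounter does the guarded increment inside the request's
    // transaction. The WHERE clause enforces the limit atomically: if no
    // row is updated, the counter already reached max_value and the
    // request should be dropped.
    func incrementCounter(ctx context.Context, tx *sql.Tx, counterID int64) (bool, error) {
        res, err := tx.ExecContext(ctx, `
            UPDATE counters
               SET current_value = current_value + 1
             WHERE id = $1
               AND current_value < max_value`, counterID)
        if err != nil {
            return false, err
        }
        n, err := res.RowsAffected()
        if err != nil {
            return false, err
        }
        return n == 1, nil
    }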


POSSIBLE SOLUTION

I thought of several ideas to resolve this, but they all sacrifice either integrity or performance. The only one that preserves both doesn't feel very clean for such a seemingly common problem:

As long as the counter's current_value is a relatively safe distance from max_value (delta > 100), send the update request to a channel that is flushed every second or so by a worker, which aggregates the updates and applies them in a single statement. Otherwise (delta <= 100), do the update within the transaction (and hit the bottleneck, but only for a minority of cases). This paces the update requests until the limit is almost reached, effectively resolving the bottleneck.
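A rough sketch of the worker I have in mind, in Go (all names are illustrative and error handling is omitted): increments are aggregated per counter in memory and flushed once per second with a single UPDATE per counter.

    package counter

    import (
        "context"
        "database/sql"
        "time"
    )

    type increment struct {
        counterID int64
    }

    // batchWorker drains the channel, aggregates deltas per counter and
    // flushes them once per second with one UPDATE per counter. LEAST
    // keeps the aggregated update from overshooting max_value.
    func batchWorker(ctx context.Context, db *sql.DB, incoming <-chan increment) {
        pending := make(map[int64]int) // counterID -> accumulated delta
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()

        flush := func() {
            for id, delta := range pending {
                _, _ = db.ExecContext(ctx, `
                    UPDATE counters
                       SET current_value = LEAST(current_value + $2, max_value)
                     WHERE id = $1`, id, delta)
            }
            pending = make(map[int64]int)
        }

        for {
            select {
            case inc := <-incoming:
                pending[inc.counterID]++
            case <-ticker.C:
                flush()
            case <-ctx.Done():
                flush()
                return
            }
        }
    }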


This would probably work for resolving my problem. However, I can't help but think that there are better ways to address this.

I didn't find a great solution online, and even though my heuristic method would work, it feels unclean and lacks integrity.

Creative solutions are very welcome!


Edit:

Thanks to @laurenz-albe's advice, I shortened the time between the UPDATE, where the row gets locked, and the COMMIT of the transaction. Pushing all UPDATEs to the end of the transaction seems to have done the trick. Now I can process over 100 requests/second and maintain integrity!
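Roughly, the request handler now looks like this (a simplified sketch, not the actual code): all other work happens first, and the row-locking UPDATE is issued immediately before COMMIT, so the lock on the counter row is held only for an instant.

    package counter

    import (
        "context"
        "database/sql"
    )

    func handleRequest(ctx context.Context, db *sql.DB, counterID int64) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op after a successful Commit

        // ... all other reads and writes for the request happen here ...

        // The lock-taking UPDATE goes last, right before COMMIT, so the
        // row lock is released almost immediately.
        if _, err := tx.ExecContext(ctx, `
            UPDATE counters
               SET current_value = current_value + 1
             WHERE id = $1
               AND current_value < max_value`, counterID); err != nil {
            return err
        }
        return tx.Commit()
    }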

Alechko

1 Answer


10 concurrent updates per second is ridiculously little. Just make sure that the transactions are as short as possible, and it won't be a problem.

Your biggest problem will be VACUUM, as lots of updates are the worst possible workload for PostgreSQL. Make sure you create the table with a fillfactor of 70 or so and that current_value is not indexed, so that you get HOT updates.
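For example, something along these lines (a sketch in Go; the column definitions are assumed from the question, the fillfactor setting is the point):

    package counter

    import (
        "context"
        "database/sql"
    )

    // createCountersTable creates the table with a lower fillfactor so each
    // page keeps free space for HOT updates; current_value is deliberately
    // left unindexed.
    func createCountersTable(ctx context.Context, db *sql.DB) error {
        _, err := db.ExecContext(ctx, `
            CREATE TABLE counters (
                id            bigint PRIMARY KEY,
                current_value integer NOT NULL,
                max_value     integer NOT NULL
            ) WITH (fillfactor = 70)`)
        return err
    }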

Laurenz Albe
  • You are right. The problem is the long duration of my transactions, as each transaction lasts for the entire duration of the request. I will try to create an additional transaction, just for this update, and try to think of a way to revert the update in case the main transaction (of the request) fails. – Alechko Apr 08 '19 at 07:32
  • Pushing the UPDATE to the very end of the transaction did the trick. Now I can process 100+ requests per second. Thank you. – Alechko Apr 09 '19 at 16:50