
I have a Python application that accesses a PostgreSQL database containing master data (about 150 MB) in order to search within it; the searches are performed by search threads.

Once a day an update process refreshes the database (it drops the table and loads a new one); during the update the search threads are blocked until it completes.

Today this is implemented with a read-write lock at the application level (a rough sketch of the idea follows). Now we are moving to search processes, each of which runs in its own Docker container.
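For context, the current application-level lock is essentially the classic readers-writer pattern below. This is only a minimal sketch of the idea, not the actual code; the class and method names are illustrative.

```python
import threading

class ReadWriteLock:
    """Many concurrent readers, one exclusive writer (no writer priority)."""

    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()                 # protects the reader count
        self._no_readers = threading.Condition(self._lock)

    def acquire_read(self):
        with self._lock:                              # blocks while a writer holds the lock
            self._readers += 1

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.notify_all()         # wake a waiting writer

    def acquire_write(self):
        self._lock.acquire()                          # keep readers out
        while self._readers > 0:
            self._no_readers.wait()                   # wait for in-flight readers to finish
        # lock stays held until release_write()

    def release_write(self):
        self._lock.release()
```

Search threads wrap each query in `acquire_read`/`release_read` and the daily updater takes `acquire_write`; this only works while everything shares one process, which is exactly what breaks once the searchers move into separate containers.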

What is the best way to achieve this functionality in the new environment? (Distributed locks? Explicit locking at the database level? ...)
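On the "explicit locking at the database level" option: PostgreSQL advisory locks come in shared and exclusive flavours, so they can act as a cross-process read-write lock without any extra infrastructure. A minimal sketch with psycopg2; the lock key, helper names, and connection handling are placeholders.

```python
import contextlib
import psycopg2

MASTER_DATA_LOCK = 42  # arbitrary application-chosen advisory lock key

@contextlib.contextmanager
def shared_lock(conn, key=MASTER_DATA_LOCK):
    """Readers (search containers) take the shared flavour."""
    with conn.cursor() as cur:
        cur.execute("SELECT pg_advisory_lock_shared(%s)", (key,))
    try:
        yield
    finally:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_advisory_unlock_shared(%s)", (key,))

@contextlib.contextmanager
def exclusive_lock(conn, key=MASTER_DATA_LOCK):
    """The daily updater takes the exclusive flavour: it waits until all
    shared holders are gone, and new readers block until it is released."""
    with conn.cursor() as cur:
        cur.execute("SELECT pg_advisory_lock(%s)", (key,))
    try:
        yield
    finally:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_advisory_unlock(%s)", (key,))

# Search container:                Updater:
#   with shared_lock(conn):          with exclusive_lock(conn):
#       run_search(conn)                 drop_and_reload_table(conn)
```

These are session-level advisory locks, so they are released either explicitly or when the holding connection closes; a crashed updater therefore cannot leave the lock stuck forever.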

I have looked at the Redis implementation of distributed locks, but it seems to be exclusive rather than read-write.

sborpo
    Can you use database transactions to avoid the need for a lock – do the entire "update" sequence in a single transaction, so that individual queries either see the "before" or "after" state? – David Maze Aug 31 '20 at 01:35
  • I think it's a good option; I will definitely check this out (pg_dump/pg_restore). I'm not sure whether I can drop tables while other services are connected to the DB, so that might be a limitation. – sborpo Aug 31 '20 at 05:27
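Following up on the transactional-swap idea from the comments: because DDL is transactional in PostgreSQL, the updater can load the new data into a staging table and then drop-and-rename inside a single transaction, so searches either see the old table or the new one, never a half-loaded state. A rough sketch; the table name, column list, and DSN are placeholders.

```python
import psycopg2

def refresh_master_data(dsn, rows):
    """Load new master data into a staging table, then swap it in atomically."""
    conn = psycopg2.connect(dsn)
    try:
        with conn:  # one transaction, committed on successful exit
            with conn.cursor() as cur:
                cur.execute("DROP TABLE IF EXISTS master_data_new")
                cur.execute(
                    "CREATE TABLE master_data_new (LIKE master_data INCLUDING ALL)"
                )
                cur.executemany(
                    "INSERT INTO master_data_new (id, payload) VALUES (%s, %s)",
                    rows,
                )
                # The swap takes an ACCESS EXCLUSIVE lock: it waits for in-flight
                # searches to finish and briefly blocks new ones, then readers
                # see the freshly loaded table after commit.
                cur.execute("DROP TABLE master_data")
                cur.execute("ALTER TABLE master_data_new RENAME TO master_data")
    finally:
        conn.close()
```

Regarding the concern about dropping tables while other services are connected: the DROP/RENAME pair is blocked only by queries or open transactions that currently hold a lock on the table, not by idle connections, so ordinary connected search containers should not prevent the swap (connections sitting idle in an open transaction that touched the table would, though).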
