
I have a PostgreSQL 9.2 database behind a pgBouncer connection pool on a Debian server. In that database, regular users run queries over a few tables, and I have a cron process fetching data and inserting it into one table (through a PL/pgSQL function that performs some validations before the insert). The problem is that when the cron processes generate a heavy load (many inserts), the table gets locked and queries against it stop responding (or take a very long time). Is there any way to set priorities by stored procedure, by database user (the cron job and the user queries use different database users), or by statement type (SELECT having higher priority than INSERT)? If there is no way to define priorities in PostgreSQL, is there any workaround? Inserts can wait, but the user queries should not...
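A minimal sketch of the kind of validating insert function described above (the actual function is not shown in the question; the column names and validation logic here are hypothetical):

    -- Hypothetical sketch only: a PL/pgSQL function that validates before
    -- inserting, roughly like the one the cron process calls.
    CREATE OR REPLACE FUNCTION sp_insert_event(p_name text, p_category text)
    RETURNS integer AS $$
    DECLARE
        v_id integer;
    BEGIN
        -- validation: do not insert the same place twice
        SELECT id INTO v_id FROM places WHERE name = p_name;
        IF FOUND THEN
            RETURN v_id;
        END IF;

        -- a plain INSERT takes only a ROW EXCLUSIVE lock on the table,
        -- which does not conflict with the ACCESS SHARE lock of a SELECT
        INSERT INTO places (name, category, scoring)
        VALUES (p_name, p_category, 0)
        RETURNING id INTO v_id;

        RETURN v_id;
    END;
    $$ LANGUAGE plpgsql;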

The cron process opens and closes a pgBouncer connection for each insert. If I reuse the same connection, the problem is worse (the user queries take even longer).

Thanks in advance, Claudio

  • What isolation level are you using? What is going on inside that plpgsql function? Can you catch a lock in the act, using pg_locks view (https://wiki.postgresql.org/wiki/Lock_Monitoring)? – jjanes Jan 02 '14 at 23:42
  • I did not specify an isolation level or a transaction anywhere; I create the connection in PHP, execute "select * from sp_insert_event(params)", and close the connection. Executing "SELECT current_setting('transaction_isolation')" I get "read committed". Do you think I should change the isolation level? I tried to analyze pg_locks before, but I could not find a detailed query that works on v9.2. Basically, the function inserts data into a "places" table: it checks whether the place already exists, categorizes it, and calculates a "scoring" value. Thanks in advance, Claudio – Lamis IntheSky WithDiamonds Jan 03 '14 at 16:54
  • In read committed mode, simple inserts should never take locks that would block selects. If you can't spot any active locks, then most likely that is not the problem. Perhaps the inserts are just overloading your disks to the point they can't function effectively. – jjanes Jan 03 '14 at 17:34
  • I was able to spot active locks (2 at most) using a query from the page you shared (wiki.postgresql.org/wiki/Lock_Monitoring; a sketch of such a query appears after these comments), but it didn't show me relevant information (just the function containing the select statement). I also see 60+ % CPU usage and 12k IO blocks/sec during the massive inserts. Do you think the problem is that the inserts use too many resources, rather than table locking? Thanks in advance, Claudio – Lamis IntheSky WithDiamonds Jan 03 '14 at 18:11
  • I don't know whether an "IO block" is a PostgreSQL block (usually 8kB) or a disk block (usually 512 bytes). Either way, I think that 12k of them per second is a lot if the writes are largely random rather than sequential; so that is likely the cause. Can your insert code apply some throttling (see the sketch after these comments)? – jjanes Jan 08 '14 at 20:17
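A sketch of the kind of lock-monitoring query referenced in the comments, adapted from the linked wiki page; it assumes the 9.2 pg_stat_activity column names (pid, query) and only shows waits on transaction IDs, i.e. row-level conflicts:

    -- Who is blocked by whom (ungranted lock requests only)
    SELECT bl.pid        AS blocked_pid,
           a.usename     AS blocked_user,
           a.query       AS blocked_statement,
           kl.pid        AS blocking_pid,
           ka.usename    AS blocking_user,
           ka.query      AS blocking_statement
    FROM pg_catalog.pg_locks bl
    JOIN pg_catalog.pg_stat_activity a  ON a.pid = bl.pid
    JOIN pg_catalog.pg_locks kl         ON kl.transactionid = bl.transactionid
                                       AND kl.pid <> bl.pid
    JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid
    WHERE NOT bl.granted;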
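And a rough illustration of the throttling idea from the last comment; this is only a sketch (it reuses the hypothetical function above, the pause length is arbitrary, and the pause could just as well live in the PHP cron code between inserts):

    -- Pause briefly between insert calls so the bulk load does not
    -- saturate the disk that the user queries also need.
    SELECT sp_insert_event('example place', 'restaurant');
    SELECT pg_sleep(0.1);   -- 100 ms pause before the next insert
    SELECT sp_insert_event('another place', 'hotel');
    SELECT pg_sleep(0.1);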

0 Answers