
We have a setup where we are running 6 PgBouncer processes, and our performance benchmarks degrade linearly with time: the longer PgBouncer has been running and the longer the connections to Postgres have existed, the slower the benchmark response times get. We have a multi-tenant, schema-separated database with 2000+ relations. We are configured for transaction-mode pooling right now. Over time we see the memory footprint of each Postgres process climb and climb, and again, this results in poorer performance.

We have tried to be more aggressive about cleaning up idle connections with the following settings (a sketch of the corresponding pgbouncer.ini section follows):

min_pool_size=0

server_idle_timeout=30 (seconds)

server_lifetime=120 (seconds)
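For reference, the relevant section of our pgbouncer.ini looks roughly like this (a sketch showing only the settings above; everything else is left at defaults):

```ini
[pgbouncer]
; transaction-mode pooling, as described above
pool_mode = transaction

; be aggressive about cleaning up idle server connections
min_pool_size = 0
server_idle_timeout = 30
server_lifetime = 120
```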

The problem we see is that, invariably, we still end up with lots of idle connections in Postgres. When we monitor PgBouncer with "show pools", we see the sv_idle count going up and down, so we assume the settings are working within PgBouncer, but this is not translating into fewer idle connections in Postgres. It's as if PgBouncer is not really terminating the server sessions to Postgres.
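For context, this is the kind of check we run on the Postgres side to count idle backends (a minimal sketch using standard pg_stat_activity columns available in 9.6; the totals are what we compare against sv_idle from SHOW POOLS):

```sql
-- Count idle backends per database/user and how long the oldest has been idle.
SELECT datname,
       usename,
       count(*)                  AS idle_connections,
       max(now() - state_change) AS longest_idle
FROM pg_stat_activity
WHERE state = 'idle'
GROUP BY datname, usename
ORDER BY idle_connections DESC;
```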

I have looked around quite a bit for a solution and tested several different options, but have not been able to get anywhere. I read elsewhere about cron jobs to remove idle connections from Postgres, but I really do not want to do that in production and would rather have PgBouncer clear these idle connections out fully.
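For what it's worth, the cron-job approach I read about amounts to periodically running something like the query below against Postgres (a sketch only; the 10-minute cutoff is arbitrary, and pg_terminate_backend requires superuser or the pg_signal_backend role). This is exactly the kind of thing I would rather not do in production:

```sql
-- Forcefully terminate backends that have been idle for more than 10 minutes.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND state_change < now() - interval '10 minutes'
  AND pid <> pg_backend_pid();
```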

We are on Postgres 9.6 and using PgBouncer version 1.15.

Any help is appreciated.

Edit: I should have pointed this out in the original post - we have multiple PgBouncers running and we are using Unix sockets to do this. We are not sure if this is having an impact on Postgres leaving connections behind.

Thanks

user1442883
  • How do you measure the memory consumption? Do you have any extensions installed? Can you provide `EXPLAIN (ANALYZE, BUFFERS)` output for a fast and a slow execution? – Laurenz Albe May 24 '21 at 04:09
  • Thanks for the comment - the point of our issue isn't really about fixing Postgres process handling and memory consumption; it's about removing connections in an IDLE state when we have set aggressive settings on PgBouncer to do that, and Postgres continues to hold on to the connections. To answer your question though - we are doing this at the Linux level, summing up Pss in smaps for each PID. – user1442883 May 24 '21 at 13:30
  • Are you using transaction or session pooling mode? About the memory: have you made sure that the increased memory consumption is process-private memory, not shared memory? What is the `state` of these connections in `pg_stat_activity`? – Laurenz Albe May 25 '21 at 06:01

1 Answer


The issue is resolved.

The application was extremely chatty, and even with server_idle_timeout set as low as 5 seconds, the connections were not getting recycled on the Postgres side.

The issue we had was that server_lifetime was accidentally commented out when we thought it was active; once we changed that, we could clearly see that Postgres connections were getting recycled every 2 minutes (based on our settings).
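A query like the following makes the recycling visible on the Postgres side (just a sketch of the kind of check we ran; with server_lifetime=120, no backend should be much older than about 2 minutes):

```sql
-- Show the age of each backend; with server_lifetime = 120 these should
-- stay under roughly 2 minutes once PgBouncer starts recycling connections.
SELECT pid,
       datname,
       usename,
       state,
       now() - backend_start AS connection_age
FROM pg_stat_activity
ORDER BY connection_age DESC;
```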

The increased memory of each connection over time, especially for long-lived connections, was taking into account only private memory, not shared memory. What we observed was that the longer a connection stayed alive, the more memory it consumed. We tried setting things like DISCARD ALL for server_reset_query, and it had no impact on memory consumption. Based on my research online, we were not the only ones to face this challenge with pooled connections.

Thanks for the comments and the help. Our solution in the end was to leverage server_lifetime in PgBouncer to keep connections to Postgres from becoming long-lived.

-Mayan

user1442883
  • You should Accept the above answer. That helps future questioners having the same issue identify potential solutions. – Belayer May 27 '21 at 18:24