
Since I'm using CONN_MAX_AGE: 300 on my Django servers, requests fail with errors because PostgreSQL exceeds its max_connections limit (100 by default).
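
For reference, the relevant part of my settings looks roughly like this (database name and credentials are placeholders):

  # settings.py -- only CONN_MAX_AGE matters here
  DATABASES = {
      'default': {
          'ENGINE': 'django.db.backends.postgresql_psycopg2',
          'NAME': 'mysite',
          'CONN_MAX_AGE': 300,  # reuse each connection for up to 5 minutes
      }
  }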

What is the best strategy to solve this? I've tried pgpool2, but that didn't solve the problem at all: the connections were simply queued by pgpool2, leaving the sites waiting forever and eventually causing gateway timeouts.

I expected that using pgpool would reduce the number of idle connections going to PostgreSQL, not cause the same issue again.

These are the settings I've used:

pgpool2:

  num_init_children = 32  # are so many workers needed?
  max_pool = 10           # default is 4

postgres:

  max_connections = 400  # upgraded from default 100

uWSGI/Django:

  • every worker runs 20 threads.
  • there are 10 workers in total across all sites combined.

The VPS is an 8-core Linode @ 2.27GHz with 2GB RAM.
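
If I understand it correctly, the numbers above are the core of the problem: with CONN_MAX_AGE set, Django holds one persistent connection per thread, so 10 workers × 20 threads can keep up to 200 connections open at once, twice the default max_connections of 100. pgpool2 makes this worse rather than better: num_init_children = 32 means only 32 clients are served concurrently (each child handles one client at a time), so the remaining threads queue until the gateway times out, while max_pool = 10 still lets pgpool open up to 32 × 10 = 320 connections to PostgreSQL.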

vdboor
  • Did you ever solve this @vdboor? I am facing a very similar problem. – Hans Kristian Mar 19 '15 at 14:56
  • Nope, I've gone for pgbouncer instead and you can also use django-postgrespool. PgBouncer requires users to be added in a `userlist.txt` file, but that is no longer an issue since I let deploy scripts perform that for each site. – vdboor Mar 25 '15 at 17:40
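
For reference, a minimal pgbouncer setup along the lines of that comment might look like this (database name, paths, and pool sizes are illustrative, not taken from the setup above):

  ; /etc/pgbouncer/pgbouncer.ini
  [databases]
  mysite = host=127.0.0.1 port=5432 dbname=mysite

  [pgbouncer]
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction   ; hand out a server connection per transaction
  max_client_conn = 400     ; all Django threads may stay connected...
  default_pool_size = 20    ; ...while PostgreSQL only sees this many

  ; /etc/pgbouncer/userlist.txt -- one line per user
  "mysite" "md5<hash>"

Django then points its database HOST/PORT at 127.0.0.1:6432 instead of at PostgreSQL directly. Note that transaction pooling assumes the application does not rely on session state (SET commands, advisory locks, prepared statements).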

1 Answer


I know this was a year ago, but were you using Gunicorn? This pull request explains that asynchronous workers will not reuse connections; the issue you were/are having seemed to be solved by switching to sync workers.
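
For illustration, forcing synchronous workers is just a matter of the worker class (the module path is a placeholder):

  gunicorn myproject.wsgi:application --workers 4 --worker-class sync

sync is Gunicorn's default worker class, so this only bites when an async class such as gevent or eventlet has been configured explicitly.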

grokpot
  • Thanks for pointing to that issue. I've been on Gunicorn for a while, but switched to uWSGI. IIRC I was already using uWSGI back then. – vdboor Sep 28 '15 at 14:02