
I have a Flask application that connects to a Postgres database, with a connection pooler (pgbouncer) between the Flask server and the database. My Flask application is multithreaded, so I cannot share a single connection between the application and the database; instead I open a new connection for each request. The problem is that when I send a large number of simultaneous requests (300), the response time becomes too long. While debugging my code I saw that it is psycopg2's connection time that increases.

Here is my pgbouncer configuration:

- DB_HOST=psql
- POOL_MODE=transaction
- MAX_CLIENT_CONN=10000
- DEFAULT_POOL_SIZE=2
- MIN_POOL_SIZE=2
- ADMIN_USERS=postgres
- LOG_CONNECTIONS=0
- LOG_DISCONNECTIONS=0
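For reference, these environment variables presumably correspond to the following pgbouncer.ini settings (the exact mapping depends on the Docker image in use). In transaction mode, default_pool_size is the number of server connections per user/database pair, so a value of 2 means at most two backend connections serve all 300 concurrent clients, which is a plausible cause of queuing:

```ini
[pgbouncer]
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 2    ; only 2 server connections per (user, database) pair
min_pool_size = 2
```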

Here is the connection function in my Flask application, which is called on every request to one of its endpoints.

def getConn(self):
    # Opens a brand-new connection (through pgbouncer) on every request.
    conn = psycopg2.connect(user=dbUser,
                            password=dbPassword,
                            host=dbHost,
                            port=dbPort,
                            database=dbDatabase)
    conn.autocommit = True
    cursor = conn.cursor()
    cursor.execute("SET search_path TO ecosystem")
    return conn, cursor
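To confirm where the time goes, the connect step can be timed separately from query execution. This is a minimal sketch: `timed` and `fake_connect` are illustrative names, and the stub stands in for `psycopg2.connect` so the snippet runs anywhere; in the real application you would pass `psycopg2.connect` with its keyword arguments instead.

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in for psycopg2.connect; sleeps to simulate connection setup cost.
def fake_connect(**kwargs):
    time.sleep(0.01)
    return object()

# In the real app: conn, dt = timed(psycopg2.connect, user=dbUser, host=dbHost, ...)
conn, dt = timed(fake_connect, host="psql")
print(f"connect took {dt:.3f}s")
```

Logging `dt` per request makes it easy to see whether connection setup, rather than query execution, dominates as concurrency rises.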
    You could try to keep a fixed number of connections open (eg 50 min/max). Opening the connections is expensive, keeping them open is not much effort. – Maurice Meyer Jun 28 '21 at 12:13
  • 1
    What you will need to do here is carefully check the logs on pgbouncer, to see exactly why there is a delay. You will want to log more details of course. There simply isn't enough information here for anyone to tell what the problem is. – Richard Huxton Jun 28 '21 at 12:26
  • If the connection time is increasing over time, then it may be that the pool is fully utilised, so endpoints have to wait for a connection to become free. Increasing pool size _might_ help, but you should also check that endpoints are not too slow. – snakecharmerb Jun 28 '21 at 12:38
  • Thank you for your answers. I can't keep a shared connection open because multiple threads can't use the same connection at the same time; it gives me a concurrency error. I also noticed that the time for each query gets longer and longer as the number of requests grows. For example, if I send 1000 requests, for request 1 the connection to the db takes 0.02 seconds and all the queries of my request take 0.08 seconds, which is perfect. But for the 500th request the connection time is 2 seconds and the queries take 2.5 seconds... – mmmmmmm Jun 28 '21 at 14:38
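The fixed-pool approach suggested in the comments can be sketched with a thread-safe queue of pre-opened connections; psycopg2 also ships `psycopg2.pool.ThreadedConnectionPool` for exactly this. In this sketch, `connect_factory` and `fake_connect` are stand-ins for `psycopg2.connect`: each thread borrows a connection and returns it, so no two threads ever use the same connection at once, yet connections are opened only once.

```python
import queue

class SimplePool:
    """Keep a fixed number of connections open and lend them out per request."""

    def __init__(self, connect_factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect_factory())

    def get(self, timeout=5):
        # Blocks until a connection is free, so threads never share one.
        return self._pool.get(timeout=timeout)

    def put(self, conn):
        self._pool.put(conn)

# Stand-in factory; in the real app this would be e.g.
#   lambda: psycopg2.connect(user=dbUser, password=dbPassword, host=dbHost, ...)
counter = {"opened": 0}
def fake_connect():
    counter["opened"] += 1
    return object()

pool = SimplePool(fake_connect, size=2)
c1 = pool.get()
c2 = pool.get()
pool.put(c1)          # return the connection when the request is done
c3 = pool.get()       # reuses c1; no new connection is opened
print(counter["opened"])
```

Because `queue.Queue` handles the locking, the pool is safe to share across Flask's worker threads; the pool size should not exceed what pgbouncer's `default_pool_size` (times the number of user/database pairs) can actually serve.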

0 Answers