I have a Flask application that connects to a PostgreSQL database, with a connection pooler (PgBouncer) between the Flask server and the database. The Flask application is multithreaded, so I cannot share a single connection between the database and the application; instead I create a new connection for each request. The problem is that when I send a large number of simultaneous requests (around 300), the response time becomes very long. While debugging my code I saw that it is the psycopg2 connection time that increases.
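To give an idea of how I observed this, here is roughly the kind of timing I added around the connection call while debugging (a simplified sketch, not my exact code; the connection parameters are the same ones used in getConn below):

import time
import psycopg2

def timed_connect(user, password, host, port, database):
    # Measure only the time spent establishing the connection through PgBouncer
    start = time.monotonic()
    conn = psycopg2.connect(user=user, password=password,
                            host=host, port=port, database=database)
    print(f"psycopg2.connect took {time.monotonic() - start:.3f}s")
    return conn

Under ~300 simultaneous requests, this is the measurement that grows.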
Here is my pgbouncer configuration:
- DB_HOST=psql
- POOL_MODE=transaction
- MAX_CLIENT_CONN=10000
- DEFAULT_POOL_SIZE=2
- MIN_POOL_SIZE=2
- ADMIN_USERS=postgres
- LOG_CONNECTIONS=0
- LOG_DISCONNECTIONS=0
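Assuming these environment variables map one-to-one onto pgbouncer.ini settings, the effective configuration should look roughly like this (a sketch; the port 5432 is just the default, not something I set explicitly):

[databases]
; all databases are routed to the host given by DB_HOST
* = host=psql port=5432

[pgbouncer]
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 2
min_pool_size = 2
admin_users = postgres
log_connections = 0
log_disconnections = 0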
Here is the connection function in my Flask application; it is called on every call to an endpoint of the application.
def getConn(self):
    # Open a new connection for this request (goes through PgBouncer)
    conn = psycopg2.connect(user=dbUser,
                            password=dbPassword,
                            host=dbHost,
                            port=dbPort,
                            database=dbDatabase)
    conn.autocommit = True
    # All queries in this request run against the "ecosystem" schema
    cursor = conn.cursor()
    cursor.execute("SET search_path TO ecosystem")
    return conn, cursor
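For context, this is roughly how the method is used from an endpoint (the class name, route and query are made up for illustration; the point is that a new connection is opened for every request and closed at the end):

from flask import Flask, jsonify

app = Flask(__name__)
db = Database()  # hypothetical name for the class that contains the getConn() method above

@app.route("/items")
def list_items():
    # A fresh connection is opened for every request handled by this endpoint
    conn, cursor = db.getConn()
    try:
        cursor.execute("SELECT id, name FROM items")  # illustrative query
        return jsonify(cursor.fetchall())
    finally:
        cursor.close()
        conn.close()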