
I have a Django app running on Kubernetes with PostgreSQL / PgBouncer.

I encounter the following error on some requests when I run 2 Django replicas (this doesn't seem to happen if I only set 1 replica)

I only have one database

Django side

django.db.utils.OperationalError: ERROR: no more connections allowed (max_client_conn)

PGBouncer side

1 WARNING C-0x55d76aeb6330: (nodb)/(nouser)@xx.xx.xx.xx:37002 pooler error: no more connections allowed (max_client_conn)

I have the following settings

postgres


containers:
  - name: acapela-cloud-database
    image: postgres:12.0
    imagePullPolicy: Always
    ports:
      - containerPort: 5432
    args:
      - postgres
      - -c
      - max_connections=400

 SHOW max_connections;
 max_connections
-----------------
 400
(1 row)

pgbouncer yaml file (kubernetes)


apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgbouncer
  labels:
    app: pgbouncer
spec:
  selector:
    matchLabels:
      app: pgbouncer
  template:
    metadata:
      labels:
        app: pgbouncer
    spec:
      containers:
        - name: pgbouncer
          image: edoburu/pgbouncer:1.9.0
          imagePullPolicy: Always
          ports:
            - containerPort: 5432

          env:
            - name: DB_HOST
              value: "database"
            - name: DB_PORT
              value: "5432"
            - name: DB_USER
              value: ""
            - name: DB_PASSWORD
              value: ""
            - name: DB_DATABASE
              value: ""
            - name: PGBOUNCER_LISTEN_PORT
              value: "6432"
            - name: PGBOUNCER_MAX_CLIENT_CONN
              value: "800"
            - name: PGBOUNCER_DEFAULT_POOL_SIZE
              value: "400"            
            - name: POOL_MODE
              value: transaction
            - name: SERVER_RESET_QUERY
              value: DISCARD ALL
          livenessProbe:
            tcpSocket:
              port: 5432
            periodSeconds: 60
          lifecycle:
            preStop:
              exec:
                # Allow existing clients/queries to complete within 120 seconds
                command: ["/bin/sh", "-c", "killall -INT pgbouncer && sleep 120"]
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ['all']

---

apiVersion: v1
kind: Service
metadata:
  name: pgbouncer
  labels:
    app: pgbouncer
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: pgbouncer
  selector:
    app: pgbouncer
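
For completeness, the Django side points at the pgbouncer Service above; below is a minimal sketch of the DATABASES setting (not my exact file -- the name/credentials are placeholders and the CONN_MAX_AGE choice is an assumption):

# settings.py (sketch -- NAME/USER/PASSWORD are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "mypassword",
        "HOST": "pgbouncer",   # the ClusterIP Service defined above
        "PORT": "5432",        # the Service port defined above
        # With PgBouncer doing transaction pooling, let it own the pooling:
        # CONN_MAX_AGE = 0 closes Django's connection at the end of each request,
        # while a persistent connection (CONN_MAX_AGE > 0) stays counted against
        # max_client_conn for its whole lifetime.
        "CONN_MAX_AGE": 0,
        # Server-side cursors don't survive transaction-level pooling.
        "DISABLE_SERVER_SIDE_CURSORS": True,
    }
}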


I suppose this comes from the PgBouncer settings, but I cannot figure out what to set.

Can someone tell me what values would work?

Once this is fixed, and with this architecture (Django + PgBouncer + PostgreSQL), could I launch 5/10/15... replicas and would the database/bouncer handle it?
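
The back-of-the-envelope check I have in mind is sketched below (the replica/worker/thread counts are assumptions for illustration, not my real values; the three limits are the ones from the manifests above):

# Rough connection budget
django_replicas = 10               # e.g. the 5/10/15 replicas asked about
workers_per_replica = 4            # gunicorn/uwsgi workers per pod (assumption)
threads_per_worker = 2             # threads per worker (assumption)

pgbouncer_max_client_conn = 800    # PGBOUNCER_MAX_CLIENT_CONN
pgbouncer_default_pool_size = 400  # PGBOUNCER_DEFAULT_POOL_SIZE
postgres_max_connections = 400     # max_connections

# Worst case: every worker thread keeps one client connection to PgBouncer open.
client_connections = django_replicas * workers_per_replica * threads_per_worker
print(client_connections <= pgbouncer_max_client_conn)          # must be True

# PgBouncer opens at most default_pool_size server connections per
# (database, user) pair, and that must fit under the Postgres limit.
print(pgbouncer_default_pool_size <= postgres_max_connections)  # must be True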

  • What is that block for 'pgbouncer' label? It isn't in the format of a pgbouncer config file. So, what is it? – jjanes Mar 09 '23 at 18:16
  • it is the kubernetes yaml file for pgbouncer. I added the full code. – wzxecr Mar 10 '23 at 06:50
  • Have a look at `pg_stat_activity` to check which service is exhausting all your connections. – jlandercy Mar 10 '23 at 07:11
  • I tried to monitor pg_stat_activity with a script that runs "SELECT count(*) FROM pg_stat_activity WHERE state = 'active';" every second, but it doesn't show high values. Will check other fields (a fuller breakdown query is sketched below). – wzxecr Mar 14 '23 at 15:33
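
Following jlandercy's suggestion, a fuller breakdown of `pg_stat_activity` can be pulled through Django's own connection, as sketched below (which columns are worth grouping by is an assumption). Keep in mind that `pg_stat_activity` only shows the PgBouncer-to-Postgres side; the max_client_conn limit is hit on the Django-to-PgBouncer side, which PgBouncer's admin console (`SHOW CLIENTS;`) reports.

from django.db import connection

def connection_breakdown():
    # Group the Postgres sessions so we can see which database/user/application
    # is holding them, instead of just counting the active ones.
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT datname, usename, application_name, state, count(*) "
            "FROM pg_stat_activity "
            "GROUP BY datname, usename, application_name, state "
            "ORDER BY count(*) DESC"
        )
        return cursor.fetchall()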

1 Answer


I found that I did indeed have a lot of queries. This was caused by nested serializers.

I was able to figure this out using this decorator, which logs the number of queries (and the queries themselves) for a view:

from django.db import connection, reset_queries

def database_debug(func):
    # Logs every SQL query a view method runs. Note: connection.queries is
    # only populated when DEBUG = True.
    def inner_func(self, request, *args, **kwargs):
        reset_queries()
        results = func(self, request, *args, **kwargs)
        query_info = connection.queries
        print('function_name: {}'.format(func.__name__))
        print('query_count: {}'.format(len(query_info)))
        queries = ['{}\n'.format(query['sql']) for query in query_info]
        print('queries: \n{}'.format(''.join(queries)))
        return results
    return inner_func

# applied to the view method inside the class-based view:
@database_debug
def get(self, request, format=None):
    ...

And in case it helps: I reduced the number of queries from more than 800 down to 4 with this change:

serializers.py

# replace
objects = ObjectSerializer(many=True, read_only=True)

# with
objects = serializers.SerializerMethodField('get_objects')

def get_objects(self, obj):
    # prefetch the related rows so serializing them doesn't issue one query per object
    query_set = Object.objects.all().prefetch_related('type', 'size', 'price')
    serializer = ObjectSerializer(query_set, many=True, read_only=True)
    return serializer.data
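
For the record, the same reduction can usually be achieved while keeping the nested serializer, by prefetching on the view's queryset instead. A sketch, where the view, model and relation names are made up for illustration:

# views.py (sketch -- Parent, ParentSerializer and the "items" relation are illustrative names)
from rest_framework import generics

from .models import Parent                 # assumed parent model
from .serializers import ParentSerializer  # the serializer declaring the nested ObjectSerializer

class ParentListView(generics.ListAPIView):
    serializer_class = ParentSerializer

    def get_queryset(self):
        # prefetch_related batches each relation into one extra query,
        # so the nested serializer no longer runs one query per row.
        return Parent.objects.prefetch_related(
            "items", "items__type", "items__size", "items__price"
        )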