I have an API to which I send requests, and this API connects to MongoDB through MongoClient in PyMongo. Here is a diagram of the system I deployed in GKE:

[System diagram: requests → Nginx Ingress → API Pod (PyMongo MongoClient) → mongodb-service → MongoDB Pods]

Most of the computation needed for each request happens in MongoDB, so I want the MongoDB pods to be autoscaled based on CPU usage. I therefore have an HPA for the MongoDB Deployment, with minReplicas: 1.

When I send many requests to the Nginx Ingress, I see that my only MongoDB pod reaches 100% CPU usage, so the HPA creates a second pod. But this second pod isn't used.

Looking at the logs of my first MongoDB pod, I see that all requests have this: "remote":"${The_endpoint_of_my_API_Pod}:${PORT}", and PORT only takes 12 different values (I counted them; they started repeating, so I assume there are no others).

So my guess is that the second pod isn't used because of sticky connections, as suggested in this answer https://stackoverflow.com/a/73028316/19501779 to one of my previous questions, where there is more detail on my MongoDB deployment.
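To illustrate why I suspect sticky connections (a minimal sketch, not my actual code; the host name mongodb-service is an assumption): the Service picks a backend pod only when a TCP connection is opened, and each connection keeps its own source port, so a fixed pool of connections would show up as a fixed, repeating set of ports in the mongod logs:

from socket import create_connection

# Open 12 separate TCP connections to the Service, like a MongoClient
# connection pool would. Each connection gets its own source port, which
# is what mongod logs as the "remote" port.
conns = [create_connection(("mongodb-service", 27017)) for _ in range(12)]
for conn in conns:
    print("source port:", conn.getsockname()[1])  # 12 distinct, repeating values
for conn in conns:
    conn.close()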

I have 2 questions:

  • Is the second pod in fact unused because of sticky connections between my API Pod and my first MongoDB Pod?
  • If this is the case, how can I overcome this issue to make the autoscaling effective?

Thanks, and if you need more info please ask me.

EDIT

Here is my MongoDB configuration:

Its Dockerfile, from which I build my MongoDB image with the data from the VM where my original MongoDB runs. A single deployment of this image works in k8s.

# Start from the official MongoDB image
FROM mongo:latest
# Default MongoDB port
EXPOSE 27017
# Copy the database files taken from the VM into the image
COPY /mdb/ /data/db

The deployment.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: $My_MongoDB_image
        ports:
        - containerPort: 27017
        resources:
          requests:
            memory: "1000Mi"
            cpu: "1000m"
      imagePullSecrets:   #for pulling from my Docker Hub
      - name: regcred

and the service.yml and hpa.yml:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    app: mongodb
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: mongodb-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mongodb
  minReplicas: 1
  maxReplicas: 70
  targetCPUUtilizationPercentage: 85

And I access this service from my API Pod with PyMongo:

from pymongo import MongoClient

def get_db(database: str):
    # NOTE: a new MongoClient (and its own connection pool) is created on every call
    client = MongoClient(host="$Cluster_IP_of_{mongodb-service}",  # ClusterIP of the Service
                         port=27017,
                         username="...",
                         password="...",
                         authSource="admin")
    return client.get_database(database)
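
If sticky pooled connections are indeed the problem, one direction I can think of (just a sketch, untested; the host name mongodb-service and the 30s value are assumptions, maxIdleTimeMS is a standard PyMongo pool option) would be to share one client and cap how long its pooled connections can sit idle, so the connections that replace them go through the Service again and can land on newly autoscaled pods:

from pymongo import MongoClient

# Sketch: one shared client; idle pooled connections are closed after 30s,
# so replacement connections are re-balanced by the Service and can reach
# MongoDB pods created by the HPA.
client = MongoClient(host="mongodb-service",  # assumption: Service DNS name
                     port=27017,
                     username="...",
                     password="...",
                     authSource="admin",
                     maxIdleTimeMS=30_000)

def get_db(database: str):
    return client.get_database(database)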

Moreover, when a second MongoDB Pod is created by the autoscaler, its endpoint does appear in my mongodb-service:

  • the HPA created a second Pod [screenshot]

  • the new Pod's endpoint appears in the mongodb-service [screenshot]

  • Can you share details of the MongoDB Service? – boredabdel Aug 18 '22 at 09:12
  • I've added details in the EDIT @boredabdel, and as I showed here https://stackoverflow.com/questions/73021063/k8s-service-doesnt-use-autoscaled-mongodb-pods, when I delete the initial MongoDB Pod, the others start being used – JujuPat Aug 18 '22 at 10:09
  • This has more to do with the client than with MongoDB or k8s. When you autoscale your MongoDB deployment, the Service is updated; you can see that the Endpoints object gets updated with the list of IP:Port of the new MongoDB pods. The client has to know how to connect/reconnect to these new pods. I think it's some sort of multiplexing config on the client side, but I'm not that familiar with the MongoDB Python libraries – boredabdel Aug 18 '22 at 12:05
