7

I have a Deployment with a defined number of replicas. I use a readiness probe to communicate whether my Pod is ready or not ready to handle new connections; my Pods toggle between the ready and not-ready states during their lifetime.

I want Kubernetes to scale the Deployment up or down to ensure that there is always the desired number of Pods in a ready state.

Example:

  • If replicas is 4 and there are 4 Pods in the ready state, then Kubernetes should keep the current replica count.
  • If replicas is 4 and there are 2 ready Pods and 2 not-ready Pods, then Kubernetes should add 2 more Pods.

How do I make Kubernetes scale my Deployment based on the "ready"/"not ready" status of my Pods?
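For reference, here is a minimal sketch of the kind of readiness probe setup I mean (the name, image, path, and port are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                       # hypothetical name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example/my-app:latest # placeholder image
        ports:
        - containerPort: 8080
        # The kubelet polls this endpoint; while it fails, the Pod
        # is marked not ready and removed from Service endpoints,
        # but the Deployment's replica count is unaffected.
        readinessProbe:
          httpGet:
            path: /healthz           # hypothetical health endpoint
            port: 8080
          periodSeconds: 5
          failureThreshold: 3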

– asked by orirab, edited by Gajus

3 Answers

0

I don't think this is possible. If a Pod is not ready, Kubernetes will not make it ready, because readiness is determined by your application. Even if Kubernetes created a new Pod, there would be no guarantee that it would become ready. So you have to resolve the reasons behind the not-ready status yourself. The only thing Kubernetes does is keep not-ready Pods from receiving traffic, to avoid request failures.
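To illustrate that last point: a Service only routes traffic to Pods that are currently ready. A minimal sketch (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  selector:
    app: my-app           # matches the Pods' label
  ports:
  - port: 80
    targetPort: 8080
  # The endpoints behind this Service include only Ready Pods;
  # a Pod that fails its readiness probe is removed from the
  # endpoint list until it recovers. The total Pod count is unchanged.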

  • Kubernetes will make it ready as soon as the health check says so, typically when the monitored endpoint returns the correct status code. – Roy Feb 04 '21 at 13:28
-1

Ensuring you always have 4 pods running can be done by specifying the replicas property in your deployment definition:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 4  # here we define a requirement for 4 replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Kubernetes will ensure that if any pods crash, replacement pods will be created so that a total of 4 are always available.
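Note, though, that a Deployment's status distinguishes the total Pod count from the ready Pod count. This is roughly what `kubectl get deployment nginx-deployment -o yaml` would report in the asker's scenario (illustrative values):

status:
  replicas: 4           # total Pods the Deployment manages
  readyReplicas: 2      # Pods currently passing their readiness probe
  availableReplicas: 2  # ready Pods that have stayed ready long enough
  updatedReplicas: 4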

– pdbrito
  • This is not how it works. Kubernetes only ensures that there are 4 Pods in total in this case; it does not ensure that all 4 are always ready. – svenwltr Feb 07 '19 at 09:10
  • But my pods don't crash, they're simply `not ready` for a bit and then recover. I also don't want Kubernetes to kill them, just to create new ones. – orirab Feb 07 '19 at 10:57
-3

Pods cannot be scheduled onto unhealthy nodes. The API server and scheduler will only place Pods on nodes that are healthy, schedulable, and have enough spare capacity to satisfy the Pods' resource requests.

Moreover, what you describe is Kubernetes' self-healing ("auto-heal") concept, which in basic terms is taken care of for you.
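For completeness, one piece of that self-healing is the liveness probe: if it fails repeatedly, the kubelet restarts the container. A minimal sketch (name, image, path, and port are placeholders), though note this restarts existing containers rather than creating extra ready Pods:

apiVersion: v1
kind: Pod
metadata:
  name: self-heal-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: example/app:latest   # placeholder image
    # If this probe fails repeatedly, the kubelet kills and restarts
    # the container (per restartPolicy), which is the "auto-heal"
    # behaviour this answer refers to.
    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10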

– Raunak Jhawar