I have an application that I deploy on Kubernetes.

This application has 5 replicas and I'm doing a rolling update on each deployment.

This application has a graceful shutdown which can take tens of minutes (it has to wait for running tasks to finish).
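
For context, the relevant parts of my Deployment look roughly like this (the name, image, and exact grace period are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # illustrative name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate           # the default strategy for Deployments
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # long grace period so in-flight tasks can finish (tens of minutes)
      terminationGracePeriodSeconds: 3600
      containers:
        - name: worker
          image: my-app:latest    # illustrative image
```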

My problem is that during updates, all the old-version pods are stuck in "Terminating" status while all the new pods are created.

During an update I end up running 8 containers, which is something I'm trying to avoid.

The behaviour I'm trying to get is that new pods are only created after the old-version pods have terminated successfully, so that at no point do I exceed the number of replicas I set.

I wonder if there is a way to achieve such behaviour.

Akshay Gopani
  • If the application keeps processing for half an hour after Kubernetes asks it to stop, when should Kubernetes decide that the application has simply ignored the shutdown signal and forcibly terminate it? Can you fix the shutdown sequence so it stops in tens of _seconds_, maybe at the cost of some jobs getting retried? – David Maze Dec 12 '21 at 12:34

2 Answers

Set maxSurge to 0, so the rollout never creates a replacement pod before an old one has been taken down; that way the total number of pods (both terminating and creating) should not exceed 5.
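
A sketch of the relevant strategy block, assuming the Deployment from the question (maxSurge and maxUnavailable are the standard Deployment rolling-update fields):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0        # never run extra pods beyond the replica count
      maxUnavailable: 1  # take down one old pod before starting its replacement
```

Note that maxUnavailable must be at least 1 when maxSurge is 0; Kubernetes rejects a strategy where both are zero.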

h8n
  • Nope, it's not working. The Kubernetes rolling update does not wait for a pod's termination to complete before launching a new pod. – Akshay Gopani Dec 12 '21 at 16:36

I think the best way to achieve this goal is to use StatefulSets (see the sketch after this list); some of the key features of StatefulSets are:

  • Ordered, automated rolling updates.
  • Ordered, graceful deployment and scaling.
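
A minimal sketch, assuming the 5-replica setup from the question (the name, image, and grace period are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app                    # illustrative name
spec:
  replicas: 5
  serviceName: my-app             # a headless Service is required for a StatefulSet
  selector:
    matchLabels:
      app: my-app
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # allow the long graceful shutdown described in the question
      terminationGracePeriodSeconds: 3600
      containers:
        - name: worker
          image: my-app:latest    # illustrative image
```

With this update strategy the controller deletes and recreates pods one at a time, from the highest ordinal down, waiting for each old pod to terminate fully and for its replacement to become Ready before moving on, so the pod count never exceeds the replica count.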
Kareem Yasser