
I deploy my app into a Kubernetes cluster using Helm. The app works with a database, so I have to run DB migrations before installing a new version of the app. I run the migrations in a Kubernetes Job, triggered by a Helm "pre-upgrade" hook.
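For reference, a minimal sketch of what my hook Job looks like (the names, image, and command here are placeholders, not my real values):

```yaml
# templates/migration-job.yaml -- minimal sketch; names/image are placeholders
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: my-app-migrations:latest   # placeholder image
          command: ["./run-migrations.sh"]  # placeholder command
```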

The problem is that when the migration job starts, pods from the old version are still working with the database. They can lock objects in the database, and because of that the migration job may fail.

So I want some way to automatically stop all of the app's pods in the cluster before the migration job starts. Is there any way to do that using Kubernetes + Helm? I will appreciate any answers.

Franzis

1 Answer


There are two ways I can see to do this.

The first option is to scale the deployment down to zero replicas before the upgrade, from your CI pipeline (for example, via Jenkins, CircleCI, GitLab CI, etc.):

```sh
kubectl scale deployment {deployment-name} --replicas=0 -n {namespace}
helm install .....
```

The second option (which might be easier depending on how you want to maintain this going forward) is to add an additional pre-upgrade hook that runs before the migration hook (in Helm, hooks with a lower `helm.sh/hook-weight` execute first) and have that hook do the `kubectl scale` down.
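As a rough sketch of that hook (the ServiceAccount, names, and namespace are placeholders you'd adapt; the ServiceAccount needs RBAC permission to scale deployments):

```yaml
# templates/pre-upgrade-scale-down.yaml -- sketch only; assumes a ServiceAccount
# "scaler" bound to a Role that allows patching deployments/scale
apiVersion: batch/v1
kind: Job
metadata:
  name: scale-down-app
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "-5"  # lower weight than the migration hook, so it runs first
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      serviceAccountName: scaler       # placeholder; needs RBAC to scale deployments
      restartPolicy: Never
      containers:
        - name: scale-down
          image: bitnami/kubectl:latest  # any image that ships kubectl
          command:
            - /bin/sh
            - -c
            - kubectl scale deployment my-app --replicas=0 -n my-namespace  # placeholder names
```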

Blender Fox
  • Thanks for your answer! I thought in the same direction, but I have an additional question: does the `kubectl scale --replicas=0` command wait until all the pods have really stopped, and only after that pass control to the next hook? Or should I check the pods' status some other way? – Franzis Jan 31 '22 at 11:29
  • 1
    Yes, it will send the terminate signal to all the replicas then return. Personally, I do this, then wait for the pods in `Terminating` status to disappear before continuing, but that's just me and my pedantic self. – Blender Fox Jan 31 '22 at 11:31
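Following up on that last comment, one way to block until the old pods are actually gone is `kubectl wait` with `--for=delete`. A sketch, assuming the pods carry an `app=my-app` label (a placeholder):

```sh
kubectl scale deployment my-app --replicas=0 -n my-namespace
# Block until the terminating pods have actually disappeared (up to 2 minutes)
kubectl wait --for=delete pod -l app=my-app -n my-namespace --timeout=120s
```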