
I'm using the Kubernetes Continuous Deploy plugin to deploy and upgrade a Deployment on my Kubernetes cluster. I'm using a pipeline, and this is the Jenkinsfile:

pipeline {
    environment {
        JOB_NAME = "${JOB_NAME}".replace("-deploy", "")
        REGISTRY = "my-docker-registry"
    }
    agent any
    stages {
        stage('Fetching kubernetes config files') {
            steps {
                git 'git_url_of_k8s_configurations'
            }
        }
        stage('Deploy on kubernetes') {
            steps {
                kubernetesDeploy(
                    kubeconfigId: 'k8s-default-namespace-config-id',
                    configs: 'deployment.yml',
                    enableConfigSubstitution: true
                )
            }
        }
    }
}

The deployment.yml is:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ${JOB_NAME}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        build_number: ${BUILD_NUMBER}
        app: ${JOB_NAME}
        role: rolling-update
    spec:
      containers:
      - name: ${JOB_NAME}-container
        image: ${REGISTRY}/${JOB_NAME}:latest
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: postgres
      imagePullSecrets:
      - name: regcred
  strategy:
    type: RollingUpdate

To let Kubernetes detect that the Deployment has changed (and so upgrade it and its pods), I used the Jenkins build number as a label:

...
metadata:
  labels:
    build_number: ${BUILD_NUMBER}
...

The problem or my misunderstanding:

If the Deployment does not exist yet on Kubernetes, everything works fine: one Deployment and one ReplicaSet are created.

If the Deployment already exists and an upgrade is applied, Kubernetes creates a new ReplicaSet:

(Screenshots: before the first deploy, after the first deploy, after the second deploy, and after the third deploy.)

As you can see, each new Jenkins deploy correctly updates the Deployment but creates a new ReplicaSet without removing the old one.

What could be the issue?

Jayyrus

1 Answer


This is expected behavior. Every time you update a Deployment, a new ReplicaSet is created. The old ReplicaSets are kept so that you can roll back to a previous state in case of any problem with your updated Deployment.

Ref: Updating a Deployment
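
In practice, that rollback uses one of the retained ReplicaSets. For example, you can revert to the previous revision with (assuming your Deployment is named myapp):

kubectl rollout undo deployment/myapp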

However, you can limit how many old ReplicaSets are kept through the `spec.revisionHistoryLimit` field. The default value is 10. Ref: RevisionHistoryLimit
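
For example, to keep only the two most recent old ReplicaSets, a minimal sketch (the field sits directly under the Deployment's spec):

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ${JOB_NAME}
spec:
  revisionHistoryLimit: 2   # keep at most 2 old ReplicaSets for rollback
  replicas: 1
  ...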

Emruz Hossain
  • Ok, thank you. But I can see that the Deployment doesn't kill the older pods. Why? – Jayyrus Nov 23 '18 at 11:01
  • Well, that's unexpected. The Deployment should kill the old pod once the new pod is ready. There must be something going on. Try configuring the `maxUnavailable` and `maxSurge` fields for RollingUpdate (see the sketch after these comments). https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment – Emruz Hossain Nov 23 '18 at 11:38
  • Yes, the problem was with the build_number label. Now I've switched to a dynamic image tag and everything works well! Thank you very much. – Jayyrus Nov 23 '18 at 12:12
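
A minimal sketch of what the comments converge on, assuming the plugin substitutes ${BUILD_NUMBER} the same way it does the other variables: the mutable build_number label is dropped, the image tag varies per build instead, and the rolling-update fields are set explicitly:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ${JOB_NAME}
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow at most one extra pod during the rollout
      maxUnavailable: 0    # keep the old pod running until the new one is ready
  template:
    metadata:
      labels:
        app: ${JOB_NAME}
        role: rolling-update
    spec:
      containers:
      - name: ${JOB_NAME}-container
        image: ${REGISTRY}/${JOB_NAME}:${BUILD_NUMBER}   # dynamic tag triggers the rollout
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: postgres
      imagePullSecrets:
      - name: regcred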