
I use a custom Helm chart to deploy my project, which is hosted on GitLab, to a Google Kubernetes Engine cluster. It works smoothly, but I have problems in the following scenarios.

  1. The Helm chart doesn't upgrade the deployment on Kubernetes even though a new image has been built. My understanding is that it compares the SHA256 digest of the image deployed on Kubernetes with that of the new image produced in the build stage, and if they differ it starts a new pod with the new image and terminates the old one. But it doesn't do that. Initially I suspected a problem with the image pullPolicy, since it was set to IfNotPresent. I tried setting it to Always, but it still didn't work.
  2. When the image pull policy is set to Always and a pod restarts because of a failure or anything else, it gives an ImagePullBackOff error. I checked the secrets present in the namespace on Kubernetes; the dockerconfigjson secret is there, but I still get an authorization error. It starts working again once I deploy again through a new CI/CD pipeline.

Error logs:

Warning  Failed     19m (x4 over 20m)   kubelet Failed to pull image "gitlab.digital-worx.de:5050/asvin/asvin-frontend/master:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://gitlab.digital-worx.de:5050/v2/asvin/asvin-frontend/master/manifests/latest: unauthorized: HTTP Basic: Access denied
Warning  Failed     19m (x4 over 20m)   kubelet            Error: ErrImagePull
Warning  Failed     25s (x87 over 20m)  kubelet            Error: ImagePullBackOff
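
(For reference, this is roughly how I check which image the deployment is actually running and whether the registry secret is still valid; the namespace, deployment, and secret names below are placeholders for my setup.)

    # Which image and imagePullSecrets the deployment currently references
    kubectl -n <namespace> get deployment <deployment-name> \
      -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}{.spec.template.spec.imagePullSecrets}{"\n"}'

    # Decode the registry credential to see whether it is still valid
    kubectl -n <namespace> get secret <secret-name> \
      -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d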

deployment.yaml

{{- if not .Values.application.initializeCommand -}}
apiVersion: {{ default "extensions/v1beta1" .Values.deploymentApiVersion }}
kind: Deployment
metadata:
  name: {{ template "name" . }}
  annotations:
    {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
    {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
  labels:
    app: {{ template "name" . }}
    track: "{{ .Values.application.track }}"
    tier: "{{ .Values.application.tier }}"
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: {{ .Release.Name }}
    service: {{ .Values.ranking.service.name }}
spec:
{{- if or .Values.enableSelector (eq (default "extensions/v1beta1" .Values.deploymentApiVersion) "apps/v1") }}
  selector:
    matchLabels:
      app: {{ template "name" . }}
      track: "{{ .Values.application.track }}"
      tier: "{{ .Values.application.tier }}"
      release: {{ .Release.Name }}
      service: {{ .Values.ranking.service.name }}
{{- end }}
  replicas: {{ .Values.replicaCount }}
{{- if .Values.strategyType }}
  strategy:
    type: {{ .Values.strategyType | quote }}
{{- end }}
  template:
    metadata:
      annotations:
        checksum/application-secrets: "{{ .Values.application.secretChecksum }}"
        {{ if .Values.gitlab.app }}app.gitlab.com/app: {{ .Values.gitlab.app | quote }}{{ end }}
        {{ if .Values.gitlab.env }}app.gitlab.com/env: {{ .Values.gitlab.env | quote }}{{ end }}
{{- if .Values.podAnnotations }}
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
      labels:
        app: {{ template "name" . }}
        track: "{{ .Values.application.track }}"
        tier: "{{ .Values.application.tier }}"
        release: {{ .Release.Name }}
        service: {{ .Values.ranking.service.name }}
    
    spec:
      volumes:
    {{- if .Values.ranking.configmap }}
    {{end}}
      imagePullSecrets:
{{ toYaml .Values.ranking.image.secrets | indent 10 }}
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.ranking.image.repository }}:{{ .Values.ranking.image.tag }}"
        imagePullPolicy: {{ .Values.ranking.image.pullPolicy }}
    {{- if .Values.application.secretName }}
        envFrom:
        - secretRef:
            name: {{ .Values.application.secretName }}
        {{- end }}
        env:
        - name: INDEXER_URL
          valueFrom:
            secretKeyRef:
              name: {{.Release.Name}}-secret
              key: INDEXER_URL
        volumeMounts:
        ports:
        - name: "{{ .Values.ranking.service.name }}"
          containerPort: {{ .Values.ranking.service.internalPort }}
        livenessProbe:
{{- if eq .Values.livenessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.livenessProbe.path }}
            scheme: {{ .Values.livenessProbe.scheme }}
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.livenessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.livenessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }}
        readinessProbe:
{{- if eq .Values.readinessProbe.probeType "httpGet" }}
          httpGet:
            path: {{ .Values.readinessProbe.path }}
            scheme: {{ .Values.readinessProbe.scheme }}
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "tcpSocket" }}
          tcpSocket:
            port: {{ .Values.ranking.service.internalPort }}
{{- else if eq .Values.readinessProbe.probeType "exec" }}
          exec:
            command:
{{ toYaml .Values.readinessProbe.command | indent 14 }}
{{- end }}
          initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }}
          timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }}
        resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: Always
      enableServiceLinks: false
status: {}
{{- end -}}

values.yaml

# Default values for chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
strategyType:
enableSelector:
deploymentApiVersion: apps/v1
ranking:
  name: ranking
  image:
    repository: gitlab.iotcrawler.net:4567/ranking/ranking/master
    tag: latest
    pullPolicy: Always
    secrets:
    - name: gitlab-registry-demonstrator-murcia-parking-iotcrawler
  service:
    enabled: true
    annotations: {}
    name: ranking
    type: ClusterIP
    additionalHosts:
    commonName:
    externalPort: 3003
    internalPort: 3003
    production:
      url: parking.ranking.iotcrawler.eu
    staging:
      url: staging.parking.ranking.iotcrawler.eu
  configmap: true
podAnnotations: {}
application:
  track: latest
  tier: web
  migrateCommand:
  initializeCommand:
  secretName:
  secretChecksum:
hpa:
  enabled: false
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
gitlab:
  app:
  env:
  envName:
  envURL:
ingress:
  enabled: true
  url: 
  tls:
    enabled: true
    secretName: ""
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  modSecurity:
    enabled: false
    secRuleEngine: "DetectionOnly"
    # secRules:
    #   - variable: ""
    #     operator: ""
    #     action: ""
prometheus:
  metrics: false
livenessProbe:
  path: "/"
  initialDelaySeconds: 15
  timeoutSeconds: 15
  scheme: "HTTP"
  probeType: "httpGet"
readinessProbe:
  path: "/"
  initialDelaySeconds: 5
  timeoutSeconds: 3
  scheme: "HTTP"
  probeType: "httpGet"
postgresql:
  enabled: true
  managed: false
  managedClassSelector:
    #   matchLabels:
    #     stack: gitlab (This is an example. The labels should match the labels on the CloudSQLInstanceClass)

resources:
#  limits:
#    cpu: 100m
#    memory: 128Mi
  requests:
#    cpu: 100m
#    memory: 128Mi

## Configure PodDisruptionBudget
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
#
podDisruptionBudget:
  enabled: false
  # minAvailable: 1
  maxUnavailable: 1

## Configure NetworkPolicy
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
#
networkPolicy:
  enabled: false
  spec:
    podSelector:
      matchLabels: {}
    ingress:
    - from:
      - podSelector:
          matchLabels: {}
      - namespaceSelector:
          matchLabels:
            app.gitlab.com/managed_by: gitlab

workers: {}
  # worker:
  #   replicaCount: 1
  #   terminationGracePeriodSeconds: 60
  #   command:
  #   - /bin/herokuish
  #   - procfile
  #   - start
  #   - worker
  #   preStopCommand:
  #   - /bin/herokuish
  #   - procfile
  #   - start
  #   - stop_worker
Rohit Bohara
  • There's a lot of information missing to answer this issue. At the very least you should attach the Helm chart you're using for the deployment. – Yaron Idan Sep 27 '20 at 18:30
  • Have you managed to solve your issue with the answer provided by user Taybur Rahaman? – Dawid Kruk Sep 28 '20 at 16:11
  • As mentioned, you should supply at least the relevant parts of your Helm chart. In particular, are you changing the image tag? If you are using a static tag like `latest`, you may get unexpected results. You should use a specific, per-build tag (or the SHA256 digest) if you want to be sure which image is loading. If you do this, you can also leave the `imagePullPolicy` as `IfNotPresent`. For #2 you need to supply the exact `ImagePullBackOff` error to get help there. – ldg Sep 30 '20 at 17:22
  • Sorry for the delay. @YaronIdan I have added the deployment and values YAML files. – Rohit Bohara Oct 06 '20 at 18:28
  • @gelfan, yes, this is a very good idea. I have been trying to incorporate these changes. Do you know how I can get the SHA256 digest into the deployment file? I use the GitLab Auto DevOps pipeline. – Rohit Bohara Oct 06 '20 at 18:31
  • See this issue for an example of getting SHA256 as part of your helm template - https://github.com/helm/helm/issues/2639#issuecomment-445271056 – Yaron Idan Oct 06 '20 at 18:42
  • @YaronIdan the problem is getting the SHA256 of the image built in the GitLab build stage. The link you shared shows how to compute the SHA of a string or file. I have also added the ImagePullBackOff error logs. – Rohit Bohara Oct 20 '20 at 09:47
  • I solved the ImagePullBackOff problem. Instead of using the secret generated by the Auto DevOps pipeline, I generated the secret using a personal access token. – Rohit Bohara Oct 20 '20 at 13:02

2 Answers


helm upgrade does not recreate the pods unless you tell it to in the upgrade command.

In Helm 2 you can pass --force --recreate-pods to force the pods to be recreated.

You could try it like this in Helm 2:

helm upgrade release_name chartname --namespace namespace --install --force --recreate-pods

However, the downside is that you will face downtime. Please see this answer for details.
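
Note that in Helm 3 the --recreate-pods flag has been removed. If you only need the pods to pick up a newly pushed image (and your kubectl is 1.15 or newer), a rolling restart of the deployment is one alternative; the namespace and deployment name below are placeholders:

    kubectl -n <namespace> rollout restart deployment/<deployment-name>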

Taybur Rahman
  • This is inaccurate. OP mentions that the image is changed, and that should trigger a recreation of all pods in the deployment. – Yaron Idan Sep 28 '20 at 18:48
  • I am not using Helm directly on the Kubernetes cluster. I deploy with GitLab's Auto DevOps feature, which uses Helm in the background. – Rohit Bohara Oct 20 '20 at 09:49

I solved both issues. I had to dig into GitLab's Auto DevOps feature. It uses the auto-deploy image to create the dockerconfigjson secret and to install/upgrade the custom Helm chart on the Kubernetes cluster. It runs the helm upgrade command to install/upgrade the chart, and in that command it also sets the string value image.tag to ${CI_APPLICATION_TAG:-$CI_COMMIT_SHA$CI_COMMIT_TAG}.
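
Roughly, the auto-deploy job runs something like this (a simplified sketch; the release name and chart path are placeholders, the CI_* variables come from GitLab, and the exact flags can differ between auto-deploy-image versions):

    helm upgrade --install "$RELEASE_NAME" chart/ \
      --namespace "$KUBE_NAMESPACE" \
      --set-string image.repository="$CI_APPLICATION_REPOSITORY" \
      --set-string image.tag="${CI_APPLICATION_TAG:-$CI_COMMIT_SHA$CI_COMMIT_TAG}"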

  1. In the deployment file, I used the image.tag value as shown below.

    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

  2. I solved the second problem by manually creating a docker-registry secret with a personal access token, as sketched below.
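
A minimal sketch of creating such a secret with kubectl (the namespace, secret name, username, and token are placeholders; the registry host is the one from the error log above, and the secret must be referenced by the chart's image pull secrets):

    kubectl -n <namespace> create secret docker-registry gitlab-registry \
      --docker-server=gitlab.digital-worx.de:5050 \
      --docker-username=<gitlab-username> \
      --docker-password=<personal-access-token>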

Rohit Bohara