0

Why does my pod fail with the error "Back-off restarting failed container" when I have imagePullPolicy: "Always"? It worked before, but today I deployed it on another machine and it shows this error.

My YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: couchdb
  labels:
    app: couch
spec:
  replicas: 3
  serviceName: "couch-service"
  selector:
    matchLabels:
      app: couch
  template:
    metadata:
      labels:
        app: couch # pod label
    spec:
      containers:
      - name: couchdb
        image: couchdb:2.3.1
        imagePullPolicy: "Always"
        env:
        - name: NODE_NETBIOS_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NODENAME
          value: $(NODE_NETBIOS_NAME).couch-service # FQDN in vm.args
        - name: COUCHDB_USER
          value: admin
        - name: COUCHDB_PASSWORD
          value: admin
        - name: COUCHDB_SECRET
          value: b1709267
        - name: ERL_FLAGS
          value: "-name couchdb@$(NODENAME)"
        - name: ERL_FLAGS
          value: "-setcookie b1709267" #   the “password” used when nodes connect to each other.
        ports:
        - name: couchdb
          containerPort: 5984
        - name: epmd
          containerPort: 4369
        - containerPort: 9100
        volumeMounts:
          - name: couch-pvc
            mountPath: /opt/couchdb/data
  volumeClaimTemplates:
  - metadata:
      name: couch-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
      selector:
        matchLabels:
          volume: couch-volume      

When I describe the pod, it shows:

Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  23s                default-scheduler  Successfully assigned default/couchdb-0 to b1709267node1
  Normal   Pulled     17s                kubelet            Successfully pulled image "couchdb:2.3.1" in 4.368553213s
  Normal   Pulling    16s (x2 over 22s)  kubelet            Pulling image "couchdb:2.3.1"
  Normal   Created    10s (x2 over 17s)  kubelet            Created container couchdb
  Normal   Started    10s (x2 over 17s)  kubelet            Started container couchdb
  Normal   Pulled     10s                kubelet            Successfully pulled image "couchdb:2.3.1" in 6.131837401s
  Warning  BackOff    8s (x2 over 9s)    kubelet            Back-off restarting failed container

What should I do? Thanks.

cksawd
  • Hey! imagePullPolicy only configures how Docker images are fetched, not how your app reacts to failure. You may want to check the logs and the events (describe your pod) to see why it is restarting. Have a look at https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy as well. For debugging, setting it to "Never" can help – François Dec 21 '21 at 10:32
  • Previously it worked when I added `imagePullPolicy: "Always"`, but now it doesn't work – cksawd Dec 21 '21 at 10:52
  • Your container is crashing, and repeatedly pulling the same image isn't going to make a difference. – David Maze Dec 21 '21 at 11:04
  • Please read more about the imagePullPolicy and restartPolicy fields; with a better understanding you will be able to see where your failure is coming from. Have a look at some Google resources as well: https://komodor.com/learn/how-to-fix-crashloopbackoff-kubernetes-error/ – François Dec 21 '21 at 11:19

3 Answers

3

ImagePullPolicy doesn't really have much to do with container restarts. It only determines on what occasions the image should be pulled from the container registry; read more in the Kubernetes documentation on images.
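
For context, imagePullPolicy is a per-container field that only controls image fetching, while restarts are governed by the pod-level restartPolicy field. A minimal sketch of where each field lives (the values are illustrative, not a fix for your manifest):

spec:
  template:
    spec:
      restartPolicy: Always        # pod-level; the default, and the only value a StatefulSet template allows
      containers:
      - name: couchdb
        image: couchdb:2.3.1
        imagePullPolicy: Always    # container-level; only controls when the image is pulled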

If a container in a pod keeps restarting, it's usually because there is some error in the command that is the entrypoint of that container. There are two places where you should be able to find additional information that points you to the solution:

  • logs of the pod (check using kubectl logs _YOUR_POD_NAME_ command)
  • description of the pod (check using the kubectl describe pod _YOUR_POD_NAME_ command; see the example commands below)
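
For a crash-looping container, the logs of the previous (crashed) instance are usually the most useful, since the current one may exit before it writes anything. A sketch using the pod name from the describe output above; adjust to your pod:

kubectl logs couchdb-0 --previous      # logs of the last crashed container instance
kubectl describe pod couchdb-0         # events, last state and exit code
kubectl get events --field-selector involvedObject.name=couchdb-0
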
andrzejwp
    I described it and it shows an error `Back-off restarting failed container` – cksawd Dec 21 '21 at 10:53
  • Then please check the logs – andrzejwp Dec 21 '21 at 12:15
  • I checked the logs and they don't show anything – cksawd Dec 21 '21 at 12:26
  • It's highly unlikely that there are absolutely no logs, yet the container is producing an error. Maybe what you should be looking at is installing CouchDB using a Helm chart instead - https://artifacthub.io/packages/helm/couchdb/couchdb - this should at least give you a working example that you can use to figure out what went wrong in your case. – andrzejwp Dec 21 '21 at 13:25
  • I found my error in the logs. Thanks – Loich Apr 25 '23 at 20:48
0

The CouchDB k8s sample that you are using is outdated and contains a bug (e.g. ERL_FLAGS is defined twice). You should use the CouchDB Helm chart instead. A basic CouchDB can be installed with:

helm repo add couchdb https://apache.github.io/couchdb-helm

helm install couchdb couchdb/couchdb --set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)

kubectl get secret couchdb-couchdb -o go-template='{{ .data.adminPassword }}' | base64 -d
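
If you prefer to keep the manifest from the question instead of switching to the chart, a minimal fix for the duplicated variable is to merge both values into a single ERL_FLAGS entry (a sketch built from the values already in your YAML; with two entries of the same name, only one of them effectively applies):

        - name: ERL_FLAGS
          value: "-name couchdb@$(NODENAME) -setcookie b1709267"   # one entry instead of two ERL_FLAGS blocks
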
gohm'c
-1

Check if you have created the secret with the correct credentials and added the secret reference to your manifest.
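
Note that the manifest in the question sets COUCHDB_USER and COUCHDB_PASSWORD as plain env values rather than from a Secret. If you do move them into one, a minimal sketch (the Secret name couchdb-creds and its keys are made up for illustration):

kubectl create secret generic couchdb-creds --from-literal=adminUsername=admin --from-literal=adminPassword=admin

        - name: COUCHDB_USER
          valueFrom:
            secretKeyRef:
              name: couchdb-creds      # hypothetical Secret created above
              key: adminUsername
        - name: COUCHDB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: couchdb-creds
              key: adminPassword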