
I have applied constraints in minikube. I have built a Go program as an image, which is run as a pod by applying a pod.yaml file. When I check the status of the pod with `kubectl get pods`, after a few seconds it shows `CrashLoopBackOff`, and then the warning "Back-off restarting failed container". Why does the pod not stay running successfully instead of showing the CrashLoopBackOff error and the restart warning?

pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: opa
  labels:
    name: opa
    namespace: test
    owner: name.agilebank.demo
spec:
  containers:
    - name: opa
      image: user-name/image-name
      resources:
        limits:
          memory: "1Gi"
          cpu: "200m"
      ports:
        - containerPort: 8000

     
`kubectl get pods`

NAME   READY   STATUS             RESTARTS   AGE
opa    0/1     CrashLoopBackOff   12         41m


`kubectl describe pod pod-name`

Name:         opa
Namespace:    default
Priority:     0
Node:         minikube/ip
Start Time:   Mon, 23 Aug 2021 19:31:52 +0530
Labels:       name=opa
              namespace=test
              owner=name.agilebank.demo
Annotations:  <none>
Status:       Running
IP:           ip-no
IPs:
  IP:  ip-no
Containers:
  opa:
    Container ID:   docker://no
    Image:          username/img-name
    Image ID:       docker-pullable://username/img-name
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 23 Aug 2021 20:13:02 +0530
      Finished:     Mon, 23 Aug 2021 20:13:05 +0530
    Ready:          False
    Restart Count:  12
    Limits:
      cpu:     200m
      memory:  1Gi
    Requests:
      cpu:        200m
      memory:     1Gi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5zjvn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-5zjvn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5zjvn
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  45m                  default-scheduler  Successfully assigned default/opa to minikube
  Normal   Pulling    45m                  kubelet            Pulling image "username/img-name"
  Normal   Pulled     41m                  kubelet            Successfully pulled image "username/img-name"
  Normal   Created    39m (x5 over 41m)    kubelet            Created container opa
  Normal   Started    39m (x5 over 41m)    kubelet            Started container opa
  Normal   Pulled     30m (x7 over 41m)    kubelet            Container image "username/img-name" already present on machine
  Warning  BackOff    19s (x185 over 41m)  kubelet            Back-off restarting failed container
thara
  • Well, with no further details we can't help you. I am going to ask you the same question: why is it crashing? Can you provide some logs, errors, or details? This error usually happens when the pod keeps failing to start, so Kubernetes keeps destroying and recreating it. – Anthony Raymond Aug 23 '21 at 14:24
  • There is a good chance that the application inside the pod is failing. Try to find what error you are getting. You can check the logs of the previous container instance using the command `kubectl logs POD --previous`. – Arutsudar Arut Aug 23 '21 at 14:30
  • kubectl get pods NAME READY STATUS RESTARTS AGE opa 0/1 CrashLoopBackOff 11 36m @AnthonyRaymond this is my pod status – thara Aug 23 '21 at 14:40
  • @ArutsudarArut yes, `kubectl logs pod` shows me the output I need, but after a few seconds when I check the pod status it shows: Warning BackOff 98s (x162 over 37m) kubelet Back-off restarting failed container – thara Aug 23 '21 at 14:44
  • @thara Have you set liveness/readiness probes? If yes, and if those values are too short for the app initialisation time, then Kubernetes may be killing the app too early. – Arutsudar Arut Aug 23 '21 at 14:50
  • @ArutsudarArut sorry, I don't have any idea about that. Can you help me with what you are saying? I have now edited my question with the pod file and describe details. Can you please check that? – thara Aug 23 '21 at 15:48
  • try `kubectl logs $podname` or `kubectl logs $podname --previous`. You can also check `kubectl get events` and look for something suspicious, or add the output of those commands to the question. – meaningqo Aug 23 '21 at 18:51

1 Answer


There is something wrong with your application. Your app exits with `Exit Code: 0`.

It is probably executing whatever you told it to execute and then finishing its work. If you want to keep your container alive, your application has to keep running inside that container.
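The container only stays up as long as its main process keeps running. A rough sketch of what that could look like in the pod spec, assuming your Go binary has some long-running (server) mode; the command and flags below are purely hypothetical placeholders:

  spec:
    containers:
      - name: opa
        image: user-name/image-name
        # Hypothetical invocation: a process that keeps running keeps the
        # container alive, so the kubelet has nothing to restart.
        command: ["/app/your-binary"]      # hypothetical binary path
        args: ["--serve", "--addr=:8000"]  # hypothetical server-mode flags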

This is not a probe error. With a probe error you would expect an event similar to this:

  Warning  Unhealthy  13s (x4 over 43s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
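
Such an event only appears if a probe is actually configured; your pod.yaml declares none, so probes cannot be the cause here. Just for reference, a liveness probe would sit under the container roughly like this (the /health path and timings are illustrative, not taken from your setup):

  livenessProbe:
    httpGet:
      path: /health          # hypothetical health endpoint
      port: 8000
    initialDelaySeconds: 5   # give the app time to start before probing
    periodSeconds: 10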

Is your application inside the container supposed to run all the time? If it is meant to run once and finish, you should not use a Pod; you should use a Job.
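
For a run-to-completion program, a Job could look roughly like this. This is a minimal sketch: the image and resource values are carried over from your pod.yaml, while the Job name and backoffLimit are made-up placeholders:

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: opa-job                # hypothetical name
  spec:
    backoffLimit: 3              # retry only if the run actually fails
    template:
      spec:
        restartPolicy: Never     # Job pods must use Never or OnFailure
        containers:
          - name: opa
            image: user-name/image-name
            resources:
              limits:
                memory: "1Gi"
                cpu: "200m"

With a Job, an exit code of 0 marks the run as Complete instead of triggering the restart and back-off loop you are seeing.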

Daniel Hornik