
Recently, all the CronJobs on my GKE cluster started showing some weird behaviour. With their configuration unchanged, the CronJobs are still triggered, executed and completed, but each run now leaves its Pod behind in a Not Ready (Completed) state. I have successfulJobsHistoryLimit and failedJobsHistoryLimit set to 0, and I also tried concurrencyPolicy: "Replace" and spec.ttlSecondsAfterFinished: 1, but nothing changed. I want to get rid of these leftover pods: some of the crons run every 5 minutes, so kubectl get pods ends up listing a huge number of Not Ready, Completed pods like this:

NAME                            READY    STATUS        RESTARTS      AGE
my-cron-27711665-gjdk4          0/1      Completed     0             46m
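
As a stop-gap, manually deleting the finished pods does clear them out, along these lines (just a sketch; it removes every Succeeded pod in the namespace, not only the cron ones), but they used to be cleaned up automatically:

kubectl delete pods --field-selector=status.phase=Succeeded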

Here's the cronjob.yaml file:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "0 */1 * * *"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
          - name: cronjob-secret
          containers:
          - name: my-cron
            image: kubernetes_cronjob_new:dev
            command: ["/bin/bash","-c"]
            args:
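              # find the first pod whose name contains "mybackend", run the Django
              # management command inside it, and exit 0 so the Job is always marked successful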
              - po1=$(kubectl get pods  | awk '/mybackend/ {printf($1);exit}');
                kubectl exec -i $po1 -- python manage.py run_cron my_cron False;
                exit 0;
          restartPolicy: OnFailure
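
For completeness, the concurrencyPolicy and ttlSecondsAfterFinished attempts mentioned above are no longer in the manifest; roughly, this is where they sat when I tried them (a trimmed sketch, not the full file):

spec:
  schedule: "0 */1 * * *"
  concurrencyPolicy: "Replace"
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 1
      template:
        ...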

The cluster is currently on Kubernetes version 1.22.

Can anyone help me here? How do I get rid of these leftover pods, and why did this start happening all of a sudden?
