I have a Kubernetes CronJob on GKE that runs Cucumber JVM tests. When a step fails (an assertion failure, an unavailable resource, etc.), Cucumber rightly throws an exception, JUnitCore exits with a non-zero code, and the pod's status changes to Error. Kubernetes then creates a new pod that runs the same Cucumber tests again, which fail again, and it retries again.
I don't want any of these retries to happen. If a job spawned by the CronJob fails, I want it to stay in the failed state and never be retried. Based on this, I have already tried setting backoffLimit: 0, in combination with restartPolicy: Never and concurrencyPolicy: Forbid, but it still retries by creating new pods and running the tests again.
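To be explicit about where each of these settings lives, here is the retry-related skeleton of my spec, with the three fields at their three different nesting levels (the full manifest follows below):

spec:                         # CronJob spec
  concurrencyPolicy: Forbid   # no overlapping runs
  jobTemplate:
    spec:                     # Job spec
      backoffLimit: 0         # a failed Job should not be retried
      template:
        spec:                 # Pod spec
          restartPolicy: Never  # containers are not restarted in place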
What am I missing? Here's my Kubernetes manifest for the CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: quality-apatha
  namespace: default
  labels:
    app: quality-apatha
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          containers:
            - name: quality-apatha
              image: FOO-IMAGE-PATH
              imagePullPolicy: "Always"
              resources:
                limits:
                  cpu: 500m
                  memory: 512Mi
              env:
                - name: FOO
                  value: BAR
              volumeMounts:
                - name: FOO
                  mountPath: BAR
              args:
                - java
                - -cp
                - qe_java.job.jar:qe_java-1.0-SNAPSHOT-tests.jar
                - org.junit.runner.JUnitCore
                - com.liveramp.qe_java.RunCucumberTest
          restartPolicy: Never
          volumes:
            - name: FOO
              secret:
                secretName: BAR
Is there any other Kubernetes Kind I can use to stop the retrying?
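For example, would running the same container as a one-off Job with the same settings behave any differently? A trimmed sketch of what I mean (the metadata name here is hypothetical; the retry-related fields match the CronJob above):

apiVersion: batch/v1
kind: Job
metadata:
  name: quality-apatha-once   # hypothetical name for a one-off run
spec:
  backoffLimit: 0             # same intent: one attempt, no retries
  template:
    spec:
      restartPolicy: Never    # same intent: no in-place restarts
      containers:
        - name: quality-apatha
          image: FOO-IMAGE-PATH
          args:
            - java
            - -cp
            - qe_java.job.jar:qe_java-1.0-SNAPSHOT-tests.jar
            - org.junit.runner.JUnitCore
            - com.liveramp.qe_java.RunCucumberTest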
Thank you!