It looks to me like you are more interested in treating the symptoms of your problem than the actual reasons behind them. You mention that this is for quick troubleshooting and that you do not want to stop the rest just to add a bypass for the status of this job. Still, I think the quicker way would be to make sure your other jobs are less dependent on this one, instead of trying to force Kubernetes to mark this Job/Pod as successful.
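I don't know how your downstream jobs gate on this one, but if they use something like kubectl wait for the Complete condition, one option is to let them accept a Failed condition as well. A rough sketch, with the job name job3 and the timeouts being placeholders:

# Proceed once the job has finished in any state: wait for Complete,
# and if that never shows up, accept an existing Failed condition instead.
kubectl wait --for=condition=complete job/job3 --timeout=120s \
  || kubectl wait --for=condition=failed job/job3 --timeout=10s

The caveat is that the first wait blocks for its whole timeout when the job fails, so size the timeout accordingly.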
The closest thing I could get to your goal was to curl the API server directly through kubectl proxy. But that solution only works if the job has already failed, and unfortunately it does not work with running pods.
For this example I used a Job named job3 that simply exits with status 1:
apiVersion: batch/v1
kind: Job
metadata:
  name: job3
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: job
        image: busybox
        args: ["/bin/sh", "-c", "date; echo sleeping....; sleep 5s; exit 1;"]
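Assuming the manifest above is saved as job3.yaml, it can be created and watched until it fails:

kubectl apply -f job3.yaml
# watch until the pod exits with status 1 and the job reports a failure
kubectl get job job3 -w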
Then run kubectl proxy:
➜ ~ kubectl proxy --port=8080 &
[1] 18372
➜ ~ Starting to serve on 127.0.0.1:8080
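As a sanity check, you can first GET the same status endpoint through the proxy to see the current status object before patching it (the path assumes the Job is named job3 and runs in the default namespace):

curl localhost:8080/apis/batch/v1/namespaces/default/jobs/job3/status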
And patch the status through the API server (only the tail of the response is shown below):
curl localhost:8080/apis/batch/v1/namespaces/default/jobs/job3/status -XPATCH -H "Accept: application/json" -H "Content-Type: application/strategic-merge-patch+json" -d '{"status": {"succeeded": 1}}'
    ],
    "startTime": "2021-01-28T14:02:31Z",
    "succeeded": 1,
    "failed": 1
  }
}
If I then check the job status, I can see that it was marked as completed.
➜ ~ k get jobs
NAME   COMPLETIONS   DURATION   AGE
job3   1/1           45s        45s
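As a side note, newer kubectl versions (v1.24 and later, as far as I know) can patch subresources directly, so the same change should be possible without running the proxy at all. A sketch I have not verified in this setup:

# patch the Job's status subresource directly via kubectl
kubectl patch job job3 --subresource=status --type=merge \
  -p '{"status": {"succeeded": 1}}'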
PS. I also tried to use this approach to set the status of a still-running Job or Pod to successful/completed, but that was not possible: the status changed for a moment and then the controller-manager reverted it back to running. Perhaps that small window with the changed status might be enough for your other jobs to move on. I'm merely assuming this since I don't know the details of your setup.
For more reading on how to access the API in this way, please have a look at the Kubernetes docs on using kubectl proxy to access the API.