
I am trying to check the status of a pod using the `kubectl wait` command, following this documentation. This is the command I am running:

kubectl wait --for=condition=complete --timeout=30s -n d1 job/test-job1-oo-9j9kj

This is the error I am getting:

Kubectl error: status.conditions accessor error: Failure is of the type string, expected map[string]interface{}

My `kubectl -o json` output can be accessed via this GitHub link.

Can someone help me fix this issue?

starwarswii
Auto-learner
  • https://github.com/kubernetes/kubernetes/issues/66439 – Ijaz Ahmad Nov 29 '18 at 10:48
  • @Ijaz can you please suggest any other approach to achieve this? I have a list of pods which will take 1 hr or less to complete. I want to know whether the pods are completed or not by that time – Auto-learner Nov 29 '18 at 10:49
  • are you trying to check if a pod is completed or not? or trying to check if a job is completed? In case of job, that command works to me. – Emruz Hossain Nov 29 '18 at 11:16
Yes, I am trying to wait for the job to complete, and the job may take 1 hr or less. I found the wait command useful, but it is not working – Auto-learner Nov 29 '18 at 11:25

3 Answers


To wait until your pod is running, check for `condition=ready`. In addition, prefer to filter by label rather than specifying the pod name. For example:

$ kubectl wait --for=condition=ready pod -l app=netshoot 
pod/netshoot-58785d5fc7-xt6fg condition met

Another option is `rollout status`, to wait until the deployment is done:

$ kubectl rollout status deployment netshoot
deployment "netshoot" successfully rolled out

Both options work well in automation scripts when you need to wait for an app to be installed. However, as @CallMeLaNN noted for the second option, a deployment being "rolled out" is not necessarily error-free.
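Since a completed rollout alone does not guarantee ready pods, the two commands can be combined in a script. A minimal sketch, assuming the deployment name and label from this answer (adjust both, and the timeouts, for your app):

```shell
#!/bin/sh
# Fail the script as soon as either step times out.
set -e

# Wait for the rollout to finish (deployment name is an example).
kubectl rollout status deployment/netshoot --timeout=120s

# Then confirm the pods themselves report Ready, since a completed
# rollout does not by itself guarantee error-free pods.
kubectl wait --for=condition=ready pod -l app=netshoot --timeout=120s
```

Both commands exit non-zero on timeout, so `set -e` aborts the pipeline on failure.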

Noam Manos
Any problem with `kubectl wait --for=condition=ready` above from your experience? I think it isn't quite reliable. Sometimes it times out after apply/rollout restart even though `kubectl get pods` returns ready. – CallMeLaNN Jul 04 '20 at 06:17
  • I have a followup question here https://stackoverflow.com/q/62726150/186334 – CallMeLaNN Jul 04 '20 at 06:44
  • @CallMeLaNN that's why I try to use [rollout](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#rollout) when possible (only for deployments, daemonsets, and statefulsets). – Noam Manos Jul 06 '20 at 10:04
  • Currently I'm using `rollout restart` for development after replacing the `latest` tag, and `set image` for production. That will recreate new pods. Then I want to wait and fail the pipeline if not ready. IMO `kubectl rollout status` only waits until the rollout is completed, regardless of whether the pods are ready or erroring. – CallMeLaNN Jul 06 '20 at 13:44
  • Thanks @CallMeLaNN. I've updated answer regarding rollout status. – Noam Manos Jul 06 '20 at 15:02
  • @CallMeLaNN I have the same problem. Some time it wait until timeout, some time it works properly. Any alternative ways? – Tkaewkunha Sep 30 '20 at 15:16
  • can you suggest syntax of *wait deploy/name condition=replicas=0(all pods terminated)*? – Lei Yang Mar 22 '23 at 01:42

This totally looks like you are running `kubectl wait --for=condition=complete` on a Pod, as shown in your output, rather than on a Job.

A pod doesn't have a `complete` condition. This is exactly what I get when I run it on a pod:

$ kubectl wait --for=condition=complete pod/mypod-xxxxxxxxxx-xxxxx
error: .status.conditions accessor error: Failure is of the type string, expected map[string]interface{}
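For reference, you can list the conditions a pod actually has (a hedged sketch; the pod name is a placeholder):

```shell
# Print each condition type and status for the pod. Typical pod condition
# types are PodScheduled, Initialized, ContainersReady, and Ready --
# there is no Complete condition on pods; that belongs to jobs.
kubectl get pod mypod \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```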
Rico

As outlined by Rico, you can't wait for the `complete` condition on a pod. Assuming you want to wait for the job to complete, target the job name (not the generated pod name):

kubectl wait --for=condition=complete --timeout=30s -n d1 job/test-job1
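Since the job in the question may run for up to an hour, the timeout can be raised accordingly; a polling loop on the job's `.status.succeeded` field is an alternative. A sketch, with the namespace and job name taken from the question and the poll interval purely illustrative:

```shell
# Wait up to one hour for the job to complete.
kubectl wait --for=condition=complete --timeout=3600s -n d1 job/test-job1

# Alternative: poll the job status yourself until one pod has succeeded.
until [ "$(kubectl get job test-job1 -n d1 \
    -o jsonpath='{.status.succeeded}')" = "1" ]; do
  sleep 10
done
```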
Daniel Robinson