
We have a cluster in GKE. It was running fine, but at 12:00 AM kube-dns restarted automatically, along with a few other pods. There were two kube-dns pods in the kube-system namespace, but only one of them restarted, with the errors shown in this screenshot: https://i.stack.imgur.com/S500G.png. The remaining pods were untouched.
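
For reference, here is a minimal sketch of how we inspected the restarted pod, assuming the standard k8s-app=kube-dns label on GKE's kube-dns pods (the pod and container names are placeholders):

# List the kube-dns pods with their restart counts:
kubectl -n kube-system get pods -l k8s-app=kube-dns

# Fetch the logs of the previous (crashed) container instance;
# "kubedns" is the usual container name in the kube-dns pod:
kubectl -n kube-system logs <kube-dns-pod-name> -c kubedns --previous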

kubectl version:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T16:23:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.5-gke.10", GitCommit:"f5949b3427099d4e410ef96d6e0fea3cd4794e10", GitTreeState:"clean", BuildDate:"2019-04-10T19:05:37Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}

logs snippet:

https://i.stack.imgur.com/S500G.png 

We expected the pod to stay stuck and require a manual restart, but it recovered automatically at 12:00 AM along with the other pods. How is this automatic recovery of kube-dns possible in GKE?
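
As a sanity check, this is how the pod's restart policy and the reason for the last container termination could be inspected (the pod name is a placeholder; kube-dns pods normally run with restartPolicy: Always, which lets the kubelet restart failed containers on its own):

# Show the pod's restart policy:
kubectl -n kube-system get pod <kube-dns-pod-name> -o jsonpath='{.spec.restartPolicy}'

# Show per-container state, including the last termination reason and exit code:
kubectl -n kube-system describe pod <kube-dns-pod-name>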

  • 1) Please copy the logs as text, not as screenshots; I have never seen a terminal emulator that couldn't copy text, so do it. 2) Include what Kubernetes thinks about the pods, i.e. output of `kubectl -owide get pods` and `kubectl describe pod ` for each pod (and then you might be asked for more, but that should be the start). – Jan Hudec Jun 03 '19 at 13:31
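
For completeness, a sketch of the diagnostics the comment asks for, scoped to the kube-system namespace where the pods live (the pod name is a placeholder):

kubectl -n kube-system get pods -o wide
kubectl -n kube-system describe pod <pod-name>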

1 Answer