I set up a k8s cluster with one master node and one worker node. The coredns pod is scheduled to the worker node and works fine. When I delete the worker node, the coredns pod is rescheduled to the master node, but it enters the CrashLoopBackOff state. The log of the coredns pod is as follows:
```
E0118 10:06:02.408608 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0118 10:06:02.408752 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
```
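For context, in a typical kubeadm cluster 10.96.0.1 is the ClusterIP of the default kubernetes Service, which is the in-cluster endpoint for the API server, so these errors mean the pod cannot reach the API server over the service network. A quick way to confirm the address (output shown is illustrative):

```
# The "kubernetes" Service in the default namespace fronts the API
# server; its ClusterIP should match the address in the errors above.
kubectl get svc kubernetes -n default
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d
```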
This article says DNS components must run on a regular cluster node (rather than the Kubernetes master):
In addition to Kubernetes core components like api-server, scheduler, controller-manager running on a master machine there are a number of add-ons which, for various reasons, must run on a regular cluster node (rather than the Kubernetes master). Some of these add-ons are critical to a fully functional cluster, such as Heapster, DNS, and UI.
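For anyone diagnosing this, the scheduling side can be inspected like so (a sketch assuming a kubeadm-style setup; `<master-node-name>` is a placeholder). Note that the pod did get scheduled onto the master, so the tolerations are evidently in place and the crash happens at runtime, not at scheduling time:

```
# Masters created by kubeadm usually carry a NoSchedule taint:
kubectl describe node <master-node-name> | grep -i taints
# Taints: node-role.kubernetes.io/master:NoSchedule

# The coredns Deployment tolerates that taint, which is why the pod
# could be placed on the master at all:
kubectl get deployment coredns -n kube-system \
  -o jsonpath='{.spec.template.spec.tolerations}'
```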
Can anyone explain why the coredns pod can't run on the master node?