My Kubernetes 1.12.8 cluster (created via kops) had been running fine for 6+ months. Recently, something caused both `kube-scheduler` and `kube-controller-manager` on the master node to die and restart:
```
SyncLoop (PLEG): "kube-controller-manager-ip-x-x-x-x.z.compute.internal_kube-system(abc123)", event: &pleg.PodLifecycleEvent{ID:"abc123", Type:"ContainerDied", Data:"def456"}
hostname for pod:"kube-controller-manager-ip-x-x-x-x.z.compute.internal" was longer than 63. Truncated hostname to :"kube-controller-manager-ip-x-x-x-x.z.compute.inter"
SyncLoop (PLEG): "kube-scheduler-ip-x-x-x-x.z.compute.internal_kube-system(hij678)", event: &pleg.PodLifecycleEvent{ID:"hij678", Type:"ContainerDied", Data:"890klm"}
SyncLoop (PLEG): "kube-controller-manager-ip-x-x-x-x.eu-west-2.compute.internal_kube-system(abc123)", event: &pleg.PodLifecycleEvent{ID:"abc123", Type:"ContainerStarted", Data:"def345"}
SyncLoop (container unhealthy): "kube-scheduler-ip-x-x-x-x.z.compute.internal_kube-system(hjk678)"
SyncLoop (PLEG): "kube-scheduler-ip-x-x-x-x.z.compute.internal_kube-system(ghj567)", event: &pleg.PodLifecycleEvent{ID:"ghj567", Type:"ContainerStarted", Data:"hjk768"}
```
Ever since `kube-scheduler` and `kube-controller-manager` restarted, kubelet has been completely unable to get or update any node status:
```
Error updating node status, will retry: failed to patch status "{"status":{"$setElementOrder/conditions":[{"type":"NetworkUnavailable"},{"type":"OutOfDisk"},{"type":"MemoryPressure"},{"type":"DiskPressure"},{"type":"PIDPressure"},{"type":"Ready"}],"conditions":[{"lastHeartbeatTime":"2020-08-12T09:22:08Z","type":"OutOfDisk"},{"lastHeartbeatTime":"2020-08-12T09:22:08Z","type":"MemoryPressure"},{"lastHeartbeatTime":"2020-08-12T09:22:08Z","type":"DiskPressure"},{"lastHeartbeatTime":"2020-08-12T09:22:08Z","type":"PIDPressure"},{"lastHeartbeatTime":"2020-08-12T09:22:08Z","type":"Ready"}]}}" for node "ip-172-20-60-88.eu-west-2.compute.internal": Patch https://127.0.0.1/api/v1/nodes/ip-172-20-60-88.eu-west-2.compute.internal/status?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Error updating node status, will retry: error getting node "ip-x-x-x-x.z.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-x-x-x-x.z.compute.internal?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Error updating node status, will retry: error getting node "ip-x-x-x-x.z.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-x-x-x-x.z.compute.internal?timeout=10s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Error updating node status, will retry: error getting node "ip-x-x-x-x.z.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-x-x-x-x.z.compute.internal?timeout=10s: context deadline exceeded
Error updating node status, will retry: error getting node "ip-x-x-x-x.z.compute.internal": Get https://127.0.0.1/api/v1/nodes/ip-x-x-x-x.z.compute.internal?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Unable to update node status: update node status exceeds retry count
```
The cluster is completely unable to perform any updates in this state.
- What can cause the master node to lose connectivity to nodes like this?
- Is the second line in the first log output ("Truncated hostname...") a potential source of the issue?
- How can I further diagnose what is actually causing the get/update node actions to fail?
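Since the kubelet errors all point at `https://127.0.0.1/...` timing out, one thing I was planning to try is probing that endpoint directly from the master node, the same way kubelet does, to separate "apiserver is down" from "apiserver is up but slow". This is just a sketch: the `/healthz` path is the standard apiserver health endpoint, but the port (443 here, implied by the log URLs) may differ on your setup.

```shell
#!/bin/sh
# Probe an HTTPS endpoint with the same hard 10s timeout kubelet uses,
# and report whether it answered in time. -k skips cert verification,
# since we are only testing reachability, not identity.
check_endpoint() {
  url="$1"
  if curl -sk --max-time 10 -o /dev/null "$url"; then
    echo "OK   $url"
  else
    # $? here is curl's exit code: 7 = connection refused, 28 = timed out
    echo "FAIL $url (curl exit $?)"
  fi
}

# The endpoints from the kubelet errors (run this on the master node):
check_endpoint "https://127.0.0.1/healthz"
```

An `OK` would suggest the problem is between kubelet and apiserver (client certs, kubelet itself), while a `FAIL` with exit 28 would point at the apiserver or something in front of it (e.g. etcd slowness making every request hang).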