I am shutting down one of my k8s nodes manually to see if this affects the master.
After shutdown I check status of nodes:
kubectl get nodes
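For illustration, even several minutes after the shutdown the output still looks roughly like this (node names, ages and versions here are made up, the point is the STATUS column):

NAME       STATUS   ROLES    AGE   VERSION
master     Ready    master   20d   v1.15.0
worker-1   Ready    <none>   20d   v1.15.0
worker-2   Ready    <none>   20d   v1.15.0    <-- this node is actually powered off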
The node that went down is still shown as Ready in the Status column. As a consequence, k8s still tries to schedule pods on this node but actually cannot. Even worse, it doesn't reschedule those pods onto the other healthy nodes.
After a while (5-10 mins) k8s notices the node has gone.
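Until that happens, I can confirm the pods are still bound to the dead node (the node name below is a placeholder):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<dead-node-name>

The pods stay listed against that node instead of being rescheduled onto the healthy ones.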
Is that expected behavior? If not, how can I fix this?
I did some research to find out how K8s checks node health, but I couldn't find anything valuable.
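The closest thing I found is that the node object itself carries heartbeat info in its conditions, e.g. (node name is a placeholder):

kubectl describe node <node-name>
kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.lastHeartbeatTime}{"\n"}{end}'

I can see lastHeartbeatTime there, but I still don't understand which component decides when Ready flips to something else, or how to tune how long that takes.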