I'm using Kubernetes 1.22 on AWS (EKS) and I'm facing a problem whenever a node in the cluster is drained. When a node is marked as "SchedulingDisabled", some of the requests that "pod A" is receiving on another node stop arriving (or are dropped, it's hard to tell exactly).
Scale down occurring on "node A" and "node B":
And pods that were not running on "node A" or "node B" are affected.
To troubleshoot the problem I made some changes to the ingress. It was running as a Deployment pinned to nodes dedicated to it, so I changed the ingress controller to run as a DaemonSet, but the behavior described above did not change.
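To illustrate, this is roughly the shape of the change; a simplified sketch with placeholder namespace, labels, and image tag, not my exact manifest:

```yaml
# Previously the controller was a Deployment with a nodeSelector pinning it
# to dedicated ingress nodes; now it runs as a DaemonSet on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.1.1  # placeholder tag
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
```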
According to the ingress metrics, controller connections, upstream response time, and upstream service latency all stay the same while this behavior is happening.
I expected that when a node is drained, requests going to "application A", which has no relation to the drained node, would not be affected by the drain.
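In case it helps, this is roughly how traffic reaches "application A"; a simplified sketch with placeholder names, host, and ports, not my real manifests:

```yaml
# Traffic path: NGINX ingress controller -> Service -> pods of "application A".
apiVersion: v1
kind: Service
metadata:
  name: application-a
spec:
  selector:
    app: application-a
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-a
spec:
  ingressClassName: nginx
  rules:
    - host: application-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: application-a
                port:
                  number: 80
```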