I'm currently securing my EKS cluster with NetworkPolicies and Calico, so I'm running some tests to see how these work. For a PoC I deployed the following structure:
Internet -> NLB -> Nginx Pod
The Nginx pod was deployed via a Deployment with only 1 replica. I also deployed a Kubernetes NodePort Service, and the NLB points to that Service. I added a simple NetworkPolicy which blocks all traffic except what comes from my CIDR (over the Internet). This works as intended.
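For reference, the policy looks roughly like this; the pod label and the CIDR below are placeholders, not my actual values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-my-cidr-only
spec:
  podSelector:
    matchLabels:
      app: nginx                 # assumed label on the Nginx pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24 # placeholder for my public CIDR
```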
However, checking the Nginx logs, the IP shown there is not the "real" client IP, it is a node's IP (which is expected behaviour, since kube-proxy SNATs the traffic when redirecting it from the node that received it to the node running the pod), and checking the headers there is no sign of X-Forwarded-For or similar.
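For context, the Service is roughly the following (the names and port are placeholders). externalTrafficPolicy is left at its default of Cluster, which is what allows the cross-node redirect with SNAT:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  # externalTrafficPolicy defaults to Cluster: kube-proxy may forward
  # the connection to a pod on another node and SNAT it, so the pod
  # sees a node IP instead of the client IP
  selector:
    app: nginx                   # assumed label, matching the Deployment
  ports:
    - port: 80
      targetPort: 80
```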
I deployed an echo-server with the same structure in order to see the full request and headers. To my surprise, the NetworkPolicy no longer worked: I wasn't able to use my real IP in the NetworkPolicy, only internal IPs. I had to allow the nodes' CIDR for it to work.
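This is the kind of rule I ended up needing for the echo-server; again, the label and the CIDR are placeholders for my actual values:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nodes-cidr
spec:
  podSelector:
    matchLabels:
      app: echo-server           # assumed label on the echo-server pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/16    # placeholder for the nodes' CIDR
```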
From the official Kubernetes documentation (NetworkPolicies: Behavior of to and from selectors):

> Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether this happens before or after NetworkPolicy processing, and the behaviour may be different for different combinations of network plugin, cloud provider, Service implementation, etc.
However, I'm not sure this is the cause of this behaviour, and if so:

- What determines whether NetworkPolicies see the real source IP or not? Aren't they supposed to act before the traffic reaches the Service?
- How was I able to use the real source IP as a filter when the service behind the NLB was Nginx, but not with the other services? (I also tried adding NetworkPolicies directly to the Kibana pod and it doesn't work.)