
I have a multi-node cluster setup, and there are Kubernetes network policies defined for the pods in the cluster. I can access the services or pods using their ClusterIP/pod IP only from the node where the pod resides. For services with multiple pods, I cannot access the service from the node at all (I suspect the service only works when it happens to direct the traffic to a pod whose resident node is the same node I am calling from).

Is this the expected behavior? Is it a Kubernetes limitation or a security feature? For debugging and similar tasks we might need to access the services from a node. How can I achieve that?

1 Answer


No, this is not the expected behavior for Kubernetes. Pods should be reachable from all nodes inside the same cluster through their internal IPs. A ClusterIP service exposes the service on a cluster-internal IP and makes it reachable from within the cluster - such an IP is allocated by default for all the service types, as stated in the Kubernetes documentation.

Services are not node-specific, and a service can point to a pod regardless of where it runs in the cluster at any given moment. Also make sure that you are using the cluster-internal port when trying to reach the services. If you can still connect to a pod only from the node where it is running, something is likely wrong with your networking - e.g., check whether UDP ports are blocked.
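
For reference, a minimal ClusterIP Service manifest might look like the sketch below; the name, labels, and port numbers are made up, and the point is only that `port` (the cluster-internal port used together with the ClusterIP) can differ from the pod's `targetPort`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                # hypothetical name
spec:
  type: ClusterIP             # the default type; omitting "type" gives the same result
  selector:
    app: my-app               # must match the labels of the backing pods
  ports:
    - protocol: TCP
      port: 80                # cluster-internal port - use <ClusterIP>:80 from inside the cluster
      targetPort: 8080        # port the container actually listens on
```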

EDIT: Concerning network policies - by default, a pod is non-isolated for both egress and ingress, i.e. if no NetworkPolicy resource selects the pod, all traffic to and from that pod is allowed - the so-called default-allow behavior. Basically, without network policies all pods are allowed to communicate with all other pods/services in the same cluster, as described above. If one or more NetworkPolicy resources apply to a particular pod, it will reject all traffic that is not explicitly allowed by those policies (meaning a NetworkPolicy that both selects the pod and has "Ingress"/"Egress" in its policyTypes) - the default-deny behavior.
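
As an illustration of that default-deny behavior, a policy along the lines of the sketch below (names and labels are assumptions, not taken from the question) selects pods labelled `app: my-app` and lists `Ingress` in `policyTypes`; from the moment it is applied, those pods reject any ingress traffic that is not matched by an explicit rule:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend    # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app              # pods selected here become "isolated" for ingress
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only pods with this label may connect; everything else is dropped
```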

What is more:

> Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow.

So yes, it is expected behavior for Kubernetes NetworkPolicy - when a pod is isolated for ingress/egress, the only allowed connections into/from the pod are those from the pod's own node and those explicitly allowed by the rule list of the NetworkPolicy defined. To be compatible with this, Calico network policy follows the same behavior for Kubernetes pods. A NetworkPolicy is applied to pods within a particular namespace, and its rules can select peers in the same or a different namespace with the help of selectors (see the sketch below).
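
A sketch of the cross-namespace case, with hypothetical labels: combining a `namespaceSelector` and a `podSelector` in one `from` entry admits only pods labelled `role: monitoring` that run in namespaces labelled `team: ops`.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ops-monitoring   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        # One "from" entry with both selectors: the namespaceSelector and the
        # podSelector are ANDed, so only role=monitoring pods in team=ops
        # namespaces are allowed in.
        - namespaceSelector:
            matchLabels:
              team: ops
          podSelector:
            matchLabels:
              role: monitoring
```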

As for node-specific policies - nodes cannot be targeted by their Kubernetes identities; instead, CIDR notation has to be used in the form of an ipBlock in the pod's NetworkPolicy, so that particular IP ranges are allowed as ingress sources or egress destinations for the pod.

Whitelisting the Calico IP addresses of each node might be a valid option in this case - please have a look at the similar issue described here.
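
A hedged sketch of that whitelisting approach: an extra ingress rule using `ipBlock` to admit traffic arriving from the nodes' addresses. The CIDR and labels below are placeholders and would have to be replaced with the actual tunl0 / node IP range and pod labels of your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-tunnel-ips   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app                    # assumed pod labels
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.0.0/24     # placeholder - replace with the nodes' tunl0 / host CIDR
```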

anarxz
  • Thank you for your response. Our setup has Calico ip-in-ip enabled + natOutgoing: true. It seems that when I whitelist the Calico tunl0 IP addresses of all nodes, the service can be called from all nodes. tcpdump shows that the source IP when calling a service from one node to a pod on another node is the tunl0 IP of the calling node. – Parvathy Mohan Feb 24 '22 at 16:16
  • Please note this behavior appears only after the Kubernetes network policy for the pods is applied. – Parvathy Mohan Feb 24 '22 at 16:19
  • @ParvathyMohan How is your `IPPool` resource configured, especially `ipipMode`? Could you please also check the startup logs of Calico - as per the [troubleshooting guide](https://projectcalico.docs.tigera.io/maintenance/troubleshoot/troubleshooting)? – anarxz Feb 26 '22 at 01:33
  • ipipMode: Always, natOutgoing: true. My setup doesn't have any problem other than this: if I want to access a service from a node of the cluster for testing/debugging, the network policy prevents it. If I write the network policy to allow traffic from the ipBlock CIDR of the node's ipv4ipiptunneladdr, then I am able to access the pods. The ipv4ipiptunneladdr is the node IP (tunl0 interface IP) assigned by Calico. – Parvathy Mohan Feb 26 '22 at 19:13
  • It seems that this behavior of not being able to access the pod IP from a node other than the owner node is the expected behavior of Kubernetes network policy and has nothing to do with Calico. As per https://kubernetes.io/docs/concepts/services-networking/network-policies/ -> "What you can't do with network policies": "The ability to prevent loopback or incoming host traffic (Pods cannot currently block localhost access, nor do they have the ability to block access from their resident node)." This sounds like a pod can be accessed only from the resident node and not from other nodes. – Parvathy Mohan Feb 26 '22 at 19:18
  • @ParvathyMohan You are right concerning the Kubernetes network policy; I've updated the answer - please check. – anarxz Feb 28 '22 at 23:22
  • @anarxz Thank you for taking the time to answer my question. Your explanation validates my understanding. – Parvathy Mohan Feb 28 '22 at 23:51