
I'm using a dual-stack Kubernetes cluster (1.23.2 with Calico as the CNI). From one of my pods I want to open a connection to a device that is external to the Kubernetes installation. The connection passes through an (also external) load balancer / firewall.

When I try to create an IPv6 connection, I see that the internal pod IP address is used as the source address (which, of course, is not routable outside the Kubernetes cluster). To my understanding, the node IP should be used instead:

"For example, if a pod in an overlay network attempts to connect to an IP address outside of the cluster, then the node hosting the pod uses SNAT (Source Network Address Translation) to map the non-routable source IP address of the packet to the node's IP address before forwarding on the packet." (https://projectcalico.docs.tigera.io/about/about-kubernetes-egress)
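The SNAT described in the Calico documentation is implemented on the node via netfilter. As a rough illustration of the kind of rule involved (Calico actually programs its own chains, and the pod CIDR 2001:db8:42::/56 here is a placeholder, not your cluster's real pool):

```shell
# Illustrative sketch only: masquerade pod traffic that leaves the cluster,
# rewriting the pod source address to the node's IPv6 address.
# 2001:db8:42::/56 stands in for the cluster's IPv6 pod CIDR.
ip6tables -t nat -A POSTROUTING -s 2001:db8:42::/56 ! -d 2001:db8:42::/56 -j MASQUERADE
```

If no such rule exists for IPv6, egress packets keep the pod's own (non-routable) source address, which matches the behaviour described in the question.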

Is there a switch or configuration parameter to force the use of the node IP instead of the pod IP?

BTW: This only happens for IPv6; IPv4 communication uses the node IP as expected.

  • Which version of Kubernetes did you use and how did you set up the cluster (your config file)? Did you use bare metal installation or some cloud provider? It is important to reproduce your problem. – Mykola Mar 21 '22 at 12:19
  • The K8S version you find in the question. I'm using kubespray on baremetal. – Andreas Florath Mar 21 '22 at 13:32

1 Answer


It turned out that the root cause was that, when installing Calico via kubespray, NAT is enabled by default for IPv4 but not for IPv6. (It looks like the relevant config parameter, nat_outgoing_ipv6, is currently not even documented.)

To fix the problem, edit the IPv6 IPPool:

kubectl edit ippool default-pool-ipv6

and add

natOutgoing: true
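For reference, the resulting IPPool might look roughly like this (the pool name matches the command above; the CIDR and block size are placeholders, and kubespray's defaults may differ):

```yaml
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-pool-ipv6
spec:
  cidr: fd00:10:244::/64   # placeholder IPv6 pod CIDR; use your cluster's pool
  blockSize: 122
  natOutgoing: true        # enables SNAT for IPv6 egress traffic
```

Alternatively, assuming the pool is indeed named default-pool-ipv6, the same change can be applied non-interactively with `kubectl patch ippool default-pool-ipv6 --type merge -p '{"spec":{"natOutgoing":true}}'`.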