
I'm trying to get a Kubernetes cluster working with some nodes behind NAT that have no public IP address. (Why I need this is a different story.)

There are 3 nodes:

  1. Kubernetes cluster master (with public IP address)
  2. Node1 (with public IP address)
  3. Node2 (works behind NAT on my laptop as a VM, no public IP address)

All 3 nodes are running Ubuntu 18.04 with Kubernetes v1.10.2 (v1.10.3 on node1) and Docker 17.12.

Kubernetes cluster was created like this:

kubeadm init --pod-network-cidr=10.244.0.0/16

Flannel network is used:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Node1 and Node2 joined the cluster:

NAME          STATUS    ROLES     AGE       VERSION
master-node   Ready     master    3h        v1.10.2
node1         Ready     <none>    2h        v1.10.3
node2         Ready     <none>    2h        v1.10.2

An Nginx deployment + service (type=NodePort) was created and scheduled on Node1 (with a public IP):

https://pastebin.com/6CrugunB

kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        3h
my-nginx     NodePort    10.110.202.32   <none>        80:31742/TCP   16m

This deployment is accessible through http://MASTER_NODE_PUBLIC_IP:31742 and http://NODE1_PUBLIC_IP:31742 as expected.
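For reference, the accessibility checks were done roughly like this (the placeholders stand for the actual public IP addresses):

curl http://MASTER_NODE_PUBLIC_IP:31742   # returns the Nginx welcome page
curl http://NODE1_PUBLIC_IP:31742         # returns the Nginx welcome page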

Another Nginx deployment + service (type=NodePort) was created and scheduled on Node2 (without a public IP):

https://pastebin.com/AFK42UNW

kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP        3h
my-nginx           NodePort    10.110.202.32    <none>        80:31742/TCP   22m
nginx-behind-nat   NodePort    10.105.242.178   <none>        80:32350/TCP   22m

However, this service is accessible neither through http://MASTER_NODE_PUBLIC_IP:32350 nor through http://NODE1_PUBLIC_IP:32350.

It is only accessible through http://MY_VM_IP:32350 from my laptop.

Moreover: I cannot get inside the nginx-behind-nat pods via kubectl exec either.
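For reference, the failing exec attempt looks like this (the pod name below is illustrative; the real one comes from kubectl get pods):

kubectl exec -it nginx-behind-nat-xxxxxxxxxx-xxxxx -- /bin/bash
# this never gives a shell for pods on node2, while the same
# command works fine for pods scheduled on node1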

Is there any way to achieve this?

mennanov

1 Answer

As mentioned in the Kubernetes documentation:

Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):

  • all containers can communicate with all other containers without NAT
  • all nodes can communicate with all containers (and vice-versa) without NAT
  • the IP that a container sees itself as is the same IP that others see it as

What this means in practice is that you cannot just take two computers running Docker and expect Kubernetes to work: you must ensure that these fundamental requirements are met. A node behind NAT violates them, because other nodes (and the apiserver) cannot reach its pods at the IPs those pods see themselves as having.
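A quick way to see whether these requirements hold in your cluster is to test node-to-pod connectivity directly. This is only a sketch; the pod IP below is a placeholder from the flannel --pod-network-cidr range and must be taken from your own kubectl output:

# find the pod IP of a pod scheduled on the NAT'ed node
kubectl get pods -o wide
# then, from the master or node1, try to reach that pod IP directly
ping 10.244.2.5
curl http://10.244.2.5:80
# if these fail, the second requirement ("all nodes can communicate
# with all containers without NAT") is violated, which matches the
# symptoms described in the question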

By default, the connections from the apiserver to a node, pod, or service are plain HTTP without authentication or encryption.
They can work over HTTPS, but by default the apiserver will not validate the HTTPS endpoint's certificate, and therefore it provides no guarantees of integrity and could be subject to man-in-the-middle attacks.

For details about securing connections inside the cluster, please check this document

VAS
    The initial question was more about connectivity itself rather than security, but i think i got the point: the only way to connect nodes behind NAT is to use SSH tunneling, is that correct? If so, could you please update your answer with some elaboration on how to set up SSH tunnels for Kubernetes nodes. Thanks! – mennanov May 22 '18 at 18:16
  • SSH tunnels are deprecated as of v1.9. https://github.com/kubernetes/kubernetes/commit/639e0bfb7a8188d50df2b826e1fa80063fce2eb2#diff-168683a4511ef3635cd821d1cc034d77 https://github.com/kubernetes/kubernetes/issues/48419 – VAS May 23 '18 at 11:25