I have four servers on the same network:
- 10.0.0.10: Kubernetes master
- 10.0.0.11: Kubernetes node 1
- 10.0.0.12: Kubernetes node 2
- 10.0.0.20: Normal Ubuntu server (Kubernetes not installed)
I set up a Kubernetes cluster following the instructions at https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/, using Calico as the network provider.
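For reference, the setup was roughly along these lines (the exact Calico manifest and version I applied may have differed):
# on the master (10.0.0.10); Calico's default pod CIDR is 192.168.0.0/16,
# which matches the pod IPs I see below (192.168.65.x)
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
# install Calico as the pod network add-on
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
# then joined the two worker nodes with the 'kubeadm join ...' command printed by kubeadm init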
I could successfully run a pod with the following command (I'm using an Ubuntu Docker image with SSH access as an example):
kubectl run ubuntupod --image=rastasheep/ubuntu-sshd:16.04
and could see the IP address of this pod using kubectl get and kubectl describe. (In this case, the pod's IP was 192.168.65.74.)
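Roughly like this (I may have used slightly different flags):
kubectl get pod ubuntupod -o wide        # shows the pod IP and the node it runs on
kubectl describe pod ubuntupod | grep IP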
Then I confirmed that the following connections work (quick checks shown after the list):
- Kubernetes master/nodes (10.0.0.10 ~ 10.0.0.12) -> the pod (192.168.65.74)
- the pod (192.168.65.74) -> Kubernetes master/nodes (10.0.0.10 ~ 10.0.0.12)
- the pod (192.168.65.74) -> normal Ubuntu server (10.0.0.20)
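For example (assuming ping is available in the image; any reachability test works):
# from the master or a node -> the pod
ping -c 3 192.168.65.74
ssh root@192.168.65.74          # the rastasheep image runs sshd
# from inside the pod -> the normal Ubuntu server
ping -c 3 10.0.0.20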
However, I failed to make the following connection, and this is what I would like to ask about:
- normal Ubuntu server (10.0.0.20) -> the pod (192.168.65.74)
I tried adding a route on the normal Ubuntu server (10.0.0.20), in the hope that the Kubernetes master could serve as a router, but with no success:
sudo route add -net 192.168.0.0 netmask 255.255.0.0 gw 10.0.0.10
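(Equivalently, with iproute2:)
sudo ip route add 192.168.0.0/16 via 10.0.0.10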
I suspect this has something to do with iptables on the Kubernetes master, but I have no idea what to change.
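In case it helps, these are the only generic things I know to inspect on the master; I don't know what the rules should look like:
sysctl net.ipv4.ip_forward          # should be 1 for the master to forward packets
sudo iptables -L FORWARD -n -v      # FORWARD chain policy plus the Calico/Kubernetes rules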
Could someone please help me with this?
BTW, I understand that what I want to do might diverge from the basic principles of Kubernetes or Docker. Maybe I should use the Kubernetes Service mechanism (a rough sketch of what I mean is below), but I need this kind of transparent access between pods and actual servers.
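For comparison, this is the sort of Service-based exposure I mean (hypothetical; it publishes the pod on the nodes' real IPs rather than letting 10.0.0.20 reach the pod IP directly):
# Older 'kubectl run' creates a Deployment named ubuntupod; on newer versions it
# creates a bare Pod, in which case this would be 'kubectl expose pod ubuntupod ...'.
kubectl expose deployment ubuntupod --port=22 --type=NodePort
kubectl get svc ubuntupod            # note the assigned NodePort, e.g. 30022
# then from 10.0.0.20:  ssh -p <assigned NodePort> root@10.0.0.11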