I have two machines on my network that I want to reach from a pod.
The IPs are as follows:
10.0.1.23 - let's call it X
13.0.1.12 - let's call it Y
When I SSH into the master node or the agent node and ping X or Y, the ping succeeds, so both machines are reachable from the nodes.
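For reference, this is roughly what I ran from the nodes (azureuser and MASTER_IP are placeholders for my admin user and the node's address; the agent node gives the same result):

ssh azureuser@MASTER_IP
ping -c 3 10.0.1.23   # X - replies received
ping -c 3 13.0.1.12   # Y - replies received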
Now I create a deployment and open a shell in one of its pods using kubectl exec -it POD_NAME -- /bin/sh. From inside the pod, a ping to Y succeeds, but a ping to X fails.
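The reproduction from inside the pod looks roughly like this (POD_NAME is a placeholder for one of the deployment's pods):

kubectl exec -it POD_NAME -- /bin/sh
# inside the pod:
ping -c 3 13.0.1.12   # Y - replies received
ping -c 3 10.0.1.23   # X - no replies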
CIDR details:
Master node: 14.1.255.0/24
Agent node: 14.2.0.0/16
Pod CIDR:
Agent: 10.244.1.0/24
Master: 10.244.0.0/24
My understanding of what could be the issue:
acs-engine has kube-proxy set up the service network as 10.0.0.0/16, which overlaps with X's address (10.0.1.23). If this is the problem, how do I change the CIDR that kube-proxy uses?
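If changing the service CIDR is the right fix, is it just a matter of setting it in the acs-engine apimodel and redeploying? My guess, based on the kubernetesConfig fields in the acs-engine docs (the 10.16.0.0/16 range and matching dnsServiceIP below are example values I have not validated):

{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "serviceCidr": "10.16.0.0/16",
        "dnsServiceIP": "10.16.0.10"
      }
    }
  }
}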
Additional Info:
I am using acs-engine to deploy the cluster.
Output of ip route from inside the pod:
default via 10.244.1.1 dev eth0
10.244.1.0/24 dev eth0 src 10.244.1.13
Another suspect: when I run iptables-save on the node, I see
-A POSTROUTING ! -d 10.0.0.0/8 -m comment --comment "kubenet: SNAT for outbound traffic from cluster" -m addrtype ! --dst-type LOCAL -j MASQUERADE
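My reading of that rule: outbound pod traffic is only SNATed when the destination is outside 10.0.0.0/8. X (10.0.1.23) falls inside 10.0.0.0/8, so packets to it would leave with the pod IP (10.244.1.13) as the source, which X presumably cannot route back to, while packets to Y (13.0.1.12) get masqueraded to the node IP. Would temporarily inserting a narrow MASQUERADE rule on the agent node be a sensible way to confirm this? Something like the following (untested; 10.244.0.0/16 is meant to cover both pod CIDRs):

iptables -t nat -I POSTROUTING -s 10.244.0.0/16 -d 10.0.1.23/32 -m comment --comment "test: SNAT pod traffic to X" -j MASQUERADE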