
I have a Kubernetes cluster with one master node and two worker nodes, using Calico for networking. All pods run fine initially, but after I apply iptables rules, restarting a pod fails with:

kuberuntime_sandbox.go:54] CreatePodSandbox for pod "" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "" network: context deadline exceeded

I opened the following ports on the master (to accept traffic from the worker nodes):

179,9099,5000,10248-10252,443,2379-2380

And on the workers (to accept traffic from the master node):

179,9099,10248-10252,10250,2379-2380

With the rules above I see the failure; when I open all ports between the nodes, the failed pod starts running. Do we really need all ports open between the nodes, or am I missing a port?
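For reference, here is a minimal sketch of the openings listed in the kubeadm "required ports" documentation, written as iptables rules. The private subnet 10.0.0.0/24 is only an assumption for illustration, and the rules assume Calico with its default IP-in-IP encapsulation (IP protocol 4), which a TCP/UDP port allowlist does not cover; adjust addresses and protocols to your environment.

    # Master: allow traffic from the worker nodes (assumed private subnet 10.0.0.0/24)
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 6443        -j ACCEPT   # kube-apiserver
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 2379:2380   -j ACCEPT   # etcd
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 10250       -j ACCEPT   # kubelet API
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 179         -j ACCEPT   # Calico BGP
    iptables -A INPUT -p 4   -s 10.0.0.0/24                     -j ACCEPT   # Calico IP-in-IP

    # Workers: allow traffic from the master (and from each other, for pod networking)
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 10250       -j ACCEPT   # kubelet API
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 30000:32767 -j ACCEPT   # NodePort services
    iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 179         -j ACCEPT   # Calico BGP
    iptables -A INPUT -p 4   -s 10.0.0.0/24                     -j ACCEPT   # Calico IP-in-IP

Note that the API server port 6443, which the worker nodes and the calico-node pods must reach, does not appear in the lists above, while 10251/10252 (kube-scheduler and kube-controller-manager) normally bind to localhost and usually do not need to be reachable from other nodes.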

  • Does it help https://kubernetes.io/docs/setup/independent/#check-required-ports? – Steephen Jan 07 '19 at 17:15
  • One should never have to open all ports between Kubernetes nodes to have a properly functioning cluster. However, the set of open ports you listed does not seem to match the ports that need to be opened, as @Steephen commented. – brianSan Jan 07 '19 at 19:07
  • The ports mentioned by Steephen are open in my list except 6443, as I did not see that port in the netstat output. With the ports above, the pods are still not coming up. – Deepa Yr Jan 08 '19 at 16:27
  • The question is unclear. Why do you want to open those ports? Can you provide the output of `kubectl describe pod <pod_name>` for the failed pods? – aurelius Jan 08 '19 at 17:13
  • I would like to rephrase my question. Initially I installed Kubernetes and my application, and all pods were running fine. The nodes have both public and private IPs. I added iptables rules on the master node to accept traffic from the worker nodes on the private network. I then deleted one of the application pods; a new instance was created but got stuck in the ContainerCreating state. When I checked, the traffic from the worker node was arriving on the public IP and the packets were being dropped. Can someone please suggest how I can make the requests arrive on the private IP? (A possible approach is sketched after these comments.) – Deepa Yr Jan 09 '19 at 14:30
  • Run `kubectl describe pod` so we can see why it is in the ContainerCreating state. Also try reverting the iptables changes. – aurelius Jan 16 '19 at 14:10
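Regarding the public-vs-private IP problem described in the comment above, one common approach (a sketch only; the interface name eth1, the sample address 10.0.0.11, and the Debian-style kubelet drop-in path are assumptions, not facts about this cluster) is to pin both the kubelet and Calico to the private network:

    # On each node, tell the kubelet to advertise its private address
    # (the file is /etc/sysconfig/kubelet on RHEL-family distributions):
    echo 'KUBELET_EXTRA_ARGS=--node-ip=10.0.0.11' | sudo tee /etc/default/kubelet
    sudo systemctl restart kubelet

    # Tell calico-node to pick its address from the private interface
    # instead of the first (public) one it finds:
    kubectl -n kube-system set env daemonset/calico-node \
        IP_AUTODETECTION_METHOD=interface=eth1

With settings like these, node-to-node and pod-to-pod traffic should originate from the private addresses, so iptables rules that only accept the private subnet should no longer drop it.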

0 Answers