
I have a Kubernetes cluster (node01-03). There is a NodePort service (port 31000) exposing a pod; the pod is running on node03. I can access the service via http://node03:31000 from any host, and on every node I can access it via http://[name_of_the_node]:31000. But I cannot access the service via http://node01:31000 from another host, even though kube-proxy is listening on port 31000 on node01. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
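For context, here is a minimal sketch of the kind of Service described above (the Service name, label selector, and container port below are placeholders, not my actual manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app             # assumed pod label
  ports:
  - port: 80                # service port inside the cluster
    targetPort: 8080        # assumed container port on the pod
    nodePort: 31000         # the node port from the question
EOF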

micsch87

2 Answers


If you are accessing pods from within the Kubernetes cluster, you don't need the NodePort at all. Use the Kubernetes Service's name and port instead. Say podA needs to access podB through a Service called serviceB. Assuming HTTP, all you need is http://serviceB:port/ (where port is the Service's port; kube-proxy forwards it to the pod's targetPort).
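For example, a minimal sketch assuming serviceB exposes port 80 and targets the pod's port 8080:

# From inside podA: address the Service by name and service port;
# kube-proxy forwards the request to the backend pod's targetPort.
curl http://serviceB:80/

# The fully qualified name also works across namespaces
# (default namespace assumed here):
curl http://serviceB.default.svc.cluster.local:80/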

Baltazar Chua
  • Good to know, I didn't know that. But I'd still like to know: do I always have to check which node a pod is running on in order to access a service from outside the cluster? – micsch87 Feb 09 '18 at 12:11
  • You do not have to. K8s automatically adds the pods' IP addresses to iptables, and K8s DNS resolves service names to the actual pod IPs. All you need to make sure is that kube-dns is running in the cluster. – Baltazar Chua Feb 10 '18 at 23:32
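To verify that cluster DNS actually resolves the Service name, one can run a throwaway pod (a sketch; assumes the busybox image is pullable and a Service named serviceB exists):

# Starts a temporary pod, resolves the name, then deletes the pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup serviceB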

A NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:

each Node will proxy that port (the same port number on every Node) into your Service

So, from both inside and outside the cluster, the service can be reached at NodeIP:NodePort on any node, and kube-proxy will route the traffic via iptables to a node that has the backend pod.
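Concretely, with the setup from the question, all of the following should succeed from outside the cluster (my-service is a hypothetical Service name):

# Look up the allocated node port:
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'

# Any node should answer, regardless of where the backend pod runs:
curl http://node01:31000/
curl http://node02:31000/
curl http://node03:31000/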

However, when accessing the service via NodeIP:NodePort from outside the cluster, first make sure that NodeIP itself is reachable from wherever you are hitting NodeIP:NodePort.
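A quick way to test raw TCP reachability of the port, independent of HTTP, is netcat (assuming nc is installed on the client):

# -z: just scan for a listener, -v: verbose output
nc -vz node01 31000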

If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, it may be caused by the default DROP policy on the FORWARD chain (introduced by Docker 1.13 for security reasons). Here is more info about it; also see step 8 here. A workaround is to add the following rule on the node:

iptables -A FORWARD -j ACCEPT
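Before applying that rule (it is a blanket ACCEPT, so a more targeted rule may be preferable in production), one can confirm that the DROP policy is actually in place:

# A first line of "-P FORWARD DROP" confirms the default DROP policy
iptables -S FORWARD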

The k8s issue for this is here and the fix is here (the fix should be there in k8s 1.9).

Three other options to enable external access to a service are listed below (a sketch of option 2 follows the list):

  1. ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
  2. LoadBalancer with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
  3. Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
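As an illustration of option 2, here is a sketch of a LoadBalancer Service (names and ports are assumptions; on a cloud provider, Kubernetes provisions the external load balancer and fills in its address):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-service          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app             # assumed pod label
  ports:
  - port: 80
    targetPort: 8080        # assumed container port
EOF

# The external address appears under EXTERNAL-IP once provisioned:
kubectl get svc my-service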
Vikram Hosakote
  • I can reach all nodes, but I cannot connect to [node_ip_where_the_pod_is_not_running]:[nodeport] – micsch87 Feb 13 '18 at 07:54
  • @m0087 is `kube-proxy` running? It should redirect the request to the right backend pod - https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables – Vikram Hosakote Feb 13 '18 at 07:58
  • Yes, and it's working if I am on a node and use that node's name. E.g. I'm on node01 (the pod runs on node03) and access http://node01:[nodeport]. – micsch87 Feb 13 '18 at 10:30
  • @m0087 I think you are seeing an `iptables` issue in Docker. Can you try `iptables -A FORWARD -j ACCEPT` on the node and check? I've edited my answer with the required info. Please see the section "If `NodeIP:NodePort` cannot be accessed on a node that is **not** running the backend pod" in my answer above. – Vikram Hosakote Feb 13 '18 at 16:04
  • @m0087 Glad to help. Upvote my answer if you think it is useful so that others know this is the right answer, thanks! – Vikram Hosakote Feb 14 '18 at 16:37
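As a general troubleshooting aid for the exchange above: to confirm kube-proxy is healthy on every node (a sketch; assumes kube-proxy runs as a DaemonSet in kube-system, as in kubeadm clusters):

# Expect one kube-proxy pod per node, all in Running state
kubectl get pods -n kube-system -o wide -l k8s-app=kube-proxy

# Inspect logs of a given kube-proxy pod (name taken from the output above)
kubectl logs -n kube-system <kube-proxy-pod-name>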