We currently have the following Kubernetes setup (v1.13.1, installed with kubeadm), with connectivity set up between all nodes:
- Master node (bare metal)
- 5 worker nodes (bare metal)
- 2 worker nodes (cloud)
- There is no proxy in front of the cluster; we currently access services directly via hostname:NodePort.
We are experiencing an issue with accessing services via NodePort on the 2 cloud worker nodes: the service is reachable via IPv6, but not via IPv4:
- IPv6:

      telnet localhost6 30005
      Trying ::1...
      Connected to localhost6.
      Escape character is '^]'.

- IPv4 (the connection attempt hangs here):

      telnet localhost4 30005
      Trying 127.0.0.1...
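To narrow down whether this is a socket-level or a netfilter-level problem, one quick check I can do is probe the port on loopback and on the node's real interface (a sketch; <node-ipv4> is a placeholder for the cloud node's primary IPv4 address):

      nc -vz 127.0.0.1 30005      # IPv4 loopback (currently failing)
      nc -vz <node-ipv4> 30005    # primary interface IPv4 (placeholder address)
      nc -vz ::1 30005            # IPv6 loopback (known to work)

If the real interface behaves differently from loopback, that would point at iptables/routing rather than the listening socket itself.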
The thing is that both work on the bare metal nodes. If I run netstat -napl | grep 30005, I can see that kube-proxy is listening on this port (tcp6). I presumed this means it does not listen on plain tcp (IPv4), but apparently that is not the case, since I see the same picture on the bare metal worker nodes:

      tcp6       7      0 :::30005                :::*                    LISTEN      24658/kube-proxy
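Worth noting: if kube-proxy is running in its default iptables mode, the LISTEN socket mostly just reserves the port, and the actual NodePort traffic is redirected by nat-table rules. So comparing those rules between a bare metal node and a cloud node might be telling (a sketch, assuming iptables mode):

      iptables-save -t nat | grep 30005
      # on a working node this should show KUBE-NODEPORTS entries roughly like:
      # -A KUBE-NODEPORTS -p tcp -m tcp --dport 30005 -j KUBE-SVC-...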
I have also read that services use IPv6 sockets, but based on the bare metal worker nodes it seems IPv4 should work there as well; as far as I understand, a tcp6 socket bound to ::: normally also accepts IPv4 connections via IPv4-mapped addresses, unless that behaviour is disabled on the host.
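That dual-stack behaviour is controlled by a kernel setting, so one thing to compare between the bare metal and cloud nodes (a sketch) is:

      sysctl net.ipv6.bindv6only
      # 0 (the default) = IPv6 sockets also accept IPv4-mapped connections
      # 1 = IPv6 sockets are v6-only, which would match the symptom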
Any idea what could be causing this issue and how to solve it?
Thank you and best regards, Bostjan