
In my OVH Managed Kubernetes cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <node-ip>:<node-port>.

I followed this tutorial: Creating a service for an application running in two pods. I can successfully access the service on localhost:<target-port> via kubectl port-forward, but it doesn't work on <node-ip>:<node-port> (the request times out), even though it does work from inside the cluster.
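
For reference, the service looks roughly like this (the names, labels and ports below are placeholders, not my exact manifest):

```yaml
# Illustrative NodePort service; names, labels and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app            # matches the pods created by the deployment
  ports:
    - protocol: TCP
      port: 80             # service port inside the cluster
      targetPort: 8080     # container port
      nodePort: 30080      # port that should be exposed on every node
```

It is the nodePort value (30080 in this sketch) that I cannot reach on <node-ip>.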

The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.

The security group seems to allow any traffic:

[Screenshot: security group rules allowing all inbound and outbound traffic]

sdabet

3 Answers


Well, I can't help any further, I guess, but I would check the following:

  1. Are you using the public node IP address?
  2. Did you configure your service as LoadBalancer properly (see the sketch after this list)? https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
  3. Do you have a load balancer, and is it set up properly?
  4. Did you install any Ingress controller (ingress-nginx?)? You may need to deploy the ingress controller as a DaemonSet so that its pod runs on every node in your cluster.
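
For point 2, a minimal Service of type LoadBalancer would look roughly like this (the name, selector and ports are placeholders, not taken from your setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service          # placeholder name
spec:
  type: LoadBalancer           # the cloud provider provisions an external LB
  selector:
    app: my-app                # placeholder selector
  ports:
    - protocol: TCP
      port: 80                 # port exposed by the load balancer
      targetPort: 8080         # container port
```

Once the cloud controller has provisioned the load balancer, kubectl get svc should show an external IP for it.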

Otherwise, I would suggest an Ingress (if this works, you can rule out any firewall-related issues).

This page explains very well: What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?

  • My goal here is to use a NodePort service, not a LoadBalancer service. (the underlying reason is that I need to expose a UDP service and the LoadBalancer provided by my cloud provider doesn't support UDP). – sdabet Jul 08 '22 at 07:05

The solution is to NOT enable "Private network attached" ("réseau privé attaché") when you create the managed Kubernetes cluster.

If you have already paid for your nodes or configured DNS or anything else, you can select your current Kubernetes cluster, choose "Reset your cluster" ("réinitialiser votre cluster"), then "Keep and reinstall nodes" ("conserver et réinstaller les noeuds"), and at the "Private network attached" ("Réseau privé attaché") option choose "None (public IPs)" ("Aucun (IPs publiques)").

I faced the same use case and problem, and after some research and experimentation, got the hint from the small comment on this dialog box:

By default, your worker nodes have a public IPv4. If you choose a private network, the public IPs of these nodes will be used exclusively for administration/linking to the Kubernetes control plane, and your nodes will be assigned an IP on the vLAN of the private network you have chosen

Now I run my Traefik ingress as a DaemonSet using hostNetwork, and every node is reachable directly, even on low ports (as you saw yourself, the default security group is open).
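
For illustration, a trimmed-down sketch of that kind of DaemonSet (the names, image tag and arguments are placeholders, and the ServiceAccount/RBAC bits are omitted):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      hostNetwork: true                      # bind directly to the node's network
      containers:
        - name: traefik
          image: traefik:v2.9                # illustrative version
          args:
            - --entrypoints.web.address=:80
            - --providers.kubernetesingress=true
          ports:
            - containerPort: 80
              hostPort: 80                   # reachable on <node-ip>:80
```

With hostNetwork: true the Traefik pod binds directly on each node's public IP, so traffic on port 80 reaches it without going through a NodePort at all.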

Alex F

In AWS, you have things called security groups... you may have the same kind of thing with your k8s provider (or even your local machine). Please add those ports to the security groups or local firewalls. In AWS you may need to bind those security groups to your EC2 instance (ingress node) as well.
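
For example, if OVH's Public Cloud exposes the standard OpenStack tooling (their docs suggest it does), opening the default Kubernetes NodePort range from the CLI would look something like this; the group name "default" and the wide-open 0.0.0.0/0 source are assumptions you would adapt:

```bash
# Sketch only: assumes the OpenStack CLI is configured for the OVH project
# and that "default" is the security group attached to the worker nodes.
openstack security group rule create default \
  --ingress \
  --protocol tcp \
  --dst-port 30000:32767 \
  --remote-ip 0.0.0.0/0
```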

  • Thanks for the feedback. Unfortunately I can't figure out how to achieve this with [OVH Managed Kubernetes](https://docs.ovh.com/gb/en/kubernetes/). – sdabet Jun 23 '22 at 09:48
  • I do not know anything about OVH, but look: https://docs.ovh.com/gb/en/public-cloud/configure-security-group-horizon/#step-1-creating-a-security-group OVH seems to have Security groups as well. – Wesley van der Meer Jun 23 '22 at 10:44
  • Thanks for the link. It seems that the security group already allows any incoming/outgoing traffic (see my edit). – sdabet Jun 23 '22 at 12:48