
I'm trying to expose the Kubernetes Dashboard publicly via an Ingress on a single-master bare-metal cluster. The issue is that the LoadBalancer service (the nginx ingress controller) is not listening on ports 80/443, which I would expect it to open/use. Instead it takes random ports from the 30000-32767 range. I know I can set this range with --service-node-port-range, but I'm quite certain I didn't have to do this a year ago on another server. Am I missing something here?

Currently this is my stack/setup (clean install of Ubuntu 16.04):

  • Nginx Ingress Controller (installed via helm)
  • MetalLB
  • Kubernetes Dashboard
  • Kubernetes Dashboard Ingress to deploy it publicly on <domain>
  • Cert-Manager (installed via helm)

k8s-dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # add an annotation indicating the issuer to use.
    cert-manager.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: kubernetes-dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: <domain>
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
  tls:
  - hosts:
    - <domain>
    secretName: kubernetes-dashboard-staging-cert

This is what my kubectl get svc -A looks like:

NAMESPACE              NAME                            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
cert-manager           cert-manager                    ClusterIP      10.101.142.87    <none>          9402/TCP                     23h
cert-manager           cert-manager-webhook            ClusterIP      10.104.104.232   <none>          443/TCP                      23h
default                kubernetes                      ClusterIP      10.96.0.1        <none>          443/TCP                      6d6h
ingress-nginx          nginx-ingress-controller        LoadBalancer   10.100.64.210    10.65.106.240   80:31122/TCP,443:32697/TCP   16m
ingress-nginx          nginx-ingress-default-backend   ClusterIP      10.111.73.136    <none>          80/TCP                       16m
kube-system            kube-dns                        ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP       6d6h
kubernetes-dashboard   cm-acme-http-solver-kw8zn       NodePort       10.107.15.18     <none>          8089:30074/TCP               140m
kubernetes-dashboard   dashboard-metrics-scraper       ClusterIP      10.96.228.215    <none>          8000/TCP                     5d18h
kubernetes-dashboard   kubernetes-dashboard            ClusterIP      10.99.250.49     <none>          443/TCP                      4d6h

Here are some more examples of what's happening:

  1. curl -D- http://<public_ip>:31122 -H 'Host: <domain>'

    • returns 308, as the protocol is http not https. This is expected
  2. curl -D- http://<public_ip> -H 'Host: <domain>'

    • curl: (7) Failed to connect to <public_ip> port 80: Connection refused
    • port 80 is closed
  3. curl -D- --insecure https://10.65.106.240 -H "Host: <domain>"

    • reaching the dashboard through an internal IP obviously works and I get the correct k8s-dashboard html.
    • --insecure is needed because Let's Encrypt isn't working yet, as the ACME challenge on port 80 is unreachable.

So to recap, how do I get 2. working, i.e., reaching the service on ports 80/443?

EDIT: Nginx Ingress Controller .yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-12T20:20:45Z"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.30.1
    component: controller
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: ingress-nginx
  resourceVersion: "1785264"
  selfLink: /api/v1/namespaces/ingress-nginx/services/nginx-ingress-controller
  uid: b3ce0ff2-ad3e-46f7-bb02-4dc45c1e3a62
spec:
  clusterIP: 10.100.64.210
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31122
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 32697
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 10.65.106.240

EDIT 2: metallb configmap yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.65.106.240-10.65.106.250
  • Which chart did you use? The default value should be 80:80/TCP,443:443/TCP with the official helm chart. Can you add the yaml of the nginx-ingress-controller service? – Jean-Philippe Bond Feb 13 '20 at 00:57
  • @Jean-PhilippeBond `helm install nginx-ingress --namespace ingress-nginx stable/nginx-ingress ` this is the exact command I've used for the ingress controller. How do I check the current yaml? With `kubectl edit` ? – David Feb 13 '20 at 01:04
  • `kubectl get svc nginx-ingress-controller -n ingress-nginx -o yaml` – Jean-Philippe Bond Feb 13 '20 at 01:11
  • I've edited my question with the ingress controller yaml file. – David Feb 13 '20 at 01:22
  • The result is kind of weird, the result should be nodePort: 80 based on the chart default value. Try overwriting it to see what it do : `helm install nginx-ingress --namespace ingress-nginx stable/nginx-ingress --set controller.service.nodePorts.http=80, controller.service.nodePorts.https=443` – Jean-Philippe Bond Feb 13 '20 at 01:44
  • I've tried this a few hours ago but with `=http` and `=https`, this was the output. `Error: Service "nginx-ingress-controller" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767` That being said I don't think I should be editing the port range right? I'm sure I didn't have to do this the last time. – David Feb 13 '20 at 01:51
  • OK now I understand a little bit more (I never worked with MetalLB), it doesn't work like cloud LB. Have you read this? https://kubernetes.github.io/ingress-nginx/deploy/baremetal/. It is not recommended to change --service-node-port-range. – Jean-Philippe Bond Feb 13 '20 at 02:09
  • Yup I've read that. That's how my MetalLB is deployed / set up. – David Feb 13 '20 at 02:11
  • @Jean-PhilippeBond is [this](https://github.com/helm/charts/blob/c7fe9999d18b4bd774afa6d46b7336f9926005bf/stable/nginx-ingress/values.yaml#L280) the default value you're talking about? I don't know what the quotation marks represent to be honest. – David Feb 13 '20 at 02:18
  • It means that it is empty. I don't know exactly what you are trying to do but it seems that you'll need to do a little bit more work to be able to use a public IP if none of your nodes has a public IP address. Something like : https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#using-a-self-provisioned-edge. – Jean-Philippe Bond Feb 13 '20 at 02:30
  • I just want to access the LoadBalancer on port 80 and 443 via my public IP which I have. Instead of the two random ports. (I'm using a single node btw) – David Feb 13 '20 at 02:33
  • Yes, I understand that, but it seems that there are limitations with MetalLB if the IP of your node is not public. – Jean-Philippe Bond Feb 13 '20 at 02:47
  • I've had this working about a year ago but the server is now gone. Literally the same setup and same server hosting :/ – David Feb 13 '20 at 02:51
  • Answering your question "but I'm quite certain I didn't have to do this a year ago on another server", yes, you had to. Nothing has changed in this regard. – suren Feb 13 '20 at 07:42
  • Can we ask for the MetalLB config file? Additionally, you have created the service as LoadBalancer, but it uses spec.ports.nodePort under ports (instead of just port and targetPort as specified in the k8s documentation) – Nick Feb 13 '20 at 12:04
  • @Nick I've added the metalLB config. Your comment about the ports sounds like I'm doing that on purpose or setting them somewhere but that's not the case. The service is created from the official helm chart (specific command is in my first comment of this thread) – David Feb 13 '20 at 14:31
  • @suren so what are my options to make it work on http/https ports? The self-provisioned edge method has a significant drawback. – David Feb 13 '20 at 14:33
  • I'm going to reproduce that. That is why I've asked for a config. – Nick Feb 13 '20 at 14:33
  • @dvdblk you have 2 entrypoints to your `Ingress Controller`: (1) through `10.65.106.240` and (2) through `node_ip:31122|32697`. Everything else is going to fail. What is your `public_ip` in your question? – suren Feb 13 '20 at 15:34
  • @suren the `public_ip` is the IP of the scaleway server (i.e. I'm also using the `public_ip` to connect to the server through ssh) – David Feb 13 '20 at 15:37
  • @dvdblk no. that's not where you should curl. I mean, that's your host, so it is where you should curl if you do it on port 31122 or 32697. Try this command: `kubectl run -it --restart=Never --rm curler --image viejo/curl -- curl 10.65.106.240`. This should return ok because you are curling the endpoint from within the cluster. – suren Feb 13 '20 at 16:04
  • @suren yup, returns 308. Which is expected as the k8s dashboard should redirect to https as indicated by the k8s-dashboard.yaml. Basically your command is my example nr. `3.` if it didn't have `https` and `--insecure` option. :) – David Feb 13 '20 at 17:28
  • Oh and regarding the host situation. **I want to** curl the host through port 80/443. In the end I would like to access my k8s dashboard and other k8s services via a browser. That's the bigger picture. Also, the setup I had on the previous server was working smoothly. On the other hand, it might not have used ingress ctl as `type: LoadBalancer` considering that there was no change in how the LB works. But I know I had MetalLB running because I had a screenshot of the deployments before I shut the server down. /shrug – David Feb 13 '20 at 17:31
  • @dvdblk from the networking perspective it is a little different. The behavior of your cluster is just normal. You can use `hostNetwork: true` on ports 80 and 443 if you want to quickly solve your 2nd example. – suren Feb 13 '20 at 19:10
  • @suren this is exactly what I was hoping for! Do you mind explaining a bit of the details how this differs from `hostNetwork: false`? I can accept it as the answer after :) – David Feb 14 '20 at 03:59
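The `hostNetwork: true` idea from the last comments can also be applied to the ingress controller itself rather than to individual pods. The following override file is a sketch only: the key names assume the stable/nginx-ingress chart from the question exposes the usual `controller.hostNetwork`, `controller.dnsPolicy`, `controller.kind`, and `controller.service.type` values (verify against `helm show values stable/nginx-ingress` before using).

```yaml
# values-hostnetwork.yaml -- a sketch, not a verified config; key names
# assume the stable/nginx-ingress chart used in the question.
controller:
  hostNetwork: true                    # bind controller pods directly to the node's 80/443
  dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution working under hostNetwork
  kind: DaemonSet                      # one controller per node, so every node serves 80/443
  service:
    type: ClusterIP                    # no NodePort/LoadBalancer needed on the host network
```

Applied with something like `helm upgrade nginx-ingress stable/nginx-ingress --namespace ingress-nginx -f values-hostnetwork.yaml`, this trades the MetalLB/NodePort indirection for the security caveats discussed in the answers.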

2 Answers


So, to solve the 2nd question, as I suggested, you can use the hostNetwork: true parameter to map the container port to the host it is running on. Note that this is not a recommended practice, and you should avoid doing this unless you have a reason.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 80           # this parameter is optional, but recommended when using host network
      name: nginx

When I deploy this yaml, I can check where the pod is running and curl that host's port 80.

root@v1-16-master:~# kubectl get po -owide
NAME                     READY   STATUS    RESTARTS   AGE     IP                NODE             NOMINATED NODE   READINESS GATES
nginx                    1/1     Running   0          105s    10.132.0.50       v1-16-worker-2   <none>           <none>

Note: now I know the pod is running on worker node 2. I just need its IP address.

root@v1-16-master:~# kubectl get no -owide
NAME             STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
v1-16-master     Ready    master   52d   v1.16.4   10.132.0.48   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
v1-16-worker-1   Ready    <none>   52d   v1.16.4   10.132.0.49   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
v1-16-worker-2   Ready    <none>   52d   v1.16.4   10.132.0.50   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
v1-16-worker-3   Ready    <none>   20d   v1.16.4   10.132.0.51   xxxx        Ubuntu 16.04.6 LTS   4.15.0-1052-gcp   docker://19.3.5
root@v1-16-master:~# curl 10.132.0.50 2>/dev/null | grep title
<title>Welcome to nginx!</title>
root@v1-16-master:~# kubectl delete po nginx
pod "nginx" deleted
root@v1-16-master:~# curl 10.132.0.50
curl: (7) Failed to connect to 10.132.0.50 port 80: Connection refused

And of course it also works if I go to the public IP on my browser.

  • If this is not the recommended way then I'm wondering what actually is if someone is having a bare-metal setup. All the methods seem to have drawbacks and deploying an echo app on port 80 should be pretty common right? – David Feb 14 '20 at 12:44
  • Also it seems that with `hostNetwork: true` the MetalLB is not needed at all. – David Feb 14 '20 at 13:07
  • Do you think that adding an external loadbalancer via scaleway, removing metalLB and `hostNetwork: false` would make this work in the same manner without the drawbacks of hostNet? – David Feb 14 '20 at 13:31
  • Yes. If you set up a normal LB and point it to ports 31122 and 32697, it should work. That's actually how all LBs work on cloud providers. – suren Feb 14 '20 at 13:54
  • Just confirming that it works as intended with a normal LB. Thanks! – David Feb 14 '20 at 18:54
  • @suren Why is hostNetwork: true not the recommended approach? What kind of problems might we face? Whether we use 80/443 via hostNetwork or NodePorts, both get exposed from the bare-metal host. – ImranRazaKhan Mar 07 '20 at 21:23
  • Because then you are giving that pod the ability to access the host's network, which, from a security perspective, is not a good idea. – suren Mar 07 '20 at 21:47
  • @suren the MetalLB pod also has the ability to access the host network. – ImranRazaKhan Mar 08 '20 at 19:31
  • The functionality exists, therefore there are use cases for it. These are general recommendations. – suren Mar 09 '20 at 09:35
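The "normal LB in front of the NodePorts" approach confirmed in the comments above (the self-provisioned edge from the ingress-nginx bare-metal docs) can be sketched with a plain nginx TCP proxy on an edge host. The NodePorts below are the ones from the question, and `<node_ip>` is a placeholder in the question's own style; both are illustrative assumptions, not a drop-in config.

```nginx
# /etc/nginx/nginx.conf on the edge host -- a sketch; 31122/32697 are the
# controller's NodePorts from the question, <node_ip> is your cluster node.
events {}                                # required top-level block in nginx.conf

stream {
    server {
        listen 80;
        proxy_pass <node_ip>:31122;      # raw TCP to the controller's HTTP NodePort
    }
    server {
        listen 443;
        proxy_pass <node_ip>:32697;      # TLS passthrough; the ingress controller
                                         # still terminates TLS and sees SNI/Host
    }
}
```

Because this is plain TCP forwarding in the `stream` module, the ACME HTTP-01 challenge on port 80 also reaches the controller, which should let cert-manager complete issuance.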

update:

I didn't see the edited part of the question when I was writing this answer; it doesn't make sense given the additional info provided. Please disregard.

original:

Apparently the cluster you are using now has its ingress controller set up behind a NodePort-type service instead of a LoadBalancer. To get the desired behavior, you need to change the configuration of the ingress controller; refer to the nginx ingress controller documentation for MetalLB setups on how to do this.
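For context only (the update above already retracts this answer): if the controller really were exposed through a NodePort-type service, switching it with the stable/nginx-ingress chart would be a small values override. This is a sketch under the assumption that the chart's `controller.service.type` key behaves as in its documented defaults.

```yaml
# values override for stable/nginx-ingress -- a sketch; only meaningful if
# the controller service were NodePort-type, which the question's EDIT shows
# it is not (it is already type: LoadBalancer).
controller:
  service:
    type: LoadBalancer
```

Equivalently, `helm upgrade nginx-ingress stable/nginx-ingress --namespace ingress-nginx --set controller.service.type=LoadBalancer`.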
