
I can't apply an ingress configuration.

I need to access a jupyter-lab service by its DNS name.

It's deployed to a 3-node bare-metal k8s cluster:

  • node1.local (master)
  • node2.local (worker)
  • node3.local (worker)

Flannel is installed as the network plugin (CNI).

I've installed ingress-nginx for bare metal like this:

  • kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml

When deployed, the jupyter-lab pod is on node2 and the NodePort service responds correctly at http://node2.local:30004 (see below).

I'm expecting the ingress-nginx controller to expose the ClusterIP service by its DNS name. That's what I need; is that wrong?

This is the CIP (ClusterIP) service, defined with symmetrical ports (8888 to 8888) to be as simple as possible (is that wrong?):

---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8888
      targetPort: 8888
  selector:
    app: jupyter-lab

  • The DNS name jupyter-lab.local resolves to the cluster's node IP addresses, but requests time out with no response: Failed to connect to jupyter-lab.local port 80: No route to host

  • firewall-cmd --list-all shows that port 80 is open on each node

This is the ingress definition for HTTP into the cluster (any node) on port 80 (is that wrong?):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io: /
spec:
  rules:
  - host: jupyter-lab.local
    - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-lab-cip
            port:
              number: 80

This is the deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-lab-dpt
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-lab
  template:
    metadata:
      labels:
        app: jupyter-lab
    spec:
      volumes:
        - name: jupyter-lab-home
          persistentVolumeClaim:
            claimName: jupyter-lab-pvc
      containers:
        - name: jupyter-lab
          image: docker.io/jupyter/tensorflow-notebook
          ports:
            - containerPort: 8888
          volumeMounts:
            - name: jupyter-lab-home
              mountPath: /var/jupyter-lab_home
          env:
            - name: "JUPYTER_ENABLE_LAB"
              value: "yes"

I can successfully access jupyter-lab via its NodePort at http://node2.local:30004 with this definition:

---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
    - port: 10003
      targetPort: 8888
      nodePort: 30004
  selector:
    app: jupyter-lab

How can I get ingress to my jupyter-lab at http://jupyter-lab.local?

  • the command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:

ingress-nginx-controller-admission 10.244.2.4:8443 15m


Am I misconfiguring ports?

Are my "selector: app" definitions wrong?

Am I missing a part?

How can I debug what's going on?


Other details

  • I was getting this error when applying an ingress with kubectl apply -f default-ingress.yml:

    Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
    

    The command kubectl delete validatingwebhookconfigurations --all-namespaces removed the validating webhook. Was that wrong to do?

  • I've opened port 8443 on each node in the cluster

Kickaha
  • Did you check if the port 8443 looks opened from the node where the ingress controller Pod is currently running? – AndD Feb 12 '21 at 20:50
  • Does the `kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission` show any ip addresses in ENDPOINTS column? – Matt Feb 15 '21 at 11:35
  • I'd managed to progress by deleting the validating webhook: `kubectl get validatingwebhookconfigurations --all-namespaces` `kubectl delete validatingwebhookconfigurations`. But there was still no response @ http://jupyter-lab.local. The cluster has been reset; I'll try these suggestions – Kickaha Feb 15 '21 at 11:48
  • @AndD 8443 is open on all nodes – Kickaha Feb 15 '21 at 12:22
  • @Matt Yes there is an ip address in the ENDPOINTS column, (see above) – Kickaha Feb 15 '21 at 12:39
  • what is your k8s version? – Matt Feb 15 '21 at 13:06
  • @Matt The most recent I believe, `kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"} 1.20.2` – Kickaha Feb 15 '21 at 13:11

1 Answer

Your Ingress is invalid; try the following:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jupyter-lab.local
    http:                       # <- removed the -
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-lab-cip    # <- must match the ClusterIP service defined below, which exposes port 8888
            port:
              number: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8888
      targetPort: 8888
  selector:
    app: jupyter-lab
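
After applying both objects it is worth verifying that the ingress actually resolved its backend. A couple of checks (expected output elided):

$ kubectl get endpoints jupyter-lab-cip           # should list the pod IP on port 8888
$ kubectl describe ingress jupyter-lab-ingress    # Backends should show the pod IP, not <none>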

If I understand correctly, you are trying to expose jupyter-lab through the nginx ingress proxy and make it accessible on port 80.

Run the following command to check which node ports the nginx ingress service uses:

$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.96.240.73   <none>        80:30816/TCP,443:31475/TCP   3h30m

In my case that is port 30816 (for http) and 31475 (for https).

Using the NodePort type you can only use ports in the range 30000-32767 (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change that range using the kube-apiserver flag --service-node-port-range, e.g. by setting it to 80-32767.
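
On a kubeadm-based cluster like yours the flag is set in the kube-apiserver static pod manifest on the control-plane node. A minimal sketch, assuming the default kubeadm file location (the kubelet restarts the apiserver automatically after the file is saved):

# /etc/kubernetes/manifests/kube-apiserver.yaml (on node1.local)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # <- add this flag
    # ... keep the remaining flags and fields as generated by kubeadm

With the range widened, set nodePort: 80 in your ingress-nginx-controller service: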

apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    helm.sh/chart: ingress-nginx-3.23.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 80         # <- HERE
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 443         # <- HERE
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort

It is generally not advised to change service-node-port-range, though, since you may encounter issues if you use ports that are already open on the nodes (e.g. port 10250, which is opened by the kubelet on every node).


What might be a better solution is to use MetalLB.


EDIT:

How can I get ingress to my jupyter-lab at http://jupyter-lab.local?

Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:

  ports:
    - name: http
      containerPort: 80
      hostPort: 80         # <- add this line
      protocol: TCP
    - name: https
      containerPort: 443
      hostPort: 443        # <- add this line
      protocol: TCP
    - name: webhook
      containerPort: 8443
      protocol: TCP

and apply the changes:

kubectl apply -f deploy.yaml

Now run:

$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH>  -owide
NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj   1/1     Running   0          97s   172.17.0.6   <node_name>   <none>           <none>

Notice the <node_name> in the NODE column. This is the name of the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
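
For example, if the controller pod landed on node2.local, the entry could look like this (the address below is a placeholder; use that node's actual IP):

192.168.1.12   jupyter-lab.local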

It should work now (go to http://jupyter-lab.local to check it), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node, it will stop working (and it will stay like this until you change the IP in the /etc/hosts file). It's also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.


If you need a failure-tolerant solution, use MetalLB and create a service of type LoadBalancer for the nginx ingress controller.

I haven't tested it but the following should do the job, assuming that you correctly configured MetalLB:

kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer
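
For reference, a minimal sketch of a layer-2 MetalLB configuration (this uses the ConfigMap format of MetalLB v0.9; the address range is an example: pick unused addresses from your LAN subnet, not node addresses):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # example range; must be unused on your LAN

If you'd rather keep the exposed service as a yaml manifest instead of the kubectl expose command, append --dry-run=client -oyaml to it to generate one.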
Matt
  • I have tried this, but am getting `Failed to connect to jupyter-lab.local port 80: No route to host` from curl. DNS is resolving jupyter-lab.local to all of my node IP addresses. Is this a firewall issue? Do I need port 80 open on all nodes? – Kickaha Feb 15 '21 at 13:14
  • How did you set up your dns? What is it pointing to? Does it work when you use curl with `-H "Host: jupyter-lab.local"` and the ip address directly, without using the domain name? (by *work* I mean returns some html, and not errors) – Matt Feb 15 '21 at 13:21
  • In my case it looks like the following: `curl "192.168.39.67:30816/lab?token=77255d9011e7532341438d1d924fb4c71d654c350bb724d6" -H "Host: jupyter-lab.local"`, where 192.168.39.67 is my minikube VM's IP and 30816 is the nodeport of the ingress nginx service, and of course you need to find your own token – Matt Feb 15 '21 at 13:25
  • I have also noticed that you had some ports messed up; try applying the yaml from my answer after the edit – Matt Feb 15 '21 at 13:26
  • I've tried reapplying like you suggested and added more detail to my question re host lookups ... fyi -H is for adding a header – Kickaha Feb 15 '21 at 13:53
  • I know it's for adding a header – Matt Feb 15 '21 at 13:54
  • Thanks for the help, and the suggestion. Before I quit and restart with MetalLB I'll rewrite my question. ( I know more now and can be clearer). I'll pop another comment when I'm ready and maybe you would have another look. Maybe the rewrite will help me solve my issue! .. oh yeah I did curl the IP while specifying the host name (thanks for that tip, who needs a DNS server anyway) it responded with HTML -- a 404 but html... so that's something new. – Kickaha Feb 16 '21 at 13:39
  • I made my update, I suspect I'm missing a concept. – Kickaha Feb 18 '21 at 14:11
  • Check out my edit. Also read more about [k8s services](https://kubernetes.io/docs/concepts/services-networking/service/) and [k8s dns](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) because I am not sure you get all the concepts correctly. – Matt Feb 18 '21 at 15:15
  • Hi, I checked your edit. It's the fault tolerant solution I need. I installed MetalLB and added a config map. But I'm not sure what IP addresses to use. Should I use the IPs of the nodes? Or does MetalLB act like a network interface that receives IP traffic on the range I set? I'm also wondering if the command you use to expose the deployment can be a yaml? – Kickaha Mar 08 '21 at 13:51
  • [Docs: Address Allocation](https://metallb.universe.tf/concepts/#address-allocation) So use any private addresses from the same subnet as your LAN subnet. But don't use node addresses. – Matt Mar 08 '21 at 14:45
  • Answering your second question: [docs: layer-2-mode-arp-ndp](https://metallb.universe.tf/concepts/#layer-2-mode-arp-ndp). *"In layer 2 mode, one machine in the cluster takes ownership of the service, and uses standard address discovery protocols (ARP for IPv4, NDP for IPv6) to make those IPs reachable on the local network"* – Matt Mar 08 '21 at 14:46
  • Answering your third question: yes, you can make a yaml from a command. To get a yaml out of it, run the command with the `--dry-run=client -oyaml` flags – Matt Mar 08 '21 at 14:48
  • Thanks for the help, setting up MetalLB was the answer I needed, once I got it configured correctly. One last query if it's okay: do I need to keep the CIP service and the Ingress and the LoadBalancer, or do I only need 2 of them? – Kickaha Mar 11 '21 at 11:47
  • The loadbalancer should be pointing to the ingress controller, the ingress controller (configured with the ingress object) points to the CIP service, and the CIP service points to jupyter. So yes, you need all of them. – Matt Mar 11 '21 at 11:56
  • I've made another question, would you have a look? https://stackoverflow.com/questions/66585963/how-to-configure-a-k8s-reverse-proxy-service-with-metallb – Kickaha Mar 11 '21 at 15:55
  • Thanks a lot for the hostPort suggestion. I am still testing a single-node cluster myself and I had no idea why it wouldn't work! – Lethargos Sep 30 '21 at 19:59