
Environment: Ubuntu 18.04 bare metal, cluster set up with kubeadm (single node)

I want to access the cluster via port 80. Currently I am able to reach it via the NodePort (domain.com:31668/) but not via port 80. I am using MetalLB. Do I need something else to handle incoming traffic?

So the current topology would be:

LoadBalancer > Ingress Controller > Ingress > Service

kubectl -n ingress-nginx describe service/ingress-nginx:

Name:                     ingress-nginx
Namespace:                ingress-nginx
Labels:                   app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
Annotations:              <none>
Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type:                     LoadBalancer
IP:                       10.99.6.137
LoadBalancer Ingress:     192.168.1.240
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  31668/TCP
Endpoints:                192.168.0.8:80
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  30632/TCP
Endpoints:                192.168.0.8:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason       Age   From                Message
  ----    ------       ----  ----                -------
  Normal  IPAllocated  35m   metallb-controller  Assigned IP "192.168.1.240"

Since this is a bare-metal environment, I am using MetalLB to provide LoadBalancer services.

metallb config:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
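
(For reference: newer MetalLB releases, v0.13 and later, no longer read this ConfigMap and configure the same layer-2 pool via CRDs instead. A rough equivalent of the config above, with illustrative resource names, would be:)

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
```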

Ingress controller YAML:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---

Output of `curl -v http://192.168.1.240` (executed on the server):

* Rebuilt URL to: http://192.168.1.240/
*   Trying 192.168.1.240...
* TCP_NODELAY set
* Connected to 192.168.1.240 (192.168.1.240) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.1.240
> User-Agent: curl/7.61.0
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< Server: nginx/1.15.6
< Date: Thu, 27 Dec 2018 19:03:28 GMT
< Content-Type: text/html
< Content-Length: 153
< Connection: keep-alive
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.6</center>
</body>
</html>
* Connection #0 to host 192.168.1.240 left intact

kubectl describe ingress articleservice-ingress

Name:             articleservice-ingress
Namespace:        default
Address:          192.168.1.240
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  host.com  
              /articleservice   articleservice:31001 (<none>)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /
Events:                                        <none>
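
(For reference, the Ingress described above corresponds roughly to a manifest like the following. This is a sketch reconstructed from the `describe` output, not the original file; `extensions/v1beta1` matches the controller version in use here.)

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: articleservice-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: host.com
    http:
      paths:
      - path: /articleservice
        backend:
          serviceName: articleservice
          servicePort: 31001
```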

curl -vH 'host: elpsit.com' http://192.168.1.240/articleservice/system/ipaddr

I can reach the ingress as expected from inside the server.
  • What is the output of `curl -v http://192.168.1.240` from your workstation? – mdaniel Dec 26 '18 at 19:57
  • Hi and thanks for reading. Attached the information above. – elp Dec 27 '18 at 19:04
  • Well, then it appears to be working as expected; if you actually define an `Ingress` resource, then you can try it out using `curl -vH 'host: some-virtual-host-you-define.example.com-orwhatever' http://192.168.1.240` I would expect the content of your Ingress-ed Service to materialize – mdaniel Dec 27 '18 at 19:24
  • Although, just for clarity, I specifically asked _from your workstation_ since asking from inside the cluster can lead to a false positive for any number of reasons; what I'd want to know is whether machines that are entirely separate from the cluster can connect successfully – mdaniel Dec 27 '18 at 19:25
  • No, totally different machines can not connect successfully, as the domain != ingress IP. So a better fitting question would be: how do I redirect incoming traffic on the domain to the ingress IP? Sorry, hope it is not confusing :) – elp Dec 28 '18 at 10:14
  • That is confusing, given that the way one maps domains to IP addresses is via a DNS `A` record, and has nothing to do with ingresses or anything in kubernetes. Did you already specify [an Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource) like I asked? – mdaniel Dec 28 '18 at 22:54
  • Sorry, server IP != ingress IP. Anyway, I have already defined an ingress controller / ingress. As the host I used the domain I actually use. But somehow I cannot curl it the way you told me. – elp Dec 29 '18 at 09:49
  • The command you used to test is invalid; the `host:` header is just that: only the **host**. It should not contain any path information. The correct command is `curl -vH 'host: host.com' http://192.168.1.240/articleservice/article/articlesystem/ipaddr` – mdaniel Dec 29 '18 at 18:32
  • Sorry for the misunderstanding. Got it now. The main question here still remains: how to route incoming traffic on `serverIP:80` to the load balancer? And is there a way to avoid defining the host? Otherwise it would not be possible to simply reach the cluster via the browser (port 80)? – elp Dec 30 '18 at 08:45
  • I regret that you've been tasked with setting this system up without understanding the success criteria. If you create a DNS entry for `host.com` that points to 192.168.1.240, your browser will automatically send the `host:` header -- it does that 100% of the time. If you want to try out what I'm saying, you can also edit the `/etc/hosts` (or `%WINDIR%\system32\drivers\etc\hosts`) on your machine, adding `192.168.1.240 host.com`, then your browser will think `host.com` really exists, and will go there. That's the process we're mimicking with `curl` because it's a boatload less typing – mdaniel Dec 30 '18 at 17:40
  • As for the port 80, that's the process you're seeing with curl above -- you can also teach the ingress controller to serve TLS certificates, too, and then if it is listening on :443 `https://host.com` will work just as well (it's very likely already listening on 443, but without any TLS configuration, it's very likely using a self-signed certificate just for demo purposes) – mdaniel Dec 30 '18 at 17:42
