
I'm trying to run a K8s cluster on bare-metal servers and I have a problem accessing the apps (pods) from outside the cluster.

Just to summarize my setup:

  • a NodePort-exposed ingress-nginx controller with ports for HTTP (30255) and HTTPS (32673), running on one node; netstat shows the ports are open
  • a simple Hello World pod listening on port 8080 (verified via a NodePort that it actually works)

When I try to open the site through the NodePort of the ingress-nginx controller, I receive a 404 error:

curl --verbose --header 'Host: hello-world.info' http://localhost:30255/

*   Trying 127.0.0.1:30255...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 30255 (#0)
> GET / HTTP/1.1
> Host: hello-world.info
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< Date: Mon, 20 Feb 2023 00:58:36 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
< 
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host localhost left intact

I have no idea what I'm doing wrong, because it should be straightforward. Please see the configurations below.

Configuration of the Ingress for the hello-world app (the "web" service):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$1"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"hello-world.info","http":{"paths":[{"backend":{"service":{"name":"web","port":{"number":8080}}},"path":"/","pathType":"Prefix"}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  creationTimestamp: "2023-02-20T00:57:59Z"
  generation: 1
  managedFields:
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:nginx.ingress.kubernetes.io/rewrite-target: {}
      f:spec:
        f:rules: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-02-20T00:57:59Z"
  name: example-ingress
  namespace: default
  resourceVersion: "11184494"
  uid: 24e8a8c6-0b44-49e6-bef8-23f51ab549a6
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8080
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
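
For reference, the manifest as I originally applied it (reconstructed from the `last-applied-configuration` annotation above) is shown below. Note that it does not set `spec.ingressClassName`; I am not sure whether my controller setup requires that field to pick the Ingress up:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: example-ingress
  namespace: default
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - backend:
          service:
            name: web
            port:
              number: 8080
        path: /
        pathType: Prefix
```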

YAML for the "web" Service of the hello-world app:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-02-19T23:25:50Z"
  labels:
    app: web
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports: {}
        f:selector: {}
        f:sessionAffinity: {}
    manager: kubectl-expose
    operation: Update
    time: "2023-02-19T23:25:50Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:ports:
          k:{"port":8080,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:type: {}
    manager: agent
    operation: Update
    time: "2023-02-20T00:21:28Z"
  name: web
  namespace: default
  resourceVersion: "11174849"
  uid: 68a3fb17-61c0-46c1-9c5f-fb120f66fcc0
spec:
  clusterIP: 10.105.221.158
  clusterIPs:
  - 10.105.221.158
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: web
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
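
The service was created with `kubectl expose` (per the `kubectl-expose` manager in the managedFields above). Stripped of the server-populated fields, the equivalent declarative manifest would be roughly this sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: web
  type: ClusterIP
```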

YAML for the ingress-nginx controller Service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.5.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"ipFamilies":["IPv4"],"ipFamilyPolicy":"SingleStack","ports":[{"appProtocol":"http","name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"appProtocol":"https","name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"NodePort"}}
  creationTimestamp: "2023-01-24T15:55:44Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/part-of: {}
          f:app.kubernetes.io/version: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ipFamilies: {}
        f:ipFamilyPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":443,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2023-01-24T15:55:44Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:ports:
          k:{"port":443,"protocol":"TCP"}:
            f:targetPort: {}
    manager: agent
    operation: Update
    time: "2023-02-20T00:12:12Z"
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "11172404"
  uid: cd4b69c6-7e8e-4b12-a5e9-16093737979c
spec:
  clusterIP: 10.102.120.71
  clusterIPs:
  - 10.102.120.71
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 30255
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 32673
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

It feels like the requests are not being recognized by the NodePort-exposed nginx. In the ingress-nginx controller pod logs I see only the following message for each request I send (10.244.3.1 is the internal cluster address of the node where the ingress-nginx controller is running and from which I am making the curl requests):

2023/02/20 01:06:25 [info] 630#630: *282876682 client 10.244.3.1 closed keepalive connection

This is my first K8s cluster, so it is probably something stupid in the config, but I have been sitting on this for two weeks straight and I am out of ideas and Google search possibilities. Thank you.

  • Is the curl request being made from master node / outside k8s network. are you able to reach master node ip using either ping or any other tool. – Nataraj Medayhal Feb 20 '23 at 08:52
  • Hello @NatarajMedayhal . Curl was executed on the node, where the ingress nginx is running with exposed ports via nodeport. When I tried that from other servers the output is the same. Firewall is wide open in between these servers in cluster and I do not see any conflict in Rancher. – filipstrapinadotcz Feb 21 '23 at 11:20

0 Answers