
I set up a trivial Kubernetes YAML file (below) to test the nginx ingress. Nginx works as expected inside the cluster but isn't visible outside the cluster.

I'm running minikube with `minikube tunnel` and `minikube addons enable ingress`. When I `kubectl exec` into the nginx controller I can see nginx working and serving the test page, but when I try to hit it from outside I get `Failed to connect to 127.0.0.1 port 80: Connection refused`.

Save the following YAML as `stackoverflow.yaml`:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: cheese-app
  labels:
    app: cheese-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cheese-app
  template:
    metadata:
      labels:
        app: cheese-app
    spec:
      containers:
      - name: cheese-container
        image: errm/cheese:stilton
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: cheese-svc
spec:
  selector:
    app: cheese-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheese-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: cheese-svc
          servicePort: 80
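
As an aside, the `extensions/v1beta1` Ingress API is deprecated and was removed in Kubernetes 1.22, so on newer clusters the same resource has to be written against the `networking.k8s.io/v1` schema. A sketch of the equivalent manifest is below; the `ingressClassName: nginx` line is an assumption about which controller should claim it (it is not in the original file):

```yaml
# Sketch of the same Ingress in the networking.k8s.io/v1 schema
# (required on Kubernetes 1.22+). ingressClassName is an assumption,
# not part of the original manifest.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cheese-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cheese-svc
            port:
              number: 80
```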

Then initialize minikube

minikube start
minikube addons enable ingress
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-system ingress-nginx/ingress-nginx
kubectl wait --for=condition=ready pod --all --timeout=120s
kubectl get pods
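
Note that this sequence installs two nginx ingress controllers: one from the minikube `ingress` addon and one from the Helm chart. With `extensions/v1beta1` Ingresses, the `kubernetes.io/ingress.class` annotation decides which controller claims a resource, so an ambiguity here could mean the wrong controller (or both) serves the rule. A sketch of pinning the Ingress to one controller follows; the annotation value `nginx` is an assumption about how the controllers were configured:

```yaml
# Sketch: the pre-v1 class annotation that tells exactly one nginx
# controller to claim this Ingress (the value "nginx" is an assumption).
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheese-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: cheese-svc
          servicePort: 80
```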

Start a minikube tunnel in another terminal window

minikube tunnel

And apply the yaml file

kubectl apply -f ./stackoverflow.yaml
kubectl wait --for=condition=ready pod --all --timeout=120s
kubectl get pods
kubectl get svc

For reference, my pods and services are:

NAME                                                       READY   STATUS    RESTARTS   AGE
cheese-app-74ddc9f7c6-xpjwx                                1/1     Running   0          89m
ingress-system-ingress-nginx-controller-656bf75d85-fkzzp   1/1     Running   0          90m

NAME                                                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
cheese-svc                                          ClusterIP      10.104.243.39   <none>        80/TCP                       82m
ingress-system-ingress-nginx-controller             LoadBalancer   10.106.203.73   127.0.0.1     80:30635/TCP,443:32594/TCP   83m
ingress-system-ingress-nginx-controller-admission   ClusterIP      10.101.103.74   <none>        443/TCP                      83m
kubernetes                                          ClusterIP      10.96.0.1       <none>        443/TCP                      84m

At this point `curl 127.0.0.1/` should theoretically return the sample web page, but instead it reports connection refused.

As a diagnostic step, I tried using `kubectl exec` to curl the page from the nginx server inside the cluster. That works as long as I curl nginx via its own 127.0.0.1 endpoint. If I curl it using its CLUSTER-IP (10.106.203.73 in this cluster), I get nothing.

kubectl exec --stdin --tty ingress-system-ingress-nginx-controller-656bf75d85-fkzzp -- curl 127.0.0.1/ -i
...works...

kubectl exec --stdin --tty ingress-system-ingress-nginx-controller-656bf75d85-fkzzp -- curl 10.106.203.73/ -i
...nothing...

curl 127.0.0.1/
...nothing...

I haven't modified /etc/nginx/nginx.conf in any way; it's the default config auto-generated when the Kubernetes ingress was set up.
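
One way to narrow this down (not tried in the post above) would be to bypass the ingress entirely and expose the app through a NodePort service, then curl `$(minikube ip):30080`; if that also times out, the problem is below the ingress layer. A sketch, with the service name and nodePort value chosen arbitrarily for illustration:

```yaml
# Hypothetical NodePort variant of cheese-svc, for isolating whether the
# failure is in the ingress or in the cluster's external reachability.
apiVersion: v1
kind: Service
metadata:
  name: cheese-svc-nodeport   # assumed name, not in the original manifest
spec:
  type: NodePort
  selector:
    app: cheese-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080   # assumed free port in the default 30000-32767 range
```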

Don Alvarez
  • Can you run the `kubectl get ing cheese-ingress` command and check the `ADDRESS` ? Which minikube [driver](https://minikube.sigs.k8s.io/docs/drivers/) are you using ? – matt_j Mar 30 '21 at 12:51
  • Hi @matt_j - I'm running minikube v1.18.1 on Win 10 with the docker driver. `kubectl get ing cheese-ingress` reports CLASS none, HOSTS *, ADDRESS 127.0.0.1, PORTS 80. I've also done recent minikube delete and docker killing all containers and restarting just to make sure it's a clean environment. – Don Alvarez Mar 30 '21 at 15:29
  • I'm not sure exactly how the `docker` driver works on Windows, but it seems `127.0.0.1` address may cause problems (look at: [minikube ip returns 127.0.0.1](https://github.com/kubernetes/minikube/issues/7344)). Can you use a different driver (e.g. virtualbox) ? I've tested your scenario with minikube `v1.18.1` on Win 10 with the `virtualbox` driver and it works as expected. – matt_j Apr 01 '21 at 10:17
  • Thanks @matt_j - I finally just spun up a tiny Azure Kubernetes cluster so everything would just work after burning too much time here on some kind of minikube bug that blocks ingress and too much time before that on a bug in the Docker for Windows Kubernetes install that blocks local persistent storage. Enough people on the web are reporting each of these issues that it's time for me to say these local test environments simply aren't worth the time it takes to debug them. I do very much appreciate your help here. – Don Alvarez Apr 02 '21 at 12:54

3 Answers


From within the cluster this link should work: `http://<service-name>.<namespace>:<port>`. In your case it will be `http://cheese-svc.default:80`.

To access it from outside the cluster, the service is accessible on nodePort 30635: `http://10.106.203.73:30635`

subudear
  • Hi @subudear - I'm using an ingress so there is no nodePort specified in the service definition (on purpose). When I follow your suggestion and curl 10.106.203.73:30635 from outside I get Failed to connect to 10.106.203.73 port 30635: Timed out. Do you get something different? I can access nginx from inside the cluster via cheese-svc.default as you suggest but the problem is I can't access the site from outside the cluster. – Don Alvarez Mar 29 '21 at 15:07

As you are using minikube, get the IP of your one-node minikube cluster using `minikube ip`.

And then curl http://<minikube_ip>:<nodePort>

rock'n rolla
  • Hi @rock'n rolla - there is no nodePort for the service because it uses an ingress – Don Alvarez Mar 29 '21 at 22:48
  • I ain't referring to your `cheese-svc` service. I'm referring to the nodePort of the nginx ingress controller service. On a minikube cluster, you're not going to get an external ip out of a nginx ingress controller LoadBalancer type service. You gotta use the `minikube ip` command to get the IP and curl it using the nodePort (30635 in this case) of the nginx ingress service. It will show up a cheese pic, even while accessing outside the cluster. – rock'n rolla Mar 29 '21 at 23:22
  • Thanks for the clarification @rock'n rolla. I get a timeout when curling the minikube IP on the ingress port. With this example do you get something different? My minikube IP is 192.168.49.2, the ingress loadBalancer port listed above is 30635. When I curl 192.168.49.2:30635 I get nothing (just a timeout). Does it work for you? – Don Alvarez Mar 30 '21 at 11:42
  • Yes, I get the same cheese pic with minikube ip, which I get when I run the command which worked for you: `kubectl exec --stdin --tty ingress-system-ingress-nginx-controller-******* -- curl 127.0.0.1/ -i` – rock'n rolla Mar 30 '21 at 12:09
  • Can I ask what OS you're running @rock'n rolla? I'm seeing these issues while running on Win 10. – Don Alvarez Mar 30 '21 at 15:35
  • I'm on a macOS ¯\_(ツ)_/¯ – rock'n rolla Mar 30 '21 at 15:46

My solution was to conclude that minikube isn't worth the effort. I burned a couple pennies spinning up a tiny Azure Kubernetes cluster for a couple minutes and everything just worked instantly.

I had assumed running locally on minikube or in the Kubernetes cluster that Docker for Windows installs would be quicker and easier than running in a cloud instance, but I was wrong. The number of small weird annoying blockers with these local test environments is just too high. Your mileage may vary but I'm definitely willing to pay a few cents to test my builds if it saves me literally days of unsuccessful debugging of local dev environments.

Don Alvarez