
I have the following setup in a local minikube Kubernetes cluster:

  1. namespace customer-a
     • 1 deployment -> prints "Hi from Customer A"
     • 1 LoadBalancer type Service
     • 1 ingress -> host customer-a.example.com
  2. namespace customer-b
     • 1 deployment -> prints "Hi from Customer B"
     • 1 LoadBalancer type Service
     • 1 ingress -> host customer-b.example.com
  3. namespace customer-c
     • 1 deployment -> prints "Hi from Customer C"
     • 1 LoadBalancer type Service
     • 1 ingress -> host customer-c.example.com
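
For reference, each namespace follows the same pattern; here is a minimal sketch of the customer-a Ingress, with the service name, port, and host taken from the output below (the pathType is an assumption):

kubectl apply -n customer-a -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: customer-a
spec:
  ingressClassName: nginx
  rules:
    - host: customer-a.example.com
      http:
        paths:
          - path: /
            pathType: Prefix   # assumed; Prefix is the common choice
            backend:
              service:
                name: customer-a
                port:
                  number: 80
EOF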

Since I am running this setup in a minikube cluster, I have to use the minikube tunnel command to access the ingress.
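
For completeness, these are the two standard minikube commands involved:

# enable the NGINX ingress controller bundled with minikube
minikube addons enable ingress

# run in a separate terminal and leave it running; it routes
# LoadBalancer traffic to 127.0.0.1 (usually prompts for sudo)
minikube tunnel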

Here's what my current setup looks like:

// kubectl get ing,svc -n customer-a

NAME                                   CLASS   HOSTS                    ADDRESS        PORTS   AGE
ingress.networking.k8s.io/customer-a   nginx   customer-a.example.com                  80      11s

NAME                 TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/customer-a   LoadBalancer   10.96.39.62   127.0.0.1     80:30048/TCP   11s


// kubectl get ing,svc -n customer-b
NAME                                   CLASS   HOSTS                    ADDRESS        PORTS   AGE
ingress.networking.k8s.io/customer-b   nginx   customer-b.example.com   192.168.49.2   80      30s

NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/customer-b   LoadBalancer   10.110.126.198   127.0.0.1     80:31292/TCP   30s


// kubectl get ing,svc -n customer-c
NAME                                   CLASS   HOSTS                    ADDRESS        PORTS   AGE
ingress.networking.k8s.io/customer-c   nginx   customer-c.example.com   192.168.49.2   80      6m36s

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/customer-c   LoadBalancer   10.104.99.195   127.0.0.1     80:32717/TCP   6m36s

As shown above, the EXTERNAL-IP of all the LoadBalancer type Services is the same, so to differentiate the traffic I rely on the HOSTS (customer-a.example.com, customer-b.example.com, customer-c.example.com).

I have mapped the IP to the hostnames in /etc/hosts as below:

127.0.0.1 customer-a.example.com customer-b.example.com customer-c.example.com
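
As a sanity check, the same mapping can also be forced per request with curl's --resolve flag, independent of /etc/hosts:

# pin the hostname to 127.0.0.1 for this single request
curl --resolve customer-a.example.com:80:127.0.0.1 http://customer-a.example.com/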

When I try to access each of the URLs, I always get the same response: "Hi from Customer C".

// curl -kv http://customer-a.example.com

> GET / HTTP/1.1
> Host: customer-a.example.com
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Thu, 29 Dec 2022 00:24:49 GMT
< server: uvicorn
< content-length: 20
< content-type: application/json
<
* Connection #0 to host customer-a.example.com left intact
{"response":"Hi from Customer C"}


// curl -kv http://customer-b.example.com

> GET / HTTP/1.1
> Host: customer-b.example.com
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Thu, 29 Dec 2022 00:24:49 GMT
< server: uvicorn
< content-length: 20
< content-type: application/json
<
* Connection #0 to host customer-b.example.com left intact
{"response":"Hi from Customer C"}


// curl -kv http://customer-c.example.com

> GET / HTTP/1.1
> Host: customer-c.example.com
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< date: Thu, 29 Dec 2022 00:24:49 GMT
< server: uvicorn
< content-length: 20
< content-type: application/json
<
* Connection #0 to host customer-c.example.com left intact
{"response":"Hi from Customer C"}

Can someone help me find the issue here? I assume it has something to do with minikube tunnel?


1 Answer


This has nothing to do with minikube tunnel. The issue is that all of your services use the same port to communicate outside the cluster. With TCP, two applications on the same machine cannot listen on the same port, so you need to configure distinct port numbers for the three deployments and map them accordingly in your load balancer, ingress, or NGINX configuration.
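
A minimal sketch of that suggestion for customer-a, assuming the pods carry the label app: customer-a and the uvicorn container listens on 8000 (both assumptions); repeat with, say, 8082 and 8083 for the other two namespaces:

kubectl apply -n customer-a -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: customer-a
spec:
  type: LoadBalancer
  selector:
    app: customer-a      # assumed pod label
  ports:
    - port: 8081         # distinct external port per customer
      targetPort: 8000   # assumed container port (uvicorn default)
EOF

The ingress or NGINX configuration would then reference the matching port for each customer.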