
*Cross-posted from k3d GitHub Discussion: https://github.com/rancher/k3d/discussions/690

I am attempting to expose two services over two different ports. As an alternative, I'd also love to know how to expose them over the same port using different routes. I've followed a few articles and tried a lot of configurations. Let me know where I'm going wrong with the networking of k3d + k3s / kubernetes + traefik (+ klipper?)...

I posted an example: https://github.com/ericis/k3d-networking

The goal:

  • Reach "app-1" on host over port 8080
  • Reach "app-2" on host over port 8091

Steps

*See: files in repo

  1. Configure k3d cluster and expose app ports to load balancer

    ports:
      # map localhost to loadbalancer
      - port: 8080:80
        nodeFilters:
          - loadbalancer
      # map localhost to loadbalancer
      - port: 8091:80
        nodeFilters:
          - loadbalancer
    
  2. Deploy apps with "deployment.yaml" in Kubernetes and expose container ports

    ports:
      - containerPort: 80
    
  3. Expose the services within Kubernetes. Here, I've tried two methods.

    • Using CLI

      $ kubectl create service clusterip app-1 --tcp=8080:80
      $ kubectl create service clusterip app-2 --tcp=8091:80
      
    • Using "service.yaml"

      spec:
        ports:
        - protocol: TCP
          # expose internally
          port: 8080
          # map to app
          targetPort: 80
        selector:
          run: app-1
      
  4. Expose the services outside of Kubernetes using "ingress.yaml"

    backend:
      service:
        name: app-1
        port:
          # expose from kubernetes
          number: 8080
    
Eric Swanson
  • I've read many posts about how to expose two ports with Kubernetes, like https://stackoverflow.com/questions/45621474/how-to-create-kubernetes-service-with-kubectl-which-exposes-two-ports. However, I get confused about how `k3d` is managing ports when the cluster is created and what exactly it does with kubernetes (k3s), klipper, and traefik. – Eric Swanson Jul 27 '21 at 15:35
  • All your files look perfect. However, the question is whether a LoadBalancer even works in k3d? We only get an external IP when using cloud providers; with local setups like minikube and k3d we don't get the LB IP. Also, have you installed any ingress controller? – Harsh Manvar Jul 27 '21 at 20:02
  • Yes, k3d automates k3s, which comes with Traefik ingress controller and I believe the klipper load balancer – Eric Swanson Jul 27 '21 at 21:22
  • What is the actual issue here? If you want to expose services on different ports, you can use `type: NodePort` for your services. Your services will expose pods' ports to a free port on the node. And it will work. As for ingress, you have two rules, both point to `/` and it won't work this way. Please get familiar with [my answer about ingress and simple examples](https://stackoverflow.com/questions/68449554/ingress-rule-using-host/68460360#68460360). Also fresh k3s has `traefik v2` which uses its own `api`. – moonkotte Jul 28 '21 at 11:50

1 Answer


You either have to use an ingress, or you have to open ports on each individual node (k3d runs on Docker, so you have to expose the Docker ports).

Without opening a port during the creation of the k3d cluster, a NodePort service will not expose your app:

k3d cluster create mycluster -p 8080:30080@agent[0]

For example, this would open an "outside" port 8080 (on your localhost) and map it to port 30080 on the node; you can then use a NodePort service to connect the traffic from that port to your app:

apiVersion: v1
kind: Service
metadata:
  name: some-service
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: some-port
    nodePort: 30080
  selector:
    app: pgadmin
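
With a cluster created as in the example above (host port 8080 published to node port 30080), a quick sanity check from the host would be something like the following, assuming the service selector actually matches a running pod:

curl http://localhost:8080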

You can also open ports on the server node like: k3d cluster create mycluster -p 8080:30080@server[0]

Your apps can get scheduled to run on any node. If you force a pod onto a specific node (let's say you open a certain port on agent[0] and set up your .yaml files to use that port), for some reason the local-path Rancher storage class just breaks and will not create a persistent volume for your claim. You kind of have to get lucky and have your pod scheduled where you need it. (If you find a way to schedule pods on specific nodes without the storage provisioner breaking, let me know.)
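
For reference, pinning a pod to a specific node is normally done with a nodeSelector on the node's hostname. This is only a sketch, assuming the default k3d node name k3d-mycluster-agent-0 and an nginx image as a stand-in for your app (and with the caveat above that the local-path provisioner may break for pinned pods):

apiVersion: v1
kind: Pod
metadata:
  name: app-1
spec:
  # pin the pod to the agent node whose port was published at cluster creation
  nodeSelector:
    kubernetes.io/hostname: k3d-mycluster-agent-0
  containers:
  - name: app-1
    image: nginx
    ports:
    - containerPort: 80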

You can also map a whole range of ports, like: k3d cluster create mycluster --servers 1 --agents 1 -p "30000-30100:30000-30100@server[0]", but be careful with the number of ports you open; if you open too many, k3d will crash.

Using a load balancer is similar; you just have to open one port and map it to the load balancer:

k3d cluster create my-cluster --port 8080:80@loadbalancer

You then have to use an ingress (or the traffic won't reach your service):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
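
For the original goal here (two apps behind the same host port, distinguished by route), a single ingress with two path rules should work. This is only a sketch, assuming the app-1/app-2 services from the question (service ports 8080 and 8091) and that the apps can handle the /app-1 and /app-2 path prefixes (otherwise a strip-prefix or rewrite middleware is needed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /app-1
        pathType: Prefix
        backend:
          service:
            name: app-1
            port:
              number: 8080
      - path: /app-2
        pathType: Prefix
        backend:
          service:
            name: app-2
            port:
              number: 8091

With the 8080:80@loadbalancer mapping from the question, app-1 would then be reachable at http://localhost:8080/app-1 and app-2 at http://localhost:8080/app-2.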

I also think that an ingress will only route HTTP and HTTPS traffic; HTTPS should be done on port 443. Supposedly you can map both port 80 and port 443, but I haven't been able to get that to work (I think certificates need to be set up as well).
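
I haven't verified it myself, but publishing both entrypoints at cluster creation should look something like:

k3d cluster create mycluster -p 8080:80@loadbalancer -p 8443:443@loadbalancer

TLS certificates would still need to be configured for the HTTPS side to actually work.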

  • I've had "0.0.0.0:8080->50001/tcp" for the loadbalancer. I've also created an ingress with port "number: 50001" to route to my k8s service at 50001. However, I can't access it from the host using 8080. It says "curl: (52) Empty reply from server". What am I missing? – emeraldhieu Apr 17 '23 at 18:59