
I have already set up a service in a k3s cluster using:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 9012 
    targetPort: 9011 
    protocol: TCP

kubectl get svc -n mynamespace

NAME            TYPE           CLUSTER-IP      EXTERNAL-IP                                PORT(S)          AGE
minio           ClusterIP      None            <none>                                     9011/TCP         42m
minio-service   LoadBalancer   10.32.178.112   192.168.40.74,192.168.40.88,192.168.40.170   9012:32296/TCP   42m

kubectl describe svc myservice -n mynamespace

Name:                     myservice
Namespace:                mynamespace
Labels:                   app=myapp
Annotations:              <none>
Selector:                 app=myapp
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.32.178.112
IPs:                      10.32.178.112
LoadBalancer Ingress:     192.168.40.74, 192.168.40.88, 192.168.40.170
Port:                     <unset>  9012/TCP
TargetPort:               9011/TCP
NodePort:                 <unset>  32296/TCP
Endpoints:                10.42.10.43:9011,10.42.10.44:9011
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

I assume from the above that I should be able to access the MinIO console at http://192.168.40.74:9012, but it is not possible.

Error message:

curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out

Furthermore, if I execute

kubectl get nodes -o wide

NAME           STATUS   ROLES                  AGE     VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
antonis-dell   Ready    control-plane,master   6d      v1.21.2+k3s1   192.168.40.74    <none>        Ubuntu 18.04.1 LTS               4.15.0-147-generic   containerd://1.4.4-k3s2
knodeb         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.88   <none>        Raspbian GNU/Linux 10 (buster)   5.4.51-v7l+          containerd://1.4.4-k3s2
knodea         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.170   <none>        Raspbian GNU/Linux 10 (buster)   5.10.17-v7l+         containerd://1.4.4-k3s2

As shown above, the INTERNAL-IPs of the nodes are the same as the EXTERNAL-IPs of the LoadBalancer. Am I doing something wrong here?
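For completeness, these are the checks I can run to narrow down where the timeout happens (names and namespace as in the manifests above; output will of course differ per cluster):

```shell
# Does the Service have endpoints, i.e. pods matching the selector?
kubectl get endpoints myservice -n mynamespace

# Bypass the load balancer: port-forward straight to the Service.
# If this works, the pods are healthy and the problem is in the svclb path.
kubectl port-forward -n mynamespace svc/myservice 9012:9012 &
curl -v http://127.0.0.1:9012

# Are the Klipper svclb pods for this Service running on every node?
kubectl get pods -A -o wide | grep svclb
```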

e7lT2P
  • Can you try to access it with the Port "32296" (http://192.168.40.74:32296) ? – CLNRMN Jul 14 '21 at 13:10
  • Yes, with no luck. – e7lT2P Jul 14 '21 at 13:13
  • Is this a tutorial you're following? If so, please share a link so others could reproduce the exact same cluster and order of steps. As for the last question, that looks absolutely normal considering [how loadbalancer on k3s works](https://rancher.com/docs/k3s/latest/en/networking/#service-load-balancer) – moonkotte Jul 15 '21 at 11:31
  • No, I am not following a tutorial. I have already shown the yaml files. Can you explain the last one? I did not understand it. – e7lT2P Jul 15 '21 at 11:43
  • I'll explain it later. Can you try `curl -vL 192.168.40.74:9012` ? `-v` stands for verbose and `-L` will follow any redirects if there are any. – moonkotte Jul 15 '21 at 12:27
  • I get Connection timed out. – e7lT2P Jul 15 '21 at 12:36
  • @e7lT2P okay, thank you. I'm posting the answer about how it should work. There's something wrong with your `minio` deployment. You can try first on `nginx` and see if this works. – moonkotte Jul 15 '21 at 12:46
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/234903/discussion-between-moonkotte-and-e7lt2p). – moonkotte Jul 15 '21 at 12:51

1 Answer


K3S cluster initial configuration

To reproduce the environment I created a two-node k3s cluster following these steps:

  1. Install the k3s control plane on the desired host:

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--write-kubeconfig-mode=644' sh -
    
  2. Verify it works:

    k3s kubectl get nodes -o wide
    
  3. To add a worker node, run this command on the worker host:

    curl -sfL https://get.k3s.io | K3S_URL=https://control-plane:6443 K3S_TOKEN=mynodetoken sh -
    

where K3S_URL is the control-plane URL (IP or DNS name).

K3S_TOKEN can be obtained with:

sudo cat /var/lib/rancher/k3s/server/node-token

You should have a running cluster:

$ k3s kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
k3s-cluster    Ready    control-plane,master   27m   v1.21.2+k3s1   10.186.0.17   <none>        Ubuntu 18.04.5 LTS   5.4.0-1046-gcp   containerd://1.4.4-k3s2
k3s-worker-1   Ready    <none>                 18m   v1.21.2+k3s1   10.186.0.18   <none>        Ubuntu 18.04.5 LTS   5.4.0-1046-gcp   containerd://1.4.4-k3s2

Reproduction and testing

I created a simple deployment based on the nginx image:

$ k3s kubectl create deploy nginx --image=nginx

And exposed it:

$ k3s kubectl expose deploy nginx --type=LoadBalancer --port=8080 --target-port=80

This means that the nginx container in the pod listens on port 80, while the service exposes it on port 8080 within the cluster:

$ k3s kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP               PORT(S)          AGE   SELECTOR
kubernetes   ClusterIP      10.43.0.1     <none>                    443/TCP          29m   <none>
nginx        LoadBalancer   10.43.169.6   10.186.0.17,10.186.0.18   8080:31762/TCP   25m   app=nginx

The service is accessible on either node IP (or localhost) via port 8080, and via the NodePort as well.
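This can be verified with curl from a node (IPs and NodePort taken from the output above; nginx's default page contains the string "Welcome to nginx!"):

```shell
# LoadBalancer port, on either node IP:
curl -s http://10.186.0.17:8080 | grep -o 'Welcome to nginx!'
curl -s http://10.186.0.18:8080 | grep -o 'Welcome to nginx!'

# The NodePort works too:
curl -s http://10.186.0.17:31762 | grep -o 'Welcome to nginx!'
```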

Also, taking into account the error you get (curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out), the service is configured but nothing responds behind it (it's not a 404 from an ingress or a connection refused).

Answer on second question - Loadbalancer

According to the official Rancher k3s documentation about LoadBalancer services, the Klipper Load Balancer is used. From its GitHub repo:

This is the runtime image for the integrated service load balancer in klipper. This works by using a host port for each service load balancer and setting up iptables to forward the request to the cluster IP.

From how the service loadbalancer works:

K3s creates a controller that creates a Pod for the service load balancer, which is a Kubernetes object of kind Service.

For each service load balancer, a DaemonSet is created. The DaemonSet creates a pod with the svclb prefix on each node.

The Service LB controller listens for other Kubernetes Services. After it finds a Service, it creates a proxy Pod for the service using a DaemonSet on all of the nodes. This Pod becomes a proxy to the other Service, so that for example, requests coming to port 8000 on a node could be routed to your workload on port 8888.

If the Service LB runs on a node that has an external IP, it uses the external IP.

In other words, yes, it is expected that the load balancer has the same IP addresses as the hosts' internal IPs. Every Service of type LoadBalancer in a k3s cluster gets its own DaemonSet on each node to forward traffic directly to the backing service.

For example, I created a second deployment, nginx-two, and exposed it on port 8090. You can see that there are two pods from the two different deployments AND four pods which act as load balancers (note the svclb prefix at the beginning of their names):

$ k3s kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-7m4v4       1/1     Running   0          47m   10.42.0.9    k3s-cluster    <none>           <none>
svclb-nginx-jc4rz            1/1     Running   0          45m   10.42.0.10   k3s-cluster    <none>           <none>
svclb-nginx-qqmvk            1/1     Running   0          39m   10.42.1.3    k3s-worker-1   <none>           <none>
nginx-two-6fb6885597-8bv2w   1/1     Running   0          38s   10.42.1.4    k3s-worker-1   <none>           <none>
svclb-nginx-two-rm594        1/1     Running   0          2s    10.42.0.11   k3s-cluster    <none>           <none>
svclb-nginx-two-hbdc7        1/1     Running   0          2s    10.42.1.5    k3s-worker-1   <none>           <none>
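The hostPort mechanism can be inspected on any of the svclb pods (the pod name is taken from the listing above; the exact output format may vary by k3s version):

```shell
# Each svclb container claims the service port as a hostPort on its node:
kubectl get pod svclb-nginx-jc4rz \
  -o jsonpath='{.spec.containers[*].ports}'

# On the node itself, the corresponding DNAT rules show up in iptables:
sudo iptables -t nat -L -n | grep 8080
```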

Both services have the same EXTERNAL-IPs:

$ k3s kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP               PORT(S)          AGE
nginx        LoadBalancer   10.43.169.6    10.186.0.17,10.186.0.18   8080:31762/TCP   50m
nginx-two    LoadBalancer   10.43.118.82   10.186.0.17,10.186.0.18   8090:31780/TCP   4m44s
moonkotte