I have already set up a service in a k3s cluster using the following manifest:
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myapp
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 9012
      targetPort: 9011
      protocol: TCP
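For completeness, the manifest above was applied the usual way (the filename is just a placeholder for wherever I saved it):

# create/update the Service in mynamespace (the namespace is set in the manifest)
kubectl apply -f myservice.yaml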
kubectl get svc -n mynamespace
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP                                  PORT(S)          AGE
minio       ClusterIP      None            <none>                                       9011/TCP         42m
myservice   LoadBalancer   10.32.178.112   192.168.40.74,192.168.40.88,192.168.40.170   9012:32296/TCP   42m
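As far as I understand, the node addresses in the EXTERNAL-IP column come from k3s's built-in ServiceLB (Klipper), which should create svclb-* pods for this service; on my k3s version (v1.21) I believe they live in kube-system, so they can be listed with something like:

# the svclb pods are what expose the LoadBalancer port (9012) on each node
kubectl get pods -n kube-system -o wide | grep svclb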
kubectl describe svc myservice -n mynamespace
Name: myservice
Namespace: mynamespace
Labels: app=myapp
Annotations: <none>
Selector: app=myapp
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.32.178.112
IPs: 10.32.178.112
LoadBalancer Ingress: 192.168.40.74, 192.168.40.88, 192.168.40.170
Port: <unset> 9012/TCP
TargetPort: 9011/TCP
NodePort: <unset> 32296/TCP
Endpoints: 10.42.10.43:9011,10.42.10.44:9011
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
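The Endpoints line shows two pods answering on the targetPort, so one sanity check (assuming the pod network 10.42.0.0/16 is reachable from a node, which it normally is with k3s's default flannel setup) would be to curl one endpoint directly:

# run from one of the nodes: hit a backing pod on the targetPort
curl -m 5 http://10.42.10.43:9011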
From the LoadBalancer Ingress addresses above, I assume I should be able to access the MinIO console at http://192.168.40.74:9012, but it is not reachable.
Error message:
curl: (7) Failed to connect to 192.168.40.74 port 9012: Connection timed out
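To rule out the Service-to-pod wiring itself, a port-forward should bypass ServiceLB and the NodePort entirely; if this works, the problem is somewhere in the node-level path rather than in the Service (sketch only, the curl goes in a second terminal):

# forward the Service port to localhost, skipping the load balancer path
kubectl -n mynamespace port-forward svc/myservice 9012:9012
# in another terminal
curl -m 5 http://127.0.0.1:9012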
Furthermore, if I execute:
kubectl get node -o wide -n mynamespace
NAME           STATUS   ROLES                  AGE     VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION       CONTAINER-RUNTIME
antonis-dell   Ready    control-plane,master   6d      v1.21.2+k3s1   192.168.40.74    <none>        Ubuntu 18.04.1 LTS               4.15.0-147-generic   containerd://1.4.4-k3s2
knodeb         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.88    <none>        Raspbian GNU/Linux 10 (buster)   5.4.51-v7l+          containerd://1.4.4-k3s2
knodea         Ready    worker                 5d23h   v1.21.2+k3s1   192.168.40.170   <none>        Raspbian GNU/Linux 10 (buster)   5.10.17-v7l+         containerd://1.4.4-k3s2
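Since the LoadBalancer port is also mapped to NodePort 32296 on every node, two more checks I could still run are hitting that NodePort on each node's INTERNAL-IP and grepping a node's NAT rules for the two ports (the 5-second timeout is arbitrary, and I am assuming kube-proxy runs in its default iptables mode):

# try the NodePort on every node
for ip in 192.168.40.74 192.168.40.88 192.168.40.170; do
  curl -m 5 -s -o /dev/null -w "$ip:32296 -> HTTP %{http_code}\n" http://$ip:32296
done

# on a node: the kube-proxy/ServiceLB NAT rules for ports 9012 and 32296 should show up here
sudo iptables -t nat -S | grep -E '9012|32296'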
As shown in the node listing, the INTERNAL-IPs of the nodes are the same as the EXTERNAL-IPs reported by the LoadBalancer. Am I doing something wrong here?