
I was looking for a load-balancing technique with health checks for making my worker nodes communicate with the API servers.

Kubernetes itself has a service called "kubernetes" whose endpoints are the API servers.

I put the domain of this service into the kubeconfig of the worker nodes, and it is behaving well.

The only concern is that there are no health checks of the API servers; if any of them goes down, the service will still forward traffic to that node.

Can I configure some health check here?

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-09-06T07:54:44Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "96"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
spec:
  clusterIP: 10.32.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
```
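For reference, pointing a worker node's kubeconfig at this service looks roughly like the fragment below, using the clusterIP from the Service above; the cluster name and CA path are hypothetical placeholders, not from my setup.

```yaml
# Sketch of the relevant kubeconfig cluster entry (name and CA path are
# placeholders; the server URL is the clusterIP of the "kubernetes" Service).
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: https://10.32.0.1:443
  name: in-cluster
```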

I know I can use an LB like HAProxy, or a cloud provider's LB, but I want to achieve this inside the cluster only.

2 Answers


It's magic ✨. The endpoints of the service are managed directly by the API servers themselves; that's why it has no selector. The Service is really only there for compatibility with cluster DNS. It is indeed what you use to talk to the API from inside the cluster, and it is generally detected automatically by most client libraries.

coderanger
  • The [magic](https://github.com/kubernetes/kubernetes/blob/fb87f72b882abc28c7331c3d089d0c096f31dd26/pkg/master/reconcilers/mastercount.go#L62-L137) :) – Matt Mar 19 '20 at 01:42
  • I don't understand your question, yes that is how it works. When the API servers next reconcile themselves, the missing IP will be removed from the endpoints. Matt linked you the code above. – coderanger Mar 19 '20 at 21:26
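As the comments note, the reconciliation that prunes a dead API server's IP from the endpoints is driven by kube-apiserver flags. A hedged sketch of the relevant fragment of a static-pod manifest follows; the manifest layout is typical rather than taken from the question, and the flag names are per the kube-apiserver documentation.

```yaml
# Fragment of a kube-apiserver static-pod manifest (layout illustrative only).
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # "lease" reconciler (the default since v1.10): each apiserver keeps
    # renewing a lease, and stale members are removed from the "kubernetes"
    # Service's endpoints when their lease expires.
    - --endpoint-reconciler-type=lease
    # With the older "master-count" reconciler (the code linked above),
    # this flag must match the number of API servers instead:
    # - --apiserver-count=3
```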

If you want to connect to the Kubernetes API server from within the cluster itself, i.e. from a pod, then you can use the kubernetes service (via port 443) that is created by default and available in all namespaces. You should not configure an external load balancer for that case and connect through it, because then you are routing traffic from the cluster out to the load balancer only to come back inside the cluster to reach the API server.

You should configure an external load balancer for the Kubernetes API server (port 6443) and use it when you want to connect from outside the cluster, e.g. via kubectl with a kubeconfig file.
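If you do go the external-LB route the question mentions, HAProxy can provide exactly the health checking the asker wants. The fragment below is a sketch, not a production config: the backend IPs are placeholders, and it assumes the API servers expose `/healthz` over HTTPS on 6443 (true for typical kube-apiserver deployments).

```
# Hypothetical HAProxy config for three API servers; IPs are placeholders.
frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend apiservers

backend apiservers
    mode tcp
    balance roundrobin
    # Active health check against the apiserver's /healthz endpoint over TLS.
    option httpchk GET /healthz
    http-check expect status 200
    server master-0 10.0.0.10:6443 check check-ssl verify none
    server master-1 10.0.0.11:6443 check check-ssl verify none
    server master-2 10.0.0.12:6443 check check-ssl verify none
```

A failed `/healthz` probe takes the backend out of rotation, which is precisely the behaviour the in-cluster `kubernetes` Service does not provide on its own.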

Arghya Sadhu