
I'm trying to achieve load balancing of gRPC messages using linkerd on a k8s cluster.

The k8s cluster is set up using microk8s. k8s is version 1.23.3 and linkerd is version stable-2.11.1.

I have a server and a client app, both written in C#. The client sends 100 messages over a single stream, and the server responds to each with a message. The server runs in a deployment that is replicated 3 times.

Next to the deployment there is a NodePort service so the client can access the server.

deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  replicas: 3
  selector:
    matchLabels:
      app: greeter
  template:
    metadata:
      labels:
        app: greeter
    spec:
      containers:
        - name: greeter
          image: grpc-service-image
          imagePullPolicy: "Always"
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: "0.5"
---
apiVersion: v1
kind: Service
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31111
    protocol: TCP
  selector:
    app: greeter

To spin up the server deployment, I use the following command to make sure linkerd is injected into the deployment: cat deployment.yaml | linkerd inject - | kubectl apply -f -

With this setup the client can communicate with the service, but all communication goes to the same pod.

So my questions:

  • I have read somewhere that load balancing is done on the client side. Is this true? And does this mean that I need to add an ingress to make load balancing work? How exactly does load balancing work with linkerd and gRPC?
  • Does load balancing work with the NodePort setup, or is a different service type needed?
  • Any suggestions on how to fix this?
NM138

1 Answer


As a maintainer of gRPC said in Proxy load-balancing with GRPC streaming requests,

Streaming RPCs are stateful and so all messages must go to the same backend.
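This is why all 100 messages land on the same pod: Linkerd, like most L7 balancers, balances at the request level, and a gRPC stream is a single HTTP/2 request. A toy Python simulation of that behavior (the pod names are made up, standing in for the three greeter replicas):

```python
import itertools

# Hypothetical pod names, standing in for the three greeter replicas.
PODS = ["greeter-1", "greeter-2", "greeter-3"]

def route(requests):
    """Simulate a proxy that balances at the request level:
    each request (a unary call or a whole stream) goes to one pod."""
    rr = itertools.cycle(PODS)
    return [next(rr) for _ in requests]

# One stream carrying 100 messages is ONE request, so every
# message travels to the same pod.
print(route(["stream carrying 100 messages"]))

# 100 separate unary calls are 100 requests, so they spread
# evenly across the replicas.
calls = route(range(100))
print({p: calls.count(p) for p in PODS})
```

In other words, if the 100 messages were sent as 100 unary calls (or 100 short streams) instead of one long stream, the meshed proxy would already spread them across the replicas.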

You could add your own logic on top to do load balancing, since this will not be possible using the gRPC libraries' load-balancing features.

  • You could do this in the client. This follows the "thick client" approach to load balancing: get a list of available gRPC services, set up a connection to each of them, and take turns using each one (round-robin load balancing).

  • Alternatively, you could implement your own proxy load balancer which receives this stream, splits it into multiple streams, and forwards them to multiple services. This puts control of load balancing in the load balancer rather than in the client.
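As a rough illustration of the second option, here is a toy message-level splitter (the backend names and the message list are made up; a real proxy would manage actual gRPC streams and would have to deal with ordering and per-stream state):

```python
import itertools

def split_stream(messages, backends):
    """Toy proxy logic: fan the messages of one incoming stream
    out across one outgoing stream per backend, round-robin."""
    rr = itertools.cycle(backends)
    fanned = {b: [] for b in backends}
    for msg in messages:
        fanned[next(rr)].append(msg)
    return fanned

# 100 messages from one client stream, split across three backends.
streams = split_stream(range(100), ["greeter-1", "greeter-2", "greeter-3"])
print({b: len(msgs) for b, msgs in streams.items()})
```

Note that this only makes sense when the individual messages are independent of each other; as the quote above says, streaming RPCs are stateful in general, so splitting one may break the server's assumptions.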

I haven't tried either and IMHO this is not a use-case that gRPC supports well.

PS: This is not something that linkerd can take off your shoulders.

Ben Butterworth