
I need to be able to split traffic between two backend pods in a cluster, based on this schematic for a university project:

Traffic from clients going to two different pods

So far, I have accomplished this part of the diagram: pods with their containers, and services exposing them internally in the cluster.

All other parts of the project work fine. The part I'm having difficulty understanding is how to split traffic evenly between the gRPC-client (a backend in Node) and Redis Pub (a backend in Golang). I tried applying some configurations with ingresses and gateways, but nothing has worked. I'm pretty sure it has to be done with a load balancer, but I cannot get it to work. I also read something about an NGINX service, but I couldn't get that one to work either.

The behavior should be this:

1) The frontend consumes the API at http://some-ip:3000/ (the IP does not need to be static, and the port does not strictly have to be 3000; it can be any port).

2) Traffic from incoming clients consuming the API gets split evenly between the gRPC-client backend and the Redis-pub backend.

This is my configuration for both backend pods:

gRPC Backend

#gRPC
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grpc-deployment
  labels:
    app: grpc
  namespace: so1
spec:
  selector:
    matchLabels:
      app: grpc
  replicas: 1
  template:
    metadata:
      labels:
        app: grpc
    spec:
      containers:
        - name: grpc-server
          image: 'gcr.io/so1-proyecto-383722/grpc_server:latest'
          ports:
            - containerPort: 50051
        - name: grpc-client
          image: 'gcr.io/so1-proyecto-383722/grpc_client:latest'
          ports:
            - containerPort: 50061
---
#gRPC Server Service
apiVersion: v1
kind: Service
metadata:
  name: grpc-service
  labels:
    app: grpc
  namespace: so1

spec:
  selector:
    app: grpc
  type: ClusterIP
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051

---
#gRPC Client Service (for traffic split)
apiVersion: v1
kind: Service
metadata:
  name: input-grpc-service
  labels:
    app: input-service
  namespace: so1

spec:
  selector:
    app: grpc
  type: ClusterIP
  ports:
    - name: grpc
      port: 3000  #3000
      targetPort: 50061

Redis Backend

#Redis Pub Sub
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-pub-sub-deployment
  labels:
    app: redis-pub-sub
  namespace: so1
spec:
  selector:
    matchLabels:
      app: redis-pub-sub
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-pub-sub
    spec:
      containers:
        - name: redis-pub
          image: 'gcr.io/so1-proyecto-383722/redispub:latest'
          ports:
            - containerPort: 11000
        - name: redis-sub
          image: 'gcr.io/so1-proyecto-383722/redissub:latest'
          

---
#Redis Pub Service (for traffic split)
apiVersion: v1
kind: Service
metadata:
  name: input-redis-service
  labels:
    app: input-service
  namespace: so1

spec:
  selector:
    app: redis-pub-sub
  type: ClusterIP
  ports:
    - name: redis-pub-sub
      port: 3000 #3000
      targetPort: 11000

Could anyone help me follow the right direction or give me an example that can work for this? Appreciate any help.

1 Answer

One option is to expose the gRPC client and the Redis publisher behind a single, common Kubernetes Service and change the Service type to LoadBalancer. This creates a load-balancer IP and exposes both backends at a common endpoint over HTTP, each on a different port (this works for HTTP, not HTTPS). It is an easy hack, but it won't enable HTTPS, and the GKE health check against HTTP/2 will fail. The other approach is to use an Envoy ingress proxy, where you define the Redis publisher and the gRPC client as separate services reachable under different paths: ip/pub/ serves requests to Redis and ip/grpc to gRPC. Example files are here: https://github.com/GoogleCloudPlatform/grpc-gke-nlb-tutorial/tree/main/envoy/k8s
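For the first approach, here is a minimal sketch of a single LoadBalancer Service. It assumes you add a shared label (here `tier: input`, a made-up name) to both pod templates so one selector matches both backends; the ports and namespace are taken from the question's manifests:

```yaml
# Sketch only: assumes both pod templates have been given the extra
# label tier: input so a single Service can select both deployments.
# Each backend is then reachable on its own port of the same external IP.
apiVersion: v1
kind: Service
metadata:
  name: input-service
  namespace: so1
spec:
  selector:
    tier: input
  type: LoadBalancer
  ports:
    - name: grpc-client
      port: 3000
      targetPort: 50061   # gRPC client container
    - name: redis-pub
      port: 3001
      targetPort: 11000   # Redis publisher container
```

Note the limitation: because the Service selects both pods for both ports, requests can be routed to a pod that does not listen on the target port, which is part of why this is only a hack.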

clusters:
  - name: grpc
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: grpc
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: service-name.default.svc.cluster.local
                    port_value: grpc-port
    health_checks:
      - timeout: 1s
        interval: 10s
        unhealthy_threshold: 2
        healthy_threshold: 2
        grpc_health_check: {}
  - name: redis
    connect_timeout: 0.5s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http_protocol_options: {}
    load_assignment:
      cluster_name: redis
      endpoints:
        - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: redis.default.svc.cluster.local
                    port_value: redis-port
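The clusters above still need a matching route configuration in the Envoy listener's HTTP connection manager to do the path-based split described earlier. A sketch, with the `/grpc` and `/pub` prefixes assumed from the ip/grpc and ip/pub example:

```yaml
# Sketch: route_config inside the HTTP connection manager filter that
# forwards /grpc requests to the grpc cluster and /pub requests to the
# redis cluster defined above.
route_config:
  name: local_route
  virtual_hosts:
    - name: backend
      domains: ["*"]
      routes:
        - match: { prefix: "/grpc" }
          route: { cluster: grpc }
        - match: { prefix: "/pub" }
          route: { cluster: redis }
```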