
I'm trying to learn Kubernetes. I have a project which uses WebSockets, and I'm trying to apply sticky sessions for that purpose while working with multiple pods.

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api-deployment
spec:
  replicas: 3
  selector: 
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
        - name: lct-api
          image: localhost:7000/lct:latest
          imagePullPolicy: Always
          resources:
            requests:
                memory: "200Mi"
                cpu: "200m"
            limits:
                memory: "300Mi"
                cpu: "350m"

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
      service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 6008
      targetPort: 80

And I have no idea why it's not working. On the client side I'm using SignalR with a React app. The problem occurs when the negotiate request does not land in the same pod, so the WS connection cannot be established.

My question is: Is there any way to configure a k8s LoadBalancer to work with sticky sessions?

EDIT

After Kiran Kotturi's comment, my YAML files look like this:

apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
      service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
      service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 6008
      targetPort: 80


apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api
spec:
  replicas: 5
  selector: 
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
        - name: lct
          image: localhost:7000/lct:latest
          imagePullPolicy: Always
          resources:
            requests:
                memory: "200Mi"
                cpu: "200m"
            limits:
                memory: "300Mi"
                cpu: "350m"
      affinity:
        podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels:
                        app: lct
                  topologyKey: kubernetes.io/hostname

And it still doesn't work. The WS connection is established from time to time, but that's just luck because the requests land in the same pod by accident.

  • Can you check the GitHub link below and also a reference link related to sticky sessions, which will be useful. 1) https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/http-with-sticky-sessions.yml 2) https://docs.digitalocean.com/glossary/sticky-session/ – Kiran Kotturi Jul 26 '23 at 09:37
  • Should I add that nginx layer? So there would be LoadBalancer -> nginx -> my app? The reason I'm asking is that when I configured an nginx deployment it didn't work. Should I configure it in my server app? – kenik Jul 27 '23 at 06:37
  • As per the above provided documents, sticky sessions will route consistently to the same nodes, not pods, so you should avoid having more than one pod per node serving requests. Nginx is taken as an example in the GitHub code. You can use your own app as per your requirement. If possible, update the port from 6008 to 80 in the service.yaml file and also add containerPort: 80 and protocol: TCP in deployment.yaml. – Kiran Kotturi Jul 30 '23 at 05:04
  • Based upon the above comments, I have posted this as an answer for greater visibility for the community. – Kiran Kotturi Jul 30 '23 at 16:20

1 Answer


As per the documentation, sticky sessions will route consistently to the same nodes, not pods, so you should avoid having more than one pod per node serving requests.
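One way to enforce at most one pod per node is pod anti-affinity on the deployment. Note that the label selector must match the pod template's labels (`app: lct-api` in the question's deployment, not `app: lct`). A minimal sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: lct-api   # must match the pod template labels above
              topologyKey: kubernetes.io/hostname   # one pod per node
      containers:
        - name: lct-api
          image: localhost:7000/lct:latest
```

Keep in mind that with `requiredDuringSchedulingIgnoredDuringExecution`, any replicas beyond the number of available nodes will stay Pending; `preferredDuringSchedulingIgnoredDuringExecution` is the softer alternative.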

If user sessions depend on the client always connecting to the same backend, you can send a cookie to the client to enable sticky sessions as mentioned below in the annotations field.

service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"

Sticky sessions send subsequent requests from the same client to the same Droplet by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client’s browser. This option is useful for application sessions that rely on connecting to the same Droplet for each request.

Sticky sessions do not work with SSL passthrough (port 443 to 443). However, they do work with SSL termination (port 443 to 80) and HTTP requests (port 80 to 80).

If possible, update the port number to 80 instead of 6008 in the service.yaml file, and also add containerPort: 80 and protocol: TCP in the deployment.yaml file.
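Putting those suggestions together, the service and a minimal deployment (names and annotations taken from the question) would look roughly like this:

```yaml
# service.yaml -- the LB listens on 80 and forwards to the pods' port 80,
# the combination for which HTTP sticky sessions are supported
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 80        # was 6008
      targetPort: 80
---
# deployment.yaml -- declares the container port the service targets
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
        - name: lct-api
          image: localhost:7000/lct:latest
          ports:
            - containerPort: 80
              protocol: TCP
```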

You can use the GitHub link for reference and make the necessary changes to the service and deployment YAML files accordingly.