I'm trying to learn Kubernetes. I have a project that uses WebSockets, and I'm trying to apply sticky sessions for that purpose while running multiple pods.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
        - name: lct-api
          image: localhost:7000/lct:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "200Mi"
              cpu: "200m"
            limits:
              memory: "300Mi"
              cpu: "350m"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 6008
      targetPort: 80
And I have no idea why it's not working. On the client side I'm using SignalR with a React app. The problem occurs when the negotiate request does not land on the same pod, so the WebSocket connection cannot be established.
My question is: is there any way to configure the Kubernetes Load Balancer to work with sticky sessions?
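For reference, the only sticky-session mechanism built into Kubernetes itself is the Service-level ClientIP affinity I set above; it can also carry an explicit timeout via sessionAffinityConfig. A minimal sketch of just those fields (the 10800-second value is the Kubernetes default, not something from my setup):

```yaml
# Sketch: built-in kube-proxy ClientIP affinity with an explicit timeout.
# This is separate from the DigitalOcean cookie annotations above.
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800  # default: requests stick to one pod for up to 3 hours
  ports:
    - protocol: TCP
      port: 6008
      targetPort: 80
```

As far as I understand, this affinity is applied by kube-proxy on the node, after the cloud load balancer has already picked a node, so whether it helps depends on whether the load balancer preserves the client IP.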
EDIT
After Kiran Kotturi's comment, my YAML files look like this:
apiVersion: v1
kind: Service
metadata:
  name: lct-api-service
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  selector:
    app: lct-api
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      port: 6008
      targetPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lct-api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: lct-api
  template:
    metadata:
      labels:
        app: lct-api
    spec:
      containers:
        - name: lct
          image: localhost:7000/lct:latest
          imagePullPolicy: Always
          resources:
            requests:
              memory: "200Mi"
              cpu: "200m"
            limits:
              memory: "300Mi"
              cpu: "350m"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: lct-api  # must match the pod template labels; "app: lct" selects nothing
              topologyKey: kubernetes.io/hostname
And it still doesn't work. The WS connection is established from time to time, but that's just luck because the requests happen to land on the same pod by accident.