
I am new to Kubernetes and am currently deploying an application on AWS EKS.

I want to configure a Service in my K8s cluster deployed on AWS EKS.

Here is a description of my issue. I did an experiment: I spun up 2 Pods running the same web application and exposed them with a Service of type LoadBalancer. I then got the external IP of that Service and found that the requests I sent were not distributed evenly across the Pods behind the Service. To be precise, I sent 3 requests and all three were processed by the same Pod.
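For reference, this is roughly how I set up the experiment (a sketch; the image name and ports are placeholders rather than my actual values):

kubectl create deployment my-web-app --image=my-web-app:latest
kubectl scale deployment my-web-app --replicas=2
# Expose the 2 Pods behind a Service of type LoadBalancer
kubectl expose deployment my-web-app --type=LoadBalancer --port=80 --target-port=8080
# Wait until the EXTERNAL-IP column is populated, then send requests to it
kubectl get service my-web-app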

Therefore, I want to configure the load-balancing algorithm to be round-robin or least-connection to resolve this issue.

I asked a similar question before and was advised to try the IPVS mode of kube-proxy, but I did not get detailed instructions on how to apply that configuration, and I have not found any useful material online. If IPVS mode is a feasible solution to this issue, please provide detailed instructions.

Thanks!

Xiao Ma
  • What are you using to perform the requests? – Kamol Hasan Nov 26 '20 at 08:56
  • Basically the application is about audio-to-text conversion. I bring up the Service and then use a Python script to perform a request by querying the URL provided by the Service's External IP. I am using WebSockets. – Xiao Ma Nov 26 '20 at 09:03

2 Answers


Your expectation of a load balancer is correct: it should distribute the incoming requests. But since you are using a WebSocket to perform the requests, they are all being handled by the same Pod.

A WebSocket uses a persistent connection between a client and a server, which means the connection is reused rather than a new connection being established for every request (which is costly). So you're not getting the load balancing you expected.

Use something that makes non-persistent connections to check the load-balancing behavior:

$ curl -H "Connection: close" http://address:port/
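For example, you could fire a handful of independent requests and then check which Pod handled each of them (a rough sketch; address:port and the app=my-web-app label are placeholders for your own Service endpoint and Pod selector):

# Send 10 independent requests; "Connection: close" prevents connection reuse
for i in $(seq 1 10); do
  curl -s -o /dev/null -H "Connection: close" http://address:port/
done
# Check each Pod's logs to see how the requests were spread
kubectl logs -l app=my-web-app --prefix --tail=20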
Kamol Hasan
  • Hi, thanks for the suggestion. I checked the behavior using a non-persistent connection, and it is confirmed that the traffic distribution is actually not `round-robin`. Is there any way I can configure the Service to do this? – Xiao Ma Nov 26 '20 at 14:22

Had the exact same issue, and while using the -H "Connection: close" header when testing externally load-balanced connections helps, I still wanted inter-service communication to benefit from IPVS with rr or sed.

To summarize, you will need to set up the following dependencies on the nodes. I would suggest adding these to your cloud config.

#!/bin/bash
# Install the IPVS userspace tool and load the required kernel modules
sudo yum install -y ipvsadm
sudo ipvsadm -l
sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack_ipv4

Once that is done, you will need to edit the kube-proxy-config ConfigMap in the kube-system namespace to have mode: ipvs and scheduler: <desired lb algo>.
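A minimal sketch of that edit (assuming the ConfigMap is called kube-proxy-config, as it is by default on EKS; in recent kube-proxy versions the scheduler field lives under the ipvs block):

kubectl -n kube-system edit configmap kube-proxy-config
# In the embedded KubeProxyConfiguration, set something like:
#   mode: "ipvs"
#   ipvs:
#     scheduler: "rr"    # or lc, sed, ... (see the list below)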

Lastly, you will need to update the container command for the kube-proxy DaemonSet with the appropriate flags: --proxy-mode=ipvs and --ipvs-scheduler=<desired lb algo>.
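Roughly, assuming the DaemonSet is named kube-proxy (the default on EKS):

kubectl -n kube-system edit daemonset kube-proxy
# In the container's command/args, add (or adjust) the flags:
#   --proxy-mode=ipvs
#   --ipvs-scheduler=rr    # or whichever algorithm you picked
# Then restart the kube-proxy Pods so they pick up the change
kubectl -n kube-system rollout restart daemonset kube-proxy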

Following are the available lb algos for IPVS:

rr: round-robin
lc: least connection
dh: destination hashing
sh: source hashing
sed: shortest expected delay
nq: never queue
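Once the kube-proxy Pods are running again, you can verify from a node (where ipvsadm was installed above) that IPVS virtual servers were created with the scheduler you chose:

sudo ipvsadm -Ln
# Each Service ClusterIP should show up as a virtual server with the chosen
# scheduler (e.g. rr) and the Pod IPs listed as real servers behind it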

Source: https://medium.com/@selfieblue/how-to-enable-ipvs-mode-on-aws-eks-7159ec676965

sesl