1

I have a problem with my load balancer setup: it always redirects most traffic (around 99%) to one pod. The infrastructure is as shown in the diagram. My objective is to have sticky sessions enabled, whether on nginx or on the Google load balancer, while traffic is still distributed equally to the available pods.

Briefly, I have 2 RCs and 2 Services in my cluster: one nginx pod served behind a Google load balancer (nginx-lb), and another load balancer (app-lb) to balance traffic to 2 app pods. Here is my thinking behind the config:

  • nginx-lb: I set nginx-lb to sessionAffinity: None and externalTrafficPolicy: Local because I don't think I need sticky sessions at this layer, but I do need to pass through the user's IP. At this point all incoming traffic is treated the same, and externalTrafficPolicy: Local preserves the user's source IP.

  • nginx: nginx itself has ngx_http_realip_module enabled to keep the user's IP forwarded, but I did not use ip_hash here because I still don't think sticky sessions are needed at this layer. Just like nginx-lb, it passes all incoming traffic through while preserving the user's IP. nginx here mainly acts as a proxy and SSL terminator.

  • app-lb: Then comes app-lb, where I enabled sessionAffinity: ClientIP for sticky sessions and externalTrafficPolicy: Cluster for load balancing. I believe this is where the actual balancing by ClientIP happens, as this is the only Service that knows about the 2 pods behind it.
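As a concrete sketch, the two Services above look roughly like this (names, selectors, and ports here are placeholders, not my actual manifests):

```yaml
# Sketch of the two Services; labels and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 443
      targetPort: 443
  sessionAffinity: None          # no stickiness at this layer
  externalTrafficPolicy: Local   # preserve the client source IP
---
apiVersion: v1
kind: Service
metadata:
  name: app-lb
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
  sessionAffinity: ClientIP      # sticky by source IP
  externalTrafficPolicy: Cluster # only takes effect on NodePort/LoadBalancer Services
```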

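For reference, the realip part of the nginx config is along these lines (the trusted CIDR and the upstream address are placeholders):

```nginx
server {
    listen 443 ssl;

    # Trust X-Forwarded-For set by the Google load balancer;
    # the CIDR below is a placeholder for the LB's source range.
    set_real_ip_from 130.211.0.0/22;
    real_ip_header X-Forwarded-For;

    location / {
        proxy_pass http://app-lb;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```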
I tested this configuration with roughly 50 users over a day, but traffic still goes to one pod, while the other pod sits idle with low CPU and memory usage compared to the first.

So my questions: with this setup, am I on the right track for what I want to achieve? Is there a configuration I am missing? Any input will be highly appreciated.

PS. I rewrote the whole question to add facts based on what I have since understood; it is still the original question, just worded differently.

spondbob
  • Can you provide a better explanation of what you want to achieve? Do you want sticky sessions on the nginx load balancer or not? I don't understand what you are asking. – GalloCedrone Mar 14 '18 at 16:15
  • Sorry if my problem wasn't stated clearly enough, but as I said at the top, the issue is that the load balancer only sends traffic to one pod when sticky sessions are enabled. I tested with 2 different clients that should be handled by different pods. – spondbob Mar 14 '18 at 22:05
  • @GalloCedrone I have updated my question with a diagram; hope that makes it clear what I am trying to do. – spondbob Mar 16 '18 at 06:22

2 Answers

1

This happens because you are using sessionAffinity: ClientIP. The affinity is set on the Service and is IP-based, so the Service sees the IP of your load balancer rather than the real client's, and every request hashes to the same pod. Try sessionAffinity: None, and if you want sticky sessions, use the nginx ingress controller instead.
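For example, cookie-based stickiness with the nginx ingress controller is configured through annotations, roughly like this (the Ingress name, cookie name, and backend are placeholders):

```yaml
# Sketch of an Ingress with cookie affinity on the nginx ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Cookie-based affinity: stickiness no longer depends on the client IP.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app-lb
              servicePort: 80
```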

c4f4t0r
0

Have you tried testing your app with a larger number of clients than just your mobile and your laptop?
Maybe you could test it from several Google Compute Engine instances.

Since you are implementing both sticky sessions and load balancing with ip_hash, with only two pods there is a 50% chance that two devices get served by the very same pod, and even if you reload the page it will always be served by the same pod until your IP changes.

With ip-hash, the client’s IP address is used as a hashing key to determine what server in a server group should be selected for the client’s requests. This method ensures that the requests from the same client will always be directed to the same server except when this server is unavailable. (From http://nginx.org/en/docs/http/load_balancing.html)
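In nginx terms, ip_hash is enabled inside the upstream block, e.g. (the backend addresses are placeholders):

```nginx
# Sketch of an ip_hash upstream; backend addresses are placeholders.
upstream app_backend {
    ip_hash;                     # pin each client IP to one backend
    server app-pod-1:8080;
    server app-pod-2:8080;
}
```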

Django
  • To be honest I am not sure whether I should enable sticky sessions on both ends, and it is also difficult to test with many clients at once in a way that shows traffic is distributed while sticky sessions still work. – spondbob Mar 16 '18 at 00:49
  • I have updated my question with a diagram; hope that makes it clear what I am trying to do. – spondbob Mar 16 '18 at 06:23