I'm starting to look into K8s, and in doing so I've managed to confuse myself.
It seems to be a fairly common architecture, when using a managed Kubernetes provider such as AWS or DigitalOcean, to have the following:
Cloud External Load Balancer -> [CLUSTER ENTRYPOINT] Nginx Ingress Controller -> Service A
                                                                              -> Service B
                                                                              -> Service C
The point being that the cloud load balancer routes traffic to the nginx ingress controller, which terminates SSL and forwards to the various services depending on the path of the request.
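For concreteness, I'm picturing something like the Ingress below (a rough sketch on my part; the host, service names, and TLS secret are placeholders I've made up), where the nginx ingress controller terminates TLS and routes by path:

```yaml
# Sketch of the path-based routing described above. All names here
# (host, secret, services) are illustrative, not from a real cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: example-tls   # TLS is terminated at the ingress controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /a
            pathType: Prefix
            backend:
              service:
                name: service-a
                port:
                  number: 80
          - path: /b
            pathType: Prefix
            backend:
              service:
                name: service-b
                port:
                  number: 80
          - path: /c
            pathType: Prefix
            backend:
              service:
                name: service-c
                port:
                  number: 80
```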
What I don't understand is whether, at this point, we haven't reduced the efficacy of the external load balancer. Sure, it will distribute traffic across the ingress controller replicas, but that is all it can do, because that's all it knows about. It won't be able to do any load balancing for a particular service across that service's pods.
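If I understand correctly, the cloud load balancer only exists because of a Service of type LoadBalancer that selects the ingress controller pods and nothing else, something along these lines (a sketch; the ingress-nginx Helm chart creates the real equivalent, and my labels/ports are illustrative):

```yaml
# Sketch of the Service that puts the cloud load balancer in front of the
# ingress controller replicas. The external LB only ever sees these pods,
# never the application pods behind Service A/B/C.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer               # provisions the external cloud load balancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # matches only the controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```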
- Is this both correct and a valid production setup?
- Is it generally accepted that the load balancer will only be distributing traffic to the reverse proxy? I can't get over the idea that this is sort of a waste of a load balancer, given that nginx can act as a load balancer itself, but I have no idea whether that's actually correct. I may have completely misunderstood the concept of an ingress controller.
- If the above is correct, is that where a service mesh like Linkerd comes in? Presumably with something like Linkerd, traffic from the nginx ingress to a particular service would effectively be load balanced across that service's pods by Linkerd (see the sketch after this list).
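If that last point is right, I imagine it would look something like this (a sketch assuming Linkerd is already installed in the cluster; the image and names are placeholders): annotating the Deployment's pod template gets the Linkerd sidecar proxy injected, and that proxy then balances requests across the destination service's pods.

```yaml
# Sketch: with Linkerd installed, this annotation on the pod template
# triggers sidecar injection, and the injected proxy does per-request
# load balancing across the pods backing the services it calls.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
      annotations:
        linkerd.io/inject: enabled   # Linkerd sidecar injection
    spec:
      containers:
        - name: service-a
          image: example/service-a:latest   # placeholder image
          ports:
            - containerPort: 80
```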