While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the Azure load balancer part. Locally I run one HAProxy container on the Swarm master and multiple web containers. The web containers only expose their ports; those ports are not mapped to the machines they run on. The HAProxy container has its port mapped on the master and talks to the web containers internally for load balancing (a rough sketch of this setup is below). This lets me run any number of web containers on a limited number of workers in Docker Swarm.

In Azure Container Service, it appears the Azure load balancer only talks to ports that are mapped. That means I can either run only one container per agent, or keep an internal load balancer in front of my containers, in which case users go through two load balancers before reaching my application.
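To make the local setup concrete, this is roughly how it looks (a sketch only; image names, container names and ports are illustrative, not my exact commands):

    # HAProxy on the Swarm master: the only container with a published (mapped) port
    docker run -d --name haproxy -p 80:80 \
        -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.6

    # Web containers: ports exposed on the container network only, nothing mapped to the hosts
    docker run -d --name web1 --expose 8080 mywebapp
    docker run -d --name web2 --expose 8080 mywebapp

HAProxy then reaches web1:8080, web2:8080, and so on over the container network, so the number of web containers is independent of the number of hosts.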
That is not an ideal scenario, since my application uses sticky sessions. So apparently Microsoft's statement that "everything works the same in Azure containers" goes for a toss? What solutions are available, or am I doing something wrong here?
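For context, the stickiness I rely on is plain cookie-based persistence in HAProxy, roughly like this (again a sketch; the backend name, server names and addresses are placeholders):

    backend web
        balance roundrobin
        cookie SRV insert indirect nocache
        server web1 web1:8080 check cookie web1
        server web2 web2:8080 check cookie web2

This is why an extra load-balancing hop in front of HAProxy worries me.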
Regards, Harneet