
While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the Azure load balancer part. Locally I have an HAProxy container instance running on the Swarm master and multiple web containers. The web containers only expose their ports; they are not mapped to the machines they run on. The HAProxy container has its port mapped on the master and talks to my web containers internally for load balancing. This gives me the leverage to run any number of containers on a limited number of workers in Docker Swarm. In Azure Container Service, it appears that the Azure load balancer will only talk to mapped ports, which means I can either run just one container per agent or keep an internal load balancer among my containers, implying that users go through two load balancers before hitting my application.
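For reference, the local setup described above can be sketched roughly as a Compose stack file like the one below (the image names, service names, and replica count are illustrative assumptions, not taken from the original post). The key point is that only the proxy publishes a host port, while the web containers stay reachable solely on the overlay network:

```
version: "3"

services:
  proxy:
    image: haproxy:2.4
    ports:
      - "80:80"            # only the proxy maps a port on the host
    networks:
      - webnet

  web:
    image: mycompany/webapp:latest   # hypothetical application image
    deploy:
      replicas: 6          # many containers on a limited number of workers
    networks:
      - webnet             # reachable by the proxy internally, no host mapping

networks:
  webnet:
    driver: overlay
```

Sticky sessions would then be handled inside HAProxy itself, e.g. with a `cookie` directive on the backend in `haproxy.cfg`, so session affinity is independent of whatever sits in front of the proxy.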

Not an ideal scenario when my application uses sticky sessions. So apparently Microsoft's statement that "everything works the same in Azure containers" goes for a toss? What solutions are available, or am I doing something wrong here?

Regards, Harneet

  • If you have two agents, you will have two VMs in the agent pool, not two containers in the pool. So I think you can run many containers; just choose the VM sizes for the agents appropriately for the capacity you need, and then let the ACS-provided LB handle the balancing without the need for HAProxy. – hB0 Feb 19 '17 at 01:18

1 Answer


The solution in ACS is almost identical: use HAProxy and have the Azure LB talk to it. The only difference is that you will not be running the proxy on the master; you will have Swarm deploy it to an agent for you.

You shouldn't really be running workloads on your masters. What would you do if, for example, you had a DDoS attack and couldn't reach your masters? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
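A sketch of that approach, assuming Swarm-mode stack syntax (ACS deployments of this era may run the older standalone Swarm, where constraints are instead expressed as `constraint:` filters; treat the exact keys below as assumptions): a placement constraint keeps the proxy off the masters, and a restart policy lets Swarm redeploy it on failure.

```
version: "3"

services:
  proxy:
    image: haproxy:2.4
    ports:
      - "80:80"                   # the Azure LB forwards to this mapped port
    deploy:
      placement:
        constraints:
          - node.role == worker   # never schedule the proxy on a master
      restart_policy:
        condition: on-failure     # Swarm brings the proxy back if it dies
```

The Azure LB then only needs a rule pointing at the mapped port on the agent pool, exactly as it would for any other published service.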

You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy, just as you currently do.

rgardler
  • That does not sound too convincing: each request passes through two load balancers before hitting the actual web application. Also, what happens with multiple applications in the cluster? Is it possible to add multiple load balancers in Azure to listen to different applications in the same Swarm? – Harneet Singh Jan 13 '17 at 05:35
  • Well, the traffic has to get into the Azure network somehow. We find most customers trade the extra network hop for the ability to migrate container workloads to any infrastructure. However, customers who want to minimize network hops can use Azure Traffic Manager, which will load-balance across individual containers in the cluster. – rgardler Jan 15 '17 at 21:36
  • Sorry, I should have also mentioned Application Gateway as an option, since it's cheaper ;-) – rgardler Jan 16 '17 at 06:04