
In reading the documentation for Docker Swarm 1.12, there is a section describing how to configure HAProxy to load balance traffic to swarm hosts.

https://docs.docker.com/engine/swarm/ingress/#/configure-an-external-load-balancer

If I understand Docker Swarm 1.12+ correctly, there shouldn't be a need to set up a load balancer in this way, because Swarm has an internal load balancer and DNS.

Wouldn't a proper approach be to stand up a reverse proxy to the service name (DNS alias) and let the Swarm load balancer do the work?

For example, in nginx you could do:

location /somepath/ {
    proxy_read_timeout 900;
    proxy_pass http://service-name/;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

My assumption is that services are deployed to a private overlay network, and any service that needs to be exposed externally is attached both to that private network and to a proxy network where the nginx or HAProxy service is also deployed.

docker service create \
 --name recurrence-service \
 --replicas 3 \
 --network my-service \
 --network proxy  \
 mycompany/my-web-server
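For completeness, the two overlay networks referenced above would need to exist first, and the proxy itself would be a service attached to the shared network. A minimal sketch (the network names simply match the example; `proxy-nginx` and the published port 80 are assumptions):

```shell
# Create the overlay networks (run on a swarm manager).
docker network create --driver overlay my-service
docker network create --driver overlay proxy

# Run nginx as a service on the shared proxy network, publishing port 80.
docker service create \
  --name proxy-nginx \
  --network proxy \
  --publish 80:80 \
  nginx
```

Because `proxy-nginx` shares the `proxy` network with `recurrence-service`, nginx can resolve the service name via Swarm's internal DNS and `proxy_pass` to it directly.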
Brett Mathe

2 Answers


I think your approach sounds good, and I don't think there is any requirement for an external LB. We put an ELB in front of ours, but that's more to keep it consistent with our other services and to have a central place for SSL termination.

tweeks200

Swarm Mode includes internal DNS and a routing mesh to provide service discovery and forwarding of requests between nodes to one that is running the service. This means all nodes will listen on a published port (when you use the default "ingress" mode), and that request will be forwarded internally between swarm nodes.
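As an illustration of that ingress behavior, publishing a port in the default mode means every node accepts connections on it, whether or not a task runs there. A sketch using the question's image (the published port 8080 is an assumption):

```shell
# Publish port 8080 on the ingress routing mesh: every swarm node listens
# on 8080 and forwards the request to one of the service's tasks.
docker service create \
  --name my-web-server \
  --replicas 3 \
  --publish 8080:80 \
  mycompany/my-web-server
```

A request to port 8080 on any node, including one with no replica, is routed internally to a running task.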

However, when accessing the service externally, a load balancer is still recommended to send requests to one of the many nodes. This is for HA since any one node can be down while the others take over. If you send all requests to just a single node and that node fails, you would be unable to access your service even though other nodes are still available and serving requests.

As a fallback, you can use round-robin DNS configured to resolve to multiple Docker hosts. However, this is less ideal for a few reasons:

  • An outage still needs to time out before most applications would try another node
  • A partial outage where the node is receiving connections but failing all requests will result in failures (a load balancer can be configured to check the health of the target before sending traffic)
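A minimal HAProxy sketch of that health-checked setup, sitting in front of the swarm nodes (the node IPs and published port 8080 are hypothetical):

```
frontend http_in
    bind *:80
    default_backend swarm_nodes

backend swarm_nodes
    balance roundrobin
    # Health checks pull a failed node out of rotation instead of
    # letting clients wait on connection timeouts.
    option httpchk GET /
    server node1 10.0.0.11:8080 check
    server node2 10.0.0.12:8080 check
    server node3 10.0.0.13:8080 check
```

Any node can receive the traffic since the ingress mesh forwards it to a running task; the checks only decide which nodes are reachable at all.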
BMitch