I have a Docker Swarm cluster with 2 nodes on AWS. I stopped both instances and then started the swarm manager first, followed by the worker. Before stopping the instances, I had a service running with 4 replicas distributed between the manager and the worker.
When I started the swarm manager node first, all replica containers started on the manager itself and are not moving to the worker at all.
Please tell me how to load balance them.
Isn't the swarm manager responsible for doing this once the worker is started?

- Can you restart the service? They probably all got scheduled on the manager before the other worker was up – avigil Mar 27 '18 at 16:50
- Do I need to restart the Docker service or the Docker stack? – Bukkasamudram Mar 27 '18 at 16:52
- I tried restarting the Docker service and then tried to re-deploy the stack. No luck. – Bukkasamudram Mar 27 '18 at 17:19
- I hoped that manually stopping the containers from the swarm manager would make the new containers move to the worker to balance the load, but is there any other way to balance automatically via the swarm manager itself? – Bukkasamudram Mar 27 '18 at 17:24
- I don't think swarm will move containers that already exist and are actively running, because it has no way of knowing whether doing so would cause disruption to your application – avigil Mar 27 '18 at 20:08
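As a general diagnostic step (the service name below is a placeholder), you can confirm where the replicas were scheduled before and after any restart:
docker service ls                     # list services; for a stack the name is usually <stack>_<service>
docker service ps <stack>_<service>   # the NODE column shows which node each replica runs on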
4 Answers
Swarm currently (18.03) does not move or replace containers when new nodes are started, if services are in the default "replicated mode". This is by design. If I were to add a new node, I don't necessarily want a bunch of other containers stopped, and new ones created on my new node. Swarm only stops containers to "move" replicas when it has to (in replicated mode).
docker service update --force <servicename>
will rebalance a service across all nodes that match its requirements and constraints.
Further advice: like other container orchestrators, you need to leave spare capacity on your nodes to handle the workloads of any service replicas that move during outages. Your spare capacity should match the level of redundancy you plan to support. If you want to tolerate two nodes failing at once, for instance, you need enough free resources on the remaining nodes for those workloads to shift onto them.
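As a rough sketch (the service name myservice is a placeholder for your own), a forced rebalance plus a check of where the tasks ended up might look like this; the --update-parallelism and --update-delay flags are optional and only pace how quickly tasks are replaced:
docker service update --force --update-parallelism 1 --update-delay 10s myservice
docker service ps myservice   # the NODE column should now show tasks spread across manager and worker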

- Would you know if this is a rolling update? Will this cause downtime? – Alec Gerona Apr 07 '21 at 03:44
- To answer my own question: https://docs.docker.com/engine/reference/commandline/service_update/#perform-a-rolling-restart-with-no-parameter-changes – Alec Gerona Jun 11 '21 at 04:47
Here's a bash script I use to rebalance:
#!/usr/bin/env bash
set -e

# Services whose names match this pattern are skipped (NAME also filters out the header row).
EXCLUDE_LIST="(_db|portainer|broker|traefik|prune|logspout|NAME)"

# Force an update on every remaining service; --force reschedules its tasks
# even though nothing in the spec changed, which spreads them across nodes.
for service in $(docker service ls | grep -Ev "$EXCLUDE_LIST" | awk '{print $2}'); do
    docker service update --force "$service"
done

Swarm doesn't auto-balance once containers are created. You can scale the service down and back up once all your workers are up, and it will distribute containers according to your configured requirements, roles, constraints, etc.
see: https://github.com/moby/moby/issues/24103
There are problems with new nodes getting "mugged" as they are added. We also avoid pre-emption of healthy tasks. Rebalancing is done over time, rather than killing working processes. Pre-emption is being considered for the future.
As a workaround, scaling a service up and down should rebalance the tasks. You can also trigger a rolling update, as that will reschedule new tasks.
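For instance (the service name and replica counts here are only illustrative), the scale up/down workaround looks like:
docker service scale mystack_app=8   # temporarily raise the replica count so new tasks can land on the fresh node
docker service scale mystack_app=4   # drop back to the original count once the extra tasks are running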

In docker-compose.yml, you can define:
version: "3"
services:
  app:
    image: repository/user/app:latest
    networks:
      - net
    ports:
      - 80
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
networks:
  net:   # the network referenced by the service must also be defined at the top level
Remark: the constraint is node.role == worker
Using the --replicas flag implies we don't care which node the tasks are placed on; if we want one task per node, we can use --mode=global instead.
In Docker 1.13 and higher, you can use the --force flag with the docker service update command to force the service to redistribute its tasks across the available worker nodes.
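Putting this together (the stack name mystack is just a placeholder), deploying the file above and then forcing a redistribution would look like:
docker stack deploy -c docker-compose.yml mystack
docker service update --force mystack_app   # reschedules the app's tasks across the nodes that match the constraint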

- But I want to load balance containers including the manager. I updated my docker-compose.yml with **constraints: [node.role == worker]** and all containers moved to the worker. – Bukkasamudram Mar 27 '18 at 17:33