
I have a Docker Swarm cluster containing 4 nodes: 1 manager + 3 workers.

When I restart one of the worker servers, its status becomes "Down" when running:

docker node ls

Also, the services already deployed on this node shut down (their containers exit), and I cannot restart them. I have tried to:

  • recreate the cluster after each reboot (ugly, and doesn't resolve the problem)
  • delete the large file /var/lib/docker/swarm/worker/tasks.db (doesn't improve the situation)
  • simply wait (but the node is still down after hours)

I'm using Docker 18.09 CE.

Any suggestions?

SmartTom
firasKoubaa

1 Answer


There are a few things you can try.

  1. Update the node's availability (run this command from the manager node):

    docker node update <node-name> --availability active

  2. If the issue still persists, make the worker leave the swarm and then add it again, using the worker join token generated previously.

  3. If that still does not solve it, you may have to remove all nodes from the cluster and reinitialize it on the manager:

    docker swarm init --force-new-cluster    # use with care

    This recovers the Docker swarm; the workers then rejoin it.
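The steps above can be sketched as one command sequence (a hypothetical example: `<node-name>`, `<worker-token>`, and `<MANAGER_IP>` are placeholders you must replace with your own values):

```shell
# Step 1 — on the manager: find the node marked "Down" and reactivate it
# in case it was drained or paused.
docker node ls
docker node update <node-name> --availability active

# Step 2 — on the stuck worker: leave the swarm, then rejoin.
docker swarm leave --force
# On the manager, print the current worker join command (includes the token):
docker swarm join-token worker
# Back on the worker, run the command printed above, e.g.:
docker swarm join --token <worker-token> <MANAGER_IP>:2377

# Step 3 — last resort, on the manager: force a new single-node cluster,
# then have every worker rejoin with a fresh token. Use with care.
docker swarm init --force-new-cluster
docker node ls   # verify that the nodes return to "Ready"
```

These commands must be run on a live swarm (some on the manager, some on the worker), so they are shown as an operational sketch rather than a runnable script.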

dotnetstep