
I have 4 nodes: one manager and three workers. On the three worker nodes I have configured lsyncd with rsync's -u flag (so it does not sync a file if the version in the remote folder is newer) and delete = false. The daemon syncs /home/user/mydocker/vaultwarden/data across all worker nodes bidirectionally. Syncing works great (I also tried GlusterFS).
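For reference, here is a minimal sketch of the kind of lsyncd config I mean, with one sync block per peer (the hostname worker2 and the log paths are placeholders; the source path is my real one):

-- lsyncd.conf.lua (sketch): push changes to one peer worker
settings {
    logfile    = "/var/log/lsyncd/lsyncd.log",      -- placeholder path
    statusFile = "/var/log/lsyncd/lsyncd.status",   -- placeholder path
}

sync {
    default.rsync,
    source = "/home/user/mydocker/vaultwarden/data",
    target = "worker2:/home/user/mydocker/vaultwarden/data",  -- placeholder host
    delete = false,          -- never delete files on the receiver
    rsync  = {
        archive = true,
        update  = true,      -- rsync -u: skip files that are newer on the receiver
    },
}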

My idea is to have only one replica on a worker node; in case of failure, Docker Swarm brings the service up on another node, and thanks to the synced data I should get the same copy of Vaultwarden, data included. And it works, with one exception: when, for example, I reboot the node where the service is running, Docker redeploys the container on another node and it picks up data from some kind of cache, which replaces everything in my synced folder. Since that data now has a newer version, lsyncd syncs it to the other nodes. So in this case I get an empty Vaultwarden, or, if there was data before, it reverts to a previous version. BUT if I manually bring Vaultwarden up with docker compose, then turn the node off (to simulate a failure) and bring the service up on another node with docker compose, everything works like a charm: the data persists and syncs without any problems.
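To be explicit, the manual failover that works is just this, run on the surviving node (which already has the synced data directory):

cd /home/user/mydocker/vaultwarden
docker compose up -d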

My YAML config for the deployment:

version: '3'
services:
  vaultwarden:
    image: vaultwarden/server:latest
    environment:
      - ADMIN_TOKEN=XXXXXXXXXXXXX
      - SIGNUPS_ALLOWED=true
    volumes:
      - /home/user/mydocker/vaultwarden/data:/data
    ports:
      - "8877:80"
    deploy:
      placement:
        constraints:
          - "node.role==worker"
      mode: replicated
      replicas: 1
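I deploy it to the swarm with something like the following (the stack name vaultwarden is just what I happen to use):

docker stack deploy -c docker-compose.yml vaultwarden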