
I have 3 nodes in Docker Swarm mode:

1 manager and 2 worker nodes.

I have two questions here:

  1. What will be the effect if I scale a service to a number greater than the number of nodes we have? (Suppose scaling one service to 5 or 6.)

  2. What will happen if the service has a constraint that it must run on manager nodes only and we scale it to a number higher than the number of manager nodes? (For example, scaling to 3.)

I have a MySQL service running on the manager node, with a placement constraint defined in the docker-compose file so that it runs on the manager node only. When I try to scale it to 6 even though I have only 3 nodes, this is the effect I see: the service listing shows 4/6 running, and it even fluctuates, sometimes showing 6/6 and sometimes 5/6.

Here is the docker-compose.yml:

version: '3.4'

networks:
  smstake:   
    ipam:
      config:
        - subnet: 10.0.10.0/24


services:

    db:
        image: mysql:5.7
        networks:
          - smstake
        ports:
          - "3306"
        environment:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: mydb
          MYSQL_USER: myuser
          MYSQL_PASSWORD: password
        volumes:
          - mysql_data_2:/var/lib/mysql
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager
    app:
        image: SMSTAKE_VERSION
        ports:
          - 8000:80
        networks:
          - smstake
        depends_on:
          - db
        #  - migration
        deploy:
          mode: replicated
          replicas: 3

    migration:
        # build: .
        image: SMSTAKE_VERSION
        command: sh -xc "sleep 10 && pwd && php artisan migrate:fresh --seed 2>&1"
        networks:
          - smstake
        depends_on:
          - app
          - db
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager
volumes:
    mysql_data_2:
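
For reference, these are roughly the commands I use to deploy and scale (the stack name smstake here is only an example; the services then get names like smstake_db):

# deploy the stack from the compose file above
docker stack deploy -c docker-compose.yml smstake

# scale the db service beyond the number of nodes
docker service scale smstake_db=6

# check how many replicas are reported as running
docker service ls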
Tara Prasad Gurung
  • It is fluctuating most probably because some instances fail for different reasons (internal errors, corrupt data, no memory available etc). – Constantin Galbenu Apr 16 '18 at 07:45
  • 1
    A note regarding databases: do not scale to more than 1 instance. Databases are not scaled in this way, you need to use a replica-set (like MongoDB) or whatever replicating technology is the database using. Otherwise, your other instances will fail or have duplicate/inconsistent data. – Constantin Galbenu Apr 16 '18 at 07:47
  • Yes, for now I am doing this to test the scale actions. Thanks for the beautiful advice. – Tara Prasad Gurung Apr 16 '18 at 07:49
  • Test with another type of container, not a database. – Constantin Galbenu Apr 16 '18 at 07:49
  • If you detail your node setup and show your compose file and the commands used, we can try to replicate the issue. A constrained task will not run on an unmatched node... but in your case, unless you have a big instance size, those replicas will likely crash due to lack of resources, causing them to be re-created over and over. – Bret Fisher Apr 17 '18 at 02:19
  • @BretFisher Yes, I have included the docker-compose.yml file that I am using – Tara Prasad Gurung Apr 17 '18 at 06:44
  • If you're still having a running replica count that is inconsistent with your declarative request in the compose file, then you'll need to troubleshoot why they are exiting using logs and inspect command. more info: https://stackoverflow.com/a/49868818/749924 – Bret Fisher Apr 17 '18 at 16:14
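
(A rough sketch of those troubleshooting commands, assuming the stack is deployed as smstake so the db service is named smstake_db:)

# show the task history with full (untruncated) error messages
docker service ps --no-trunc smstake_db

# aggregated container logs for all tasks of the service
docker service logs smstake_db

# low-level details of a single task: state, error, node
docker inspect <task-id>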

1 Answer


What will be the effect if I scale a service to a number greater than the number of nodes we have? (Suppose scaling one service to 5 or 6.)

If you scale beyond the number of nodes, then some nodes will run more than one instance of the same service. Which node gets each instance is decided by Docker Swarm according to the constraints you have imposed: by default it spreads the tasks across the nodes that match the constraints.
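
For example (a rough sketch, using a throwaway service and image purely for illustration), you can watch this happen on a 3-node swarm:

# create a 1-replica service, then scale it past the node count
docker service create --name web --replicas 1 nginx:alpine
docker service scale web=6

# the task list will show some nodes running more than one 'web' task
docker service ps web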

If there are no nodes that match the constraints, then the service is not scaled (it has zero running instances). When more nodes are added or when the constraints are changed, Docker Swarm checks whether it can start new instances. Docker Swarm always tries to converge on the desired state: if the service has scale=3, Docker Swarm will start 3 instances on whatever nodes match the constraints; even if only one node matches, it will start all 3 instances on that node.
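
A sketch of how a constraint change triggers that re-check (reusing the hypothetical 'web' service from above; the constraint shown is just an example):

# restrict the service to manager nodes; swarm reschedules the
# existing tasks so the desired replica count runs on matching nodes
docker service update --constraint-add 'node.role==manager' web

# remove the constraint again; swarm is free to spread tasks
# back across all nodes while keeping the same replica count
docker service update --constraint-rm 'node.role==manager' web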

What will happen if the service has a constraint that it must run on manager nodes only and we scale it to a number higher than the number of manager nodes? (For example, scaling to 3.)

The same effect as above: the constraint is still respected, so all the replicas are scheduled onto the manager nodes, with more than one replica per manager node if necessary.
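
You can verify this on your stack (assuming it was deployed under the name smstake, so the service is called smstake_db): all running db tasks should be placed on the manager node.

# list the running tasks of the constrained db service and the
# node each one is scheduled on
docker service ps smstake_db --filter "desired-state=running"

# confirm which node is the manager
docker node ls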

Constantin Galbenu
  • But what if I have only a single manager node and the scale is 3 with the constraint? Does the constraint get neglected in such cases? – Tara Prasad Gurung Apr 16 '18 at 07:39
  • @TaraPrasadGurung The constraints never get "neglected". If there are no nodes that match the constraints then the service is not scaled (it has zero instances) – Constantin Galbenu Apr 16 '18 at 07:40
  • I have only 1 manager node and the mysql service is made to run on the manager node only. I tried to scale it to 6 nodes and I can see it's been replicated to 4 nodes, with fluctuating behavior, sometimes 5 and sometimes even 6. It looks like the constraint is neglected – Tara Prasad Gurung Apr 16 '18 at 07:46
  • @TaraPrasadGurung A note regarding databases: do not scale to more than 1 instance. Databases are not scaled in this way; you need to use a replica set (like MongoDB) or whatever replication technology the database uses. Otherwise, your other instances will fail or have duplicate/inconsistent data – Constantin Galbenu Apr 16 '18 at 07:48
  • Ah, ok Tara, I think you're misusing the word "node" that got me tripped up on what was happening, as I thought you meant the replicas were launching on other nodes besides manager. A "node" is a host OS running docker in Swarm. A "replica" or "task" is the container running in a Service. If you scale your `app` service to 4 replicas but have a constraint of `node.role == manager` then you'll see 4 `app` tasks (containers) on the manager node. – Bret Fisher Apr 17 '18 at 16:12
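
(A minimal sketch of the scenario from the comment above, assuming the stack was deployed as smstake so the app service is named smstake_app; the replica count and constraint are illustrative:)

# pin the app service to the manager and ask for 4 replicas
docker service update --constraint-add 'node.role==manager' --replicas 4 smstake_app

# all 4 app tasks (containers) will be listed on the manager node
docker service ps smstake_app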