I have just started using Docker. I was able to create a docker-compose file which deploys the three components of my application, with the necessary number of replicas, on one host. I now want to replicate the same thing across multiple hosts. I have three processes: A (7 copies), B (1 copy), and C (1 copy). I followed the "creating a swarm" tutorial on the Docker website and managed to create a manager and attach two workers to it.

So now when I run my command

 docker stack deploy --compose-file docker-compose.yml perf

It does spawn the required number of containers, but all of them on the manager itself. Ideally I would want it to spawn C and B on the manager and all the copies of A distributed between worker 1 and worker 2.
Here is my docker-compose file:

version: '3'

services:

  A:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
    networks:
      - perfhost

  B:
    container_name: s1_perfSqlDB
    restart: always
    tty: true
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:  
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - perfhost

  C:
    container_name: s1_scheduler
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    networks:
      - perfhost
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "7000:7000"


networks:
  perfhost:

volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:

B) And if I do get this working, how do I use volumes to transfer data between the container for service A and the container for service B, given that they are on different host machines?


3 Answers


A few tips and answers:

  • For service names I don't recommend capital letters. Use valid DNS hostnames (lowercase, no special characters except -).
  • container_name isn't supported in swarm and shouldn't be needed. It looks like C: should be something like scheduler, etc. Make the service names simple so they are easy to use/remember on their virtual network (a cleaned-up stack file is sketched after this list).
  • All services in a single compose file are always on the same Docker network in swarm (and in docker-compose for local development), so there is no need for the network assignment or listing.
  • restart: always isn't needed in swarm. That setting isn't used there, and restarting is the default behavior anyway. If you're using it for docker-compose, it's rarely needed, as you usually don't want apps in a respawn loop during errors, which tends to peg the CPU. I recommend leaving it off.
  • Volumes use a "volume driver". The default is local, just like normal docker commands. If you have shared storage, you can use a volume driver plugin from store.docker.com to ensure the shared storage is connected to the correct node.
  • If you're still having issues with worker/manager task assignment, post the output of docker node ls, and maybe docker service ls and docker node ps <managername>, so we can help troubleshoot.
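Putting those tips together, a cleaned-up stack file might look like the sketch below. The lowercase names jmeter, db, and scheduler are illustrative stand-ins for A, B, and C; the images, volumes, and placement constraints are carried over from the question:

version: '3'

services:
  jmeter:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]

  db:
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]

  scheduler:
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    ports:
      - "7000:7000"
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:

Note that container_name, restart: always, and the explicit network are gone; all three services can still reach each other by service name (jmeter, db, scheduler) on the stack's default overlay network.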
Bret Fisher
  • Thanks a lot @Bret, this clears up a lot of things. I did get the swarm working; there was a problem with the container images, and it's working now. I read one of your other answers where you recommend REX-Ray; I'll try using that for the persistence. Just wondering though, does REX-Ray allow multiple clients to read and write on a common storage volume? Or can I achieve that through the volume driver plugin? – Tanmay Bhattacharya May 03 '18 at 05:17
  • Basically I need to transfer data from service C to the 7 copies of service A. Even if I use individual volumes, at any point of time two services A and C would be connected to the volume. I don't think REX-Ray allows this; is there some other plugin I can use for this? Flocker maybe? – Tanmay Bhattacharya May 03 '18 at 06:03
  • In the case of most volume drivers, they just enable you to use whatever underlying *shared storage* you want. The features of *that* storage will dictate whether you can have multiple read/write connections to that storage volume. For example: AWS EBS and DigitalOcean Block Storage would only let one container connect at a time (a one-to-one relationship). AWS EFS would allow many-to-one due to its design as NFS network storage. REX-Ray just automates the connection process. – Bret Fisher May 03 '18 at 15:59
  • Flocker is dead. If you need full file replication, consider portworx.com, but I think it still won't handle open files (like databases) correctly. For that you want app-level replication (database clusters, etc.) – Bret Fisher May 03 '18 at 16:00
  • My primary requirement is to have multiple applications share some files. Before trying to dockerize the whole set-up, I was using a simple shared folder in Windows for this purpose. From what I understand, an NFS share should do the work for me? – Tanmay Bhattacharya May 04 '18 at 14:31
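If NFS does fit the requirement, note that Docker's built-in local volume driver can mount an NFS export directly, with no extra plugin. A minimal sketch, assuming a hypothetical NFS server at 10.0.0.10 exporting /exports/shared:

volumes:
  shared:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw"
      device: ":/exports/shared"

Every node that schedules a task mounting shared needs network access to the NFS server; the mount is made independently on each node, so all replicas of A and the single C task would see the same files.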

First you should run docker node ls and check whether all of your nodes are available. If they are, check whether the workers have the images they need to run the containers. I would also try a constraint using the ID of each node instead; you can see the IDs with the previous command.
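For reference, a per-node constraint would look like the sketch below, where abc123 is a placeholder for a real value from the ID column of docker node ls (node.hostname works the same way with the hostname shown there):

  deploy:
    placement:
      constraints: [node.id == abc123]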

namokarm
  • hi, yes I did run the command as instructed in the documentation to validate; the workers are connected to the manager. To connect, I had used the command which comes as output when I fire up the manager. – Tanmay Bhattacharya May 02 '18 at 18:18
  • @Imrak I was wondering if the docker-compose document is proper or not, because in the document I haven't mentioned anywhere to distribute the service A between 2 workers. What if there are multiple workers present? Does Docker take care of that on its own? – Tanmay Bhattacharya May 02 '18 at 18:19
  • Yes, a single docker service with multiple replicas will scale out by default. – Bret Fisher May 03 '18 at 00:49
  • As @BretFisher mentioned, docker will scale by default. Note - the exception is when you are running a service in global mode, where each node will run a single instance. https://docs.docker.com/engine/reference/commandline/service_scale/#extended-description – namokarm May 03 '18 at 13:54
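For completeness, global mode is declared in a service's deploy section; this sketch would run exactly one task of the service on every node in the swarm:

  deploy:
    mode: global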

Run this on the host before docker stack deploy:

mkdir -p /srv/service/public
docker run --rm -v /srv/service/public:/srv/service/public my-container-with-data cp -R /var/www/app/public/. /srv/service/public

Then use the directory /srv/service/public as a volume in your containers.
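A hedged sketch of how that pre-seeded host directory could then be bind-mounted in the stack file (the service name web and the target path /var/www/app/public are this answer's example names):

services:
  web:
    image: my-container-with-data
    volumes:
      - /srv/service/public:/var/www/app/public

Keep in mind that a host path like this must already exist on every node where the task can be scheduled, so the mkdir/seed step has to run on each host (or be replaced by shared storage, as discussed in the comments above).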

4n70wa