
I am trying to set up Docker Swarm across 3 nodes (node1 (manager), node2 (worker), node3 (worker)) on DigitalOcean. I have 2 Django servers (Python images) running in containers and 1 PostgreSQL container.

Because the code lives on node1 and I bind-mounted a directory from node1, both Django containers are being created only on node1.

For the Postgres container I am able to attach a named volume (i.e. a volume mount), so the Postgres replicas are being spread across the 3 nodes.

I think the main issue is that the code exists only on node1 and I am hard-coding (i.e. bind-mounting) a directory path from node1, so these Django containers cannot be created on the other nodes, whereas the Postgres container uses a named volume and its replicas spread across the nodes.

Is there a workaround by which I can volume-mount the code directory for each of the Django servers, or maybe copy all the code into a volume? But that does not seem efficient and may take a lot of time as the code base grows.
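For reference, the "copy the code into a volume" idea I mentioned could be sketched like this, using a throwaway container to pre-populate a named volume (the volume name `website_code` and the paths are just illustrative, and this would have to be repeated on every node since named volumes are local to each node by default):

```shell
# Create a named volume and copy the code into it via a temporary container.
docker volume create website_code
docker run --rm \
  -v website_code:/dest \
  -v "$PWD/WebSite":/src \
  alpine sh -c "cp -a /src/. /dest/"
```

This is what makes it feel inefficient: the copy step has to be rerun on every node, on every code change.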

Below is my docker-stack.yml file, with comments marking what I described above:

version: "3"
services: 
  tatkal_website:    
    image: sourabhkondapaka/tatkal_website
    restart: on-failure 

    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s

    links:
      - "tatkal_booking_service:bookingService"


    networks:
      - net

    ports: 
      - 8000:8000

    volumes: 
      - ./WebSite/:/home/project/      # Bind mounting ./Website (django server directory)

    depends_on: 
      - db    

    command: >
     sh -c "python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"

  tatkal_booking_service:

    image: sourabhkondapaka/tatkal_booking_service   
    restart: on-failure 

    ports: 
      - 8001:8000

    volumes: 
      - ./Booking_Service/:/home/project/ # Bind mounting ./Booking_Service (django server directory)

    networks:
      - net


    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s

    depends_on: 
      - db    

    command: >
     sh -c "python3 manage.py makemigrations && python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"  

  db:

    image: sourabhkondapaka/tatkal_postgres_db  
    restart: on-failure 


    networks:
      - net

    deploy:
      replicas: 4
      update_config:
        parallelism: 2
        delay: 10s


    ports: 
      - 7890:5432

    volumes: 
      - data:/var/lib/postgresql/data       # Volume mounted postgres container. So no issues.

  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

networks:
  net:


volumes:
  data:
  • You should build your application code into your images, and not try to bind-mount it in; that's both a best practice and avoids the problem you're encountering. – David Maze Dec 05 '19 at 11:31
  • But it could lead to large container sizes right ? – Gru Dec 05 '19 at 11:37
  • I've only ever had problems with an image being "too big" when it gets above a gigabyte or so. A typical Python-based Web application that's not trying to include a machine-learning model won't have an issue. – David Maze Dec 05 '19 at 11:52
  • Oh. Thanks for the approach, will try. Another way I found is to copy all the code into a Docker volume and mount that volume in the .yml file instead of a bind mount. But in that case, Docker is not able to find the files copied into the volume, for some reason. Reference link: https://stackoverflow.com/questions/37468788/what-is-the-right-way-to-add-data-to-an-existing-named-volume-in-docker – Gru Dec 06 '19 at 04:01
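The approach suggested in the comments — building the code into the image so no bind mount is needed at all — could look roughly like this. (This Dockerfile is a sketch; the base image, `requirements.txt`, and paths are assumptions, not taken from the actual project.)

```Dockerfile
FROM python:3.8-slim
WORKDIR /home/project
# Install dependencies first so this layer is cached between code changes
COPY WebSite/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code into the image at build time
COPY WebSite/ .
EXPOSE 8000
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]
```

With the code baked in, the `volumes:` bind mount for the Django services can be dropped from the stack file entirely; the rebuilt image just needs to be pushed to a registry (e.g. Docker Hub) so that the worker nodes can pull it.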

0 Answers