
docker-compose.yml — this is my docker-compose file, used to deploy the service across multiple instances with docker stack. As you can see, the app service (the Laravel application) runs on 2 nodes, and the database (MySQL) runs on one of the nodes.

Full Code Repository: https://github.com/taragurung/Ci-CD-docker-swarm
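The stack is deployed with docker stack deploy. Assuming the stack is named smstake (the service name smstake_app in the logs below suggests this), the deploy command would be:

docker stack deploy -c docker-compose.yml smstake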

version: '3.4'
networks:
  smstake:   
    ipam:
      config:
        - subnet: 10.0.10.0/24

services:
    db:
        image: mysql:5.7
        networks:
          - smstake
        ports:
          - "3306"
        env_file:
          - configuration.env
        environment:
          MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
          MYSQL_DATABASE: ${DB_NAME}
          MYSQL_USER: ${DB_USER}
          MYSQL_PASSWORD: ${DB_PASSWORD}
        volumes:
          - mysql_data:/var/lib/mysql
        deploy:
          mode: replicated
          replicas: 1

    app:
        image: SMSTAKE_VERSION
        ports:
          - 8000:80
        networks:
          - smstake
        depends_on:
          - db
        deploy:
          mode: replicated
          replicas: 2

volumes:
    mysql_data:

The problems I am facing:

  1. Though the services are in running state, when I check the service logs I can see that the migrations succeed on only one node and do not run on the other. See the logs below.

  2. When I make the app service run only on the manager node using a placement constraint (sketched below), the application works great: I can log in and do everything. But when I let the app service run on any node, using just replicas, the login page shows up, but when I try to log in it redirects to a NOT FOUND page.
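For reference, a minimal sketch of the placement constraint mentioned in point 2; the exact block from my file is not shown here, so treat this as an assumption:

    app:
        # ...rest of the app service as above...
        deploy:
          mode: replicated
          replicas: 2
          placement:
            constraints:
              - node.role == manager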

The full logs when trying to run on 3 nodes, with the migration issues in detail, are at https://pastebin.com/wqjxSnv2. Below is a sample when running on 2 nodes.

Service logs checked using docker service logs <smstake_app>. The first block below is from the replica where the migration failed; the second is from the replica where it succeeded:

    | Cache cleared successfully.
    | Configuration cache cleared!
    | Dropped all tables successfully.
    | Migration table created successfully.
    | 
    | In Connection.php line 664:
    |                                                                                
    |   SQLSTATE[42S02]: Base table or view not found: 1146 Table 'smstake.migratio  
    |   ns' doesn't exist (SQL: insert into `migrations` (`migration`, `batch`) val  
    |   ues (2014_10_12_100000_create_password_resets_table, 1))                     
    |                                                                                
    | 
    | In Connection.php line 452:
    |                                                                                
    |   SQLSTATE[42S02]: Base table or view not found: 1146 Table 'smstake.migratio  
    |   ns' doesn't exist                                                            
    |                                                                                
    | 
    | Laravel development server started: <http://0.0.0.0:80>
    | PHP 7.1.16 Development Server started at Thu Apr  5 07:02:22 2018
    | [Thu Apr  5 07:03:56 2018] 10.255.0.14:53744 [200]: /js/app.js



    | Cache cleared successfully.
    | Configuration cache cleared!
    | Dropped all tables successfully.
    | Migration table created successfully.
    | Migrating: 2014_10_12_000000_create_users_table
    | Migrated:  2014_10_12_000000_create_users_table
    | Migrating: 2014_10_12_100000_create_password_resets_table
    | Migrated:  2014_10_12_100000_create_password_resets_table
    | Migrating: 2018_01_11_235754_create_groups_table
    | Migrated:  2018_01_11_235754_create_groups_table
    | Migrating: 2018_01_12_085401_create_contacts_table
    | Migrated:  2018_01_12_085401_create_contacts_table
    | Migrating: 2018_01_12_140105_create_sender_ids_table
    | Migrated:  2018_01_12_140105_create_sender_ids_table
    | Migrating: 2018_02_06_152623_create_drafts_table
    | Migrated:  2018_02_06_152623_create_drafts_table
    | Migrating: 2018_02_21_141346_create_sms_table
    | Migrated:  2018_02_21_141346_create_sms_table
    | Seeding: UserTableSeeder
    | Laravel development server started: <http://0.0.0.0:80>
    | PHP 7.1.16 Development Server started at Thu Apr  5 07:03:23 2018
    | [Thu Apr  5 07:03:56 2018] 10.255.0.14:53742 [200]: /css/app.css

I don't know if it's due to the migration problem or something else. Sometimes I can log in, and then after a while I get redirected to the Not Found page again when clicking a link inside the dashboard.

[Screenshot: the Not Found error page]

  • I think you should restrict the node for your MySQL db, because if it changes node then the new DB would be blank and in an inconsistent state. DBs should either be external or be fixed to one node. – Tarun Lalwani Apr 08 '18 at 08:42
  • @TarunLalwani OK, I thought the same and tried to run it on the manager node only. But when the app service is running, it tries to migrate the database on each and every node it runs on, because I have added the `entrypoint` with migration commands. – Tara Prasad Gurung Apr 08 '18 at 10:34
  • You should have a one-off service which does the migration; it can be launched on any node and it is fine for it to exit. It is up to you to run that as part of the full service or as a separate migration service – Tarun Lalwani Apr 08 '18 at 10:37
  • Making the database run on a particular node: done. The app is trying to run the migration on multiple nodes, not just on the node where the database is. The `depends_on` should have handled that, right? Here are a few changes I have made to run the migration separately: https://pastebin.com/m69ChKC2 – Tara Prasad Gurung Apr 08 '18 at 11:04
  • So is there still an issue, or are you just showing the changes? – Tarun Lalwani Apr 08 '18 at 11:09
  • @TarunLalwani it's trying to run the migration command on every instance, and thus the error. I'm planning to detect the instance first before running the migration command – Tara Prasad Gurung Apr 08 '18 at 11:10
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/168493/discussion-between-tarun-lalwani-and-tara-prasad-gurung). – Tarun Lalwani Apr 08 '18 at 11:10

1 Answer


So I ran your service and found a few issues:

  • The MySQL user in the docker-compose.yml was different. This may just have been changed for posting purposes, though.
  • In your Dockerfile you had used ENTRYPOINT, which caused the same command to run in the migration service as well. I changed it to CMD (see the note below).
  • You didn't run the migration service on the same network as your MySQL db, so MySQL was not reachable from it.
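The ENTRYPOINT vs CMD distinction matters here because of how Docker combines them with a compose-level command:. Roughly (a sketch of the semantics, not the exact files):

# With ENTRYPOINT, a compose-level `command:` is appended as arguments,
# so run.sh still runs in every service that uses the image:
ENTRYPOINT ["/tmp/run.sh"]   # -> /tmp/run.sh sh -xc "sleep 10 && ..."

# With CMD, a compose-level `command:` replaces it entirely,
# so only the migration service runs the migration:
CMD ["/tmp/run.sh"]          # -> sh -xc "sleep 10 && php artisan migrate:fresh 2>&1"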

This is the final compose file I used:

docker-compose.yml

version: '3.4'

networks:
  smstake:


services:
    db:
        image: mysql:5.7
        networks:
          - smstake
        ports:
          - "3306"
        environment:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: smstake
          MYSQL_USER: tara
          MYSQL_PASSWORD: password
        volumes:
          - mysql_data:/var/lib/mysql
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager


    app:
        image: 127.0.0.1:5000/myimage:latest
        ports:
          - 8000:80
        networks:
          - smstake
        depends_on:
          - db
          - migration
        deploy:
          mode: replicated
          replicas: 3

    migration:
        image: 127.0.0.1:5000/myimage:latest
        command: sh -xc "sleep 10 && pwd && php artisan migrate:fresh 2>&1"
        networks:
          - smstake
        depends_on:
          - db
        deploy:
          restart_policy:
            condition: on-failure
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager


volumes:
    mysql_data:
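One caveat: the sleep 10 in the migration service is a crude wait for MySQL to start accepting connections. A slightly more robust variant, as a sketch (it polls the db host from PHP, which is already in the image), would be:

    # keep retrying the TCP connection to db:3306 until it succeeds, then migrate
    command: sh -xc 'until php -r "exit(@fsockopen(\"db\",3306)?0:1);"; do sleep 1; done; php artisan migrate:fresh 2>&1'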

Dockerfile

FROM alpine

ENV \
  APP_DIR="/project" \
  APP_PORT="80"

# the "app" directory (relative to Dockerfile) containers your Laravel app...
##COPY app/ $APP_DIR
# or we can make the volume in compose to say use this directory

RUN apk update && \
    apk add curl \
    php7 \
    php7-opcache \
    php7-openssl \
    php7-pdo \
    php7-json \
    php7-phar \
    php7-dom \
    php7-curl \
    php7-mbstring \
    php7-tokenizer \
    php7-xml \
    php7-xmlwriter \
    php7-session \
    php7-ctype \
    php7-mysqli \
    php7-pdo_mysql \
    && rm -rf /var/cache/apk/*

RUN curl -sS https://getcomposer.org/installer | php -- \
  --install-dir=/usr/bin --filename=composer

##RUN cd $APP_DIR && composer install

RUN mkdir /apps
COPY ./project /apps
RUN cd /apps && composer install

WORKDIR /apps

RUN chmod -R 775 storage
RUN chmod -R 775 bootstrap

COPY ./run.sh /tmp
CMD ["/tmp/run.sh"]
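run.sh itself is not shown here. Based on the log output in the question (cache clears, then the dev server on port 80), a minimal sketch of what it presumably does; this reconstruction is an assumption, not the actual script:

#!/bin/sh
# Hypothetical run.sh, inferred from the service logs:
php artisan cache:clear     # "Cache cleared successfully."
php artisan config:clear    # "Configuration cache cleared!"
# Start Laravel's built-in dev server on port 80, matching
# "Laravel development server started: <http://0.0.0.0:80>"
exec php artisan serve --host=0.0.0.0 --port=80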

And then I ran the service again, and the migration went fine:

[Screenshot: migration output]

And the app worked too:

[Screenshot: the app working]

  • I am running into such issues: `smstake_migration.1.v2vqeq4nqzwb | In Connection.php line 664: | SQLSTATE[HY000] [2002] Operation timed out (SQL: SHOW FULL TABLES WHERE table_type = 'BASE TABLE')` I have made the migration service depend on the app service; if made to depend on `db` it's not even running. In both cases I got the issues – Tara Prasad Gurung Apr 10 '18 at 10:14
  • The issue is because of the cached volume container of MySQL. I would suggest creating a new cluster and trying again. I had a similar issue, created a new cluster on http://play-with-docker.com, tested it, and it works. Change the name of the MySQL data volume and then try again, and keep the dependency like I did. Also make sure no cached images are present – Tarun Lalwani Apr 10 '18 at 10:17
  • How do I make sure it's not using a cached image from the private repo? I guess it might be pulling from there, though I am building the image with `--no-cache`. I am still redirected to the Not Found page on login or user registration. I have updated the error page above – Tara Prasad Gurung Apr 10 '18 at 10:58
  • 1
    That may be happening because of session not being shared across nodes and its an app issue no. So your original issue is solved – Tarun Lalwani Apr 10 '18 at 11:31
  • The way it is working now was also working previously, apart from the additional migration problem, which is now solved. Are you sure it's an issue with the app? – Tara Prasad Gurung Apr 10 '18 at 11:39
  • If I am not wrong, it may be because the session is not being shared among all the nodes. Let me know if you find the full solution; for now I'm accepting your answer. Thanks a lot – Tara Prasad Gurung Apr 10 '18 at 11:48
  • You are using `SESSION_DRIVER=file`, which cannot work in cluster mode. Try using `database` for now and that should fix the issue (see the sketch after these comments) – Tarun Lalwani Apr 10 '18 at 11:56
  • That didn't solve it either, changing the SESSION_DRIVER type – Tara Prasad Gurung Apr 10 '18 at 12:35
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/168644/discussion-between-tarun-lalwani-and-tara-prasad-gurung). – Tarun Lalwani Apr 10 '18 at 12:39
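For reference, switching Laravel's session store to the database driver, as suggested in the comments above, would look roughly like this for a Laravel 5.x app; a sketch under that assumption:

# .env: store sessions in MySQL so every replica sees the same sessions
SESSION_DRIVER=database

# generate the sessions table migration, then run it (e.g. via the migration service)
php artisan session:table
php artisan migrate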