
I'm running a Node.js monorepo project using yarn workspaces. File structure looks like this:

workspace_root
    node_modules
    package.json
    apps
        appA
            node_modules
            package.json
        appB
            node_modules
            package.json
    libs
        libA
            dist
            node_modules
            package.json

All apps are independent, but they all require libA.

I'm running all these apps with docker-compose. My question is how to properly handle all the dependencies, as I don't want the node_modules folders to be synchronized with the host. Locally, when I run yarn install at the workspace root, it installs all dependencies for all projects, populating the different node_modules folders. In docker-compose, ideally each app should not be aware of the other apps.
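
For reference, the root package.json mostly just declares the workspace globs, roughly like this (a simplified sketch, the exact fields don't matter much):

{
  "name": "workspace_root",
  "private": true,
  "workspaces": [
    "apps/*",
    "libs/*"
  ]
}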

My approach so far, which works but is not ideal and not very scalable:

version: "3.4"

services:
  # The "core" service (here appA) is in charge of installing dependencies for ALL services. Each other service must
  # wait for the core and then just do its job, not having to handle the install.
  appA:
    image: node:14-alpine
    volumes: # We must mount every volume for the install
        - .:/app  # Mount the whole workspace structure
        - root_node_modules:/app/node_modules
        - appA_node_modules:/app/apps/appA/node_modules
        - appB_node_modules:/app/apps/appB/node_modules
        - libA_node_modules:/app/libs/libA/node_modules
    working_dir: /app/apps/appA
    command: [sh, -c, "yarn install && yarn run start"]

  appB:
    image: node:14-alpine
    volumes: # Only the volumes this service needs
        - .:/app  # Mount the whole workspace structure
        - root_node_modules:/app/node_modules
        - appB_node_modules:/app/apps/appB/node_modules
    working_dir: /app/apps/appB
    command: [sh, -c, "/scripts/wait-for-it.sh appA:4001  -- yarn run start"]

    # And so on for all apps....
  
volumes:
    root_node_modules:
        driver: local
    appA_node_modules:
        driver: local
    appB_node_modules:
        driver: local
    libA_node_modules:
        driver: local

The main drawbacks I see:

  • Service appA is responsible for installing the dependencies of ALL apps.
  • I have to create a volume for each app + one for the root node_modules
  • The whole project is mounted in each service, even though each service only uses a specific folder

I would like to avoid a build step for development: it has to be done every time you add a dependency, which is quite cumbersome and slows you down.

  • Would you like to also develop using docker with volume mounts? aka watching file changes and reloading the dockerized apps? – Itay Wolfish Jun 26 '22 at 12:32
  • My apps, with file watching, run inside Docker containers. As the source code is mounted into the container, all my local changes are instantly visible there, the changes are detected and the app is reloaded. This works great; being inside a container is pretty transparent for this. – Tdy Jun 27 '22 at 06:48

2 Answers


At the bottom of this answer I'm attaching an example repository I have created.

Basically, utilizing yarn workspaces, I have created a common Dockerfile for each of the packages/modules to use when built.

The entire repository is copied into each of the Docker images (it's not a good practice for later releasing the product; you would probably want to create a different flow for that).

So, since the entire repository is mounted into each of the running services, you can watch for changes in the libraries (in the repository I have configured nodemon so it will also watch the lib files).
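
As a rough sketch (simplified compared to the repository linked below, and the CMD is just a placeholder), the shared Dockerfile looks something like this:

# Shared Dockerfile, built with the repository root as the build context
FROM node:14-alpine
WORKDIR /app
# Copy the whole workspace so yarn workspaces can link the libs into the apps
COPY . .
RUN yarn install
# Each service overrides working_dir/command in docker-compose,
# typically running nodemon so both app and lib changes trigger a reload
CMD ["yarn", "start"]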

To sum this up:

  1. Hot reload even when the libraries change, because the entire project is mounted into each service's Docker container
  2. Utilizing yarn workspaces to manage the packages easily with convenience commands
  3. To rebuild each library every time it changes, each library should have its own Docker container raised by docker-compose (see the sketch right after this list)
  4. This development setup is not a good practice for any production-related process, like releasing the Docker images later, since the whole repository is available in the image
  5. Once the libraries are added as Docker services, each with hot reload, they will be rebuilt every time you make a change, so there is no need to run docker-compose build repeatedly
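
A minimal sketch of such a library service (it would sit next to the app services in docker-compose.yml; the build/watch command is just a placeholder for whatever script libA exposes):

  libA:
    build:
      context: .
      dockerfile: Dockerfile        # the shared Dockerfile built from the repo root
    volumes:
      - .:/app                      # the whole repository mounted, like the app services
    working_dir: /app/libs/libA
    command: yarn build --watch     # placeholder watch/build script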

Anyway, I would not worry much about the repeated docker-compose build: once the libraries are settled and changes become less frequent, you will find yourself rebuilding less (but in any case I gave a solution for that as well).

Github Repository example

Itay Wolfish

I believe that in your case the best thing to do is to build your own Docker image instead of using the node image directly. So, let's do some coding. First of all, you should tell Docker to ignore the node_modules folders. In order to do that, you'll need to create a .dockerignore and a Dockerfile for each of your apps. So your structure might look like this:

workspace_root
    node_modules
    package.json
    apps
        appA
            .dockerignore
            node_modules
            Dockerfile
            package.json
        appB
            .dockerignore
            node_modules
            Dockerfile
            package.json
    libs
        libA
            .dockerignore
            dist
            node_modules
            Dockerfile
            package.json

In each .dockerignore file, you can use the same contents shown below.

node_modules/
dist/

That will make Docker ignore those folders during the build. And now to the Dockerfile itself. In order to make sure your project runs fine inside your container, the best practice is to build your project in the container, not outside it. It avoids lots of "works on my machine" problems. That said, one example of a Dockerfile could look like this:

# build stage
FROM node:14-alpine AS build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY prod_nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

In this case I also used nginx, to make sure the user reaches the container through a proper web server. I'll leave the prod_nginx.conf at the end as well. But the point here is that you can just build that image, push it to Docker Hub, and from there use it in your docker-compose.yml instead of using a raw node image.
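
For example, building and pushing such an image could look like this (the account name is just a placeholder, and note that image names must be lowercase):

# Build the image using the app folder (with its Dockerfile and .dockerignore) as context
docker build -t mydockeraccount/appa ./apps/appA
# Push it to Docker Hub so the compose file below can pull it
docker push mydockeraccount/appa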

Docker-compose.yml would be like this:

version: "3.4"

services:
  appA:
    image: mydockeraccount/appa  # image names must be lowercase
    container_name: container-appA
    ports:
      - "8080:80"
    ....

Now, as promised, the prod_nginx.conf

user                    nginx;
worker_processes        1;
error_log               /var/log/nginx/error.log warn;
pid                     /var/run/nginx.pid;
events {
    worker_connections  1024;
}

http {
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    log_format          main '$remote_addr - $remote_user [$time_local] "$request" '
                             '$status $body_bytes_sent "$http_referer" '
                             '"$http_user_agent" "$http_x_forwarded_for"';
    access_log          /var/log/nginx/access.log main;
    sendfile            on;
    keepalive_timeout   65;
    server {
        listen          80 default_server;
        server_name     _;
        index           index.html;
        location / {
            root        /usr/share/nginx/html;
            index       index.html;
            try_files   $uri $uri/ /index.html;
        }
    }
}

Hope it helps. Best regards.

Huander Tironi