I'm running a Node.js monorepo project using yarn workspaces. File structure looks like this:
workspace_root
  node_modules
  package.json
  apps
    appA
      node_modules
      package.json
    appB
      node_modules
      package.json
  libs
    libA
      dist
      node_modules
      package.json
All apps are independent, but they all require libA.
I'm running all these apps with docker-compose. My question here is how to properly handle all the dependencies, as I don't want the node_modules folders to be synchronized with the host.
Locally, when I run yarn install at the workspace root, it installs all dependencies for all projects, populating the different node_modules folders.
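For reference, the root package.json ties the projects together through the workspaces field, roughly like this (the exact globs here are illustrative):

{
  "private": true,
  "workspaces": [
    "apps/*",
    "libs/*"
  ]
}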
In docker-compose, ideally each app should not be aware of the other apps.
My approach so far, which works but is neither ideal nor very scalable:
version: "3.4"
services:
  # The core is in charge of installing dependencies for ALL services. Each
  # service must wait for the core, and then just do its job, not having to
  # handle the install.
  appA:
    image: node:14-alpine
    volumes: # We must mount every volume for the install
      - .:/app # Mount the whole workspace structure
      - root_node_modules:/app/node_modules
      - appA_node_modules:/app/apps/appA/node_modules
      - appB_node_modules:/app/apps/appB/node_modules
      - libA_node_modules:/app/libs/libA/node_modules
    working_dir: /app/apps/appA
    command: [sh, -c, "yarn install && yarn run start"]
  appB:
    image: node:14-alpine
    volumes:
      - .:/app # Mount the whole workspace structure
      - root_node_modules:/app/node_modules
      - appB_node_modules:/app/apps/appB/node_modules
    working_dir: /app/apps/appB
    command: [sh, -c, "/scripts/wait-for-it.sh appA:4001 -- yarn run start"]
  # And so on for all apps...
volumes:
  root_node_modules:
    driver: local
  appA_node_modules:
    driver: local
  appB_node_modules:
    driver: local
  libA_node_modules:
    driver: local
The main drawbacks I see:
- appA is responsible for installing the dependencies of ALL apps.
- I would like to avoid an image build step for development, as it has to be redone each time you add a dependency; it's quite cumbersome and it slows you down.
I believe that in your case, the best thing to do is to build your own Docker image instead of using the raw node image. So, let's do some coding. First of all, you should tell Docker to ignore the node_modules folders. To do that, you'll need to create a .dockerignore and a Dockerfile for each of your apps. Your structure might then look like this:
workspace_root
  node_modules
  package.json
  apps
    appA
      .dockerignore
      node_modules
      Dockerfile
      package.json
    appB
      .dockerignore
      node_modules
      Dockerfile
      package.json
  libs
    libA
      .dockerignore
      dist
      node_modules
      Dockerfile
      package.json
Each .dockerignore file can have the same content:
node_modules/
dist/
That will make Docker ignore those folders during the build. Now, on to the Dockerfile itself. To make sure your project runs fine inside your container, the best practice is to build your project in the container, not outside it. That avoids a lot of "works fine on my machine" problems. With that said, one example of a Dockerfile could look like this:
# build stage: install dependencies and build the app inside the container
FROM node:14-alpine AS build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# production stage: ship only the built artifacts, served by nginx
FROM nginx:stable-alpine AS production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY prod_nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
In this case I also used nginx, to make sure users reach the container through a proper web server. I'll include the prod_nginx.conf at the end. The point here is that you can build that image, push it to Docker Hub, and from there use it in your docker-compose.yml instead of using a raw node image.
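For example, building and pushing appA could look like this (the image name and build context are placeholders; adjust them to your own account and layout):

docker build -t mydockeraccount/appa apps/appA
docker push mydockeraccount/appa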
Your docker-compose.yml would then look like this:
version: "3.4"
services:
  appA:
    image: mydockeraccount/appa
    container_name: container-appA
    ports:
      - "8080:80"
    # ...
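Since the images are prebuilt, bringing the stack up no longer runs any install step, for example:

docker-compose pull
docker-compose up -d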
Now, as promised, the prod_nginx.conf:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80 default_server;
        server_name _;
        index index.html;

        location / {
            root /usr/share/nginx/html;
            index index.html;
            try_files $uri $uri/ /index.html;
        }
    }
}
Hope it helps. Best regards.