
Crafting multi-stage builds with Docker in Node.js

By GaneshMani. Originally published at cloudnweb.dev.

Docker has become an indispensable tool for development. Every day, developers face new challenges in containerizing their applications, and one of the important problems is containerizing an application for different environments. This article covers crafting multi-stage builds with Docker in Node.js.

You might ask: why do we need multi-stage builds for Node.js applications? Why can't we just build a single-stage image and deploy it to the server?

We are in a development era where compilers and transpilers play a vital role, especially in the JavaScript ecosystem. TypeScript and Babel are two prominent examples.

Before multi-stage builds

Before the concept of multi-stage builds, an application would have two Dockerfiles: one for development and another for production. This has been referred to as the builder pattern. But maintaining two Dockerfiles is not ideal.

Dockerfile.dev

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build

Dockerfile

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY ./dist ./dist
EXPOSE 4002
CMD npm start

Although this solves the problem of separate development and production image builds, it is expensive in the long run: moving artifacts between the two builds takes extra steps, and the intermediate images consume a lot of local disk space.
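To illustrate the extra ceremony, here is a sketch of the builder-pattern workflow the two Dockerfiles imply (the image and container names are hypothetical, not from the article):

```shell
# Builder pattern, sketched: build a dev image, extract the compiled
# dist folder to the host, then build the lean production image.
docker build -f Dockerfile.dev -t app-build .
docker create --name extract app-build          # create (but don't run) a container
docker cp extract:/usr/src/app/dist ./dist      # copy the artifacts to the host
docker rm extract                               # discard the helper container
docker build -f Dockerfile -t app .             # production image picks up ./dist
```

Every build means repeating this dance, which is exactly the overhead multi-stage builds remove.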

Using multi-stage builds

A multi-stage build combines the different environment Dockerfiles into one file that produces the production build. For example, a staging stage compiles the application source code, and the final stage copies only that compiled output into the image.

Let's walk through an example of a multi-stage build for Node.js with Babel and MongoDB. The complete source is available in this repository.

Create a directory and initialize the Application with Express and Babel.
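The project setup itself is not shown here; a minimal sketch of the `package.json` scripts and `.babelrc` the Dockerfiles rely on, assuming `src/index.js` as the entry point, might look like this:

```json
{
  "scripts": {
    "dev": "nodemon --exec babel-node src/index.js",
    "build": "babel src -d dist",
    "start": "node dist/index.js"
  }
}
```

```json
{
  "presets": ["@babel/preset-env"]
}
```

The `build` script compiles `src` into `dist`, and `start` runs the compiled output, which matches what the build stages below expect.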

Staging Build

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package.json ./
COPY .babelrc ./
RUN npm install
COPY ./src ./src
RUN npm run build

The stage above takes node:10.15.2-alpine as the base image and copies the source code along with the Babel config. It compiles the code and stores the output in the dist folder inside the container.

Final Build

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package.json ./
COPY .babelrc ./
RUN npm install
COPY --from=0 /usr/src/app/dist ./dist
EXPOSE 4002
CMD npm start

This stage takes the compiled output from the previous staging build and copies it into a fresh image. The magic happens in the line COPY --from=0.

The COPY --from=0 line copies just the build artifacts from the previous stage (stage 0, the first FROM) into the new stage.

Naming build stages

Instead of referring to build stages by number, you can name them and use the name as the reference. For example, here we give the staging build the name appbuild.

Staging Docker Build

FROM node:10.15.2-alpine AS appbuild
WORKDIR /usr/src/app
COPY package.json ./
COPY .babelrc ./
RUN npm install
COPY ./src ./src
RUN npm run build

Final docker Build

We refer to the previous stage with COPY --from=appbuild:

FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package.json ./
COPY .babelrc ./
RUN npm install
COPY --from=appbuild /usr/src/app/dist ./dist
EXPOSE 4002
CMD npm start

Complete Dockerfile

# Build Stage 1
# This stage compiles the application into the dist folder
#
FROM node:10.15.2-alpine AS appbuild
WORKDIR /usr/src/app
COPY package.json ./
COPY .babelrc ./
RUN npm install
COPY ./src ./src
RUN npm run build

# Build Stage 2
# This stage copies the compiled output from the appbuild stage
#
FROM node:10.15.2-alpine
WORKDIR /usr/src/app
COPY package.json ./
COPY .babelrc ./
RUN npm install
COPY --from=appbuild /usr/src/app/dist ./dist
EXPOSE 4002
CMD npm start
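To build an image from the complete Dockerfile (the tag name `app` here is an assumption, not from the article):

```shell
# Build the multi-stage image; only the final stage ends up in the tag.
docker build -t app .
# Inspect the result; the staging stage's node_modules and src are not in it.
docker image ls app
```

Because the final image contains only `dist` plus production dependencies, it stays much smaller than a single-stage image that carries the whole toolchain.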

Once the Dockerfile is complete, create a Docker Compose file to link multiple containers together.

Create a docker-compose.yml file and add the following:

version: "3"

services: 
  app:
    container_name: app
    restart: always
    build: .
    environment: 
      - PORT=4002
    ports: 
      - "4002:4002"
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes: 
      - ./data:/data/db
    ports: 
      - "27017:27017"        
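Inside the Compose network, the `mongo` service is reachable by its service name via Docker's built-in DNS. A minimal sketch (hypothetical, not from the article's repository) of how the app could read the configuration that docker-compose.yml provides:

```javascript
// Hypothetical config helper: reads PORT from the compose environment
// and builds a MongoDB URL using the service name "mongo" as hostname.
function getConfig(env) {
  return {
    port: Number(env.PORT) || 4002,
    mongoUrl: `mongodb://${env.MONGO_HOST || "mongo"}:27017/app`,
  };
}

const config = getConfig(process.env);
console.log(`app on port ${config.port}, db at ${config.mongoUrl}`);
```

The `links:` key is legacy; on modern Compose, services on the same default network already resolve each other by name.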

After that, run Docker Compose with the command:

docker-compose up

You can see that Docker builds the intermediate stage and uses it to build the final one. The intermediate stage does not become part of the final image; its layers are left untagged and can be cleaned up with docker image prune.
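As a side note, you can also stop the build at a named stage with `--target`, which is handy for inspecting or debugging the staging output (the tag name is an assumption):

```shell
# Build only up to the appbuild stage, skipping the final stage.
docker build --target appbuild -t app:staging .
```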

Complete Source code is available in this repository


Discussion


Using a process manager inside Docker is a bad practice.

Docker is made to containerize processes, and is a process manager in itself.

You can configure your Docker instance to handle container crashes differently for your container, and should try to avoid relying on an internal process manager.

 
 

The Docker Integration paragraph lists features that are all supported by production docker infrastructures such as Kubernetes or Swarm.

The process duplication part is especially concerning here, as the very purpose of Docker is to isolate processes and allow them to run solo and independent.

Per pm2's docs (which you linked), they're re-implementing inside Docker what compose / kube / swarm already provide.

I get the point. You may not be in favour of using pm2 inside Docker, but sometimes application requirements differ and you may need to run two Node.js applications in one Docker container. If you want to run a frontend and a backend application in the same container, pm2 works better than other workarounds.

Anyway, you are right. We don't need pm2 in Docker in most circumstances. I will change it. Thanks for the heads up!

the application requirements are different and you may need to run two nodejs application in one docker container

That's generally due to bad application design, and a serious misuse of Docker.