My current primary tech stack is Node.js/JavaScript and, like many teams, I moved our development and production environments into Docker containers. However, when I started to learn Docker, I realized that most articles focused on either development or production environments, and I could find nothing about how to organize your Docker configuration to be flexible for both cases.
In this article, I demonstrate different use cases and examples of Node.js Dockerfiles, explain the decision-making process, and help you envision what your workflow should look like with Docker. Starting with a simple example, we then review more complicated scenarios and workarounds to keep your development experience consistent with or without Docker.
Disclaimer: This guide is long and aimed at audiences with varying levels of Docker skill; at some points, the instructions will be obvious to you, but I will try to make relevant points alongside them in order to provide a complete vision of the final setup.
Prerequisites
Described cases
- Basic Node.js Dockerfile and docker-compose
- Nodemon in development, Node in production
- Keeping the production Docker image free of devDependencies
- Using a multi-stage build for images that require node-gyp support
Add .dockerignore file
Before we start to configure our Dockerfile, let’s add a .dockerignore file to your app folder. The .dockerignore file excludes the files and folders listed in it from the COPY/ADD instructions during the build.
node_modules
npm-debug.log
Dockerfile*
docker-compose*
.dockerignore
.git
.gitignore
README.md
LICENSE
.vscode
Basic Node.js Dockerfile
To ensure a clear understanding, we will start with a basic Dockerfile you could use for simple Node.js projects. By simple, I mean that your code does not have any extra native dependencies or build logic.
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]
You will find something like this in every Node.js Docker article. Let’s briefly go through it.
WORKDIR /usr/src/app
The workdir is the default directory used for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions. In some articles you will see people do mkdir /app and then set it as the workdir, but this is not best practice. Use the pre-existing folder /usr/src/app, which is better suited for this.
COPY package*.json ./
RUN npm install
Here’s another best-practice adjustment: copy your package.json and package-lock.json before you copy your code into the container. Docker will cache the installed node_modules as a separate layer; then, if you change your app code and execute the build command, node_modules will not be installed again as long as package.json did not change. Generally speaking, even if you forget to add those lines, you will not encounter a lot of problems. Usually, you will need to run docker build only when your package.json has changed, which leads to installing from scratch anyway. In other cases, you don’t run docker build too often after your initial build in the development environment.
Where docker-compose comes in
Before we can run our app in production, we have to develop it. The best way of orchestrating and running your Docker environment is docker-compose: in a YAML file with an easy-to-use syntax, you define the list of containers/services you want to run along with instructions for them.
version: '3'
services:
  example-service:
    build: .
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3000:3000
      - 9229:9229
    command: npm start
In the basic docker-compose.yaml configuration above, the build is done using the Dockerfile inside your app folder; your app folder is then mounted into the container, and the node_modules installed inside the container during the build are not overridden by your current folder. Port 3000 is exposed to your localhost, assuming you have a web server running; port 9229 is used for exposing the debug port.
Now run your app with:
docker-compose up
Or use the VS Code Docker extension for the same purpose.
With this command, we expose ports 3000 and 9229 of the Dockerized app to localhost, mount the current folder with the app to /usr/src/app, and use a hack to prevent the node modules installed in the container from being overridden by the local machine.
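If you want to attach a debugger through the exposed 9229 port, a minimal VS Code launch.json sketch could look like this (assuming the app lives in /usr/src/app inside the container; adjust the paths to your setup):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to Docker",
      "address": "localhost",
      "port": 9229,
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/usr/src/app"
    }
  ]
}
```

The localRoot/remoteRoot mapping is what lets breakpoints set in your local files resolve to the paths inside the container.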
So can you use that Dockerfile in development and production?
Yes and no.
Differences in CMD
First of all, you usually want your development environment to reload the app on a file change. For that purpose, you can use nodemon. But in production, you want to run without it. That means your CMD (command) for the development and production environments has to be different.
There are a few different options for this:
1. Replace CMD with the command for running your app without nodemon, which can be a separate defined command in your package.json file, such as:
"scripts": {
"start": "nodemon --inspect=0.0.0.0 src/index.js",
"start:prod": "node src/index.js"
}
In that case your Dockerfile could be like this:
FROM node:10-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "run", "start:prod" ]
However, because we use a docker-compose file for the development environment, we can have a different command inside it, exactly as in the previous example:
version: '3'
services:
### ... previous instructions
command: npm start
2. If there is a bigger difference, or you use docker-compose for both development and production, you can create multiple docker-compose files or Dockerfiles depending on your differences, such as docker-compose.dev.yml or Dockerfile.dev.
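As a sketch of the override approach (the file and service names here follow the earlier example and are otherwise arbitrary), a docker-compose.dev.yml could carry only the development-specific bits and be layered on top of the base file:

```yaml
# docker-compose.dev.yml: development-only overrides.
# Run with both files so the second one overrides the first:
#   docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
version: '3'
services:
  example-service:
    command: npm start
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
```

Values in later -f files override or extend the matching service in earlier ones, so the base file can stay production-oriented.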
Managing packages installation
It’s generally preferable to keep your production image as small as possible, and you don’t want to install node module dependencies that are unnecessary in production. It is still possible to solve this while keeping one unified Dockerfile.
Revisit your package.json file and split devDependencies apart from dependencies. In brief, if you run npm install with the --production flag or set your NODE_ENV to production, devDependencies will not be installed. We will add extra lines to our Dockerfile to handle that:
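For illustration, a split package.json might look like this (the concrete packages are just examples; anything needed only at development time, such as nodemon, belongs in devDependencies):

```json
{
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "nodemon": "^1.19.4"
  }
}
```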
FROM node:10-alpine
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "run", "start:prod" ]
To customize the behaviour, we use:
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
Docker supports passing build arguments through the docker command or docker-compose. NODE_ENV=development will be used by default until we override it with a different value.
Now, when you build your containers with the docker-compose file, all dependencies will be installed; when you build for production, you can pass the build argument as production and devDependencies will be ignored. Because I use CI services for building containers, I simply add that option to their configuration.
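As a sketch, the build argument can be passed either directly to docker build or through the build section of a compose file (the image tag here is hypothetical):

```yaml
# Production build with plain Docker, e.g. in a CI pipeline:
#   docker build --build-arg NODE_ENV=production -t example-app .
# The same argument passed via docker-compose:
version: '3'
services:
  example-service:
    build:
      context: .
      args:
        NODE_ENV: development
```

Either way, the value lands in the ARG NODE_ENV of the Dockerfile and from there in the ENV seen by npm install.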
Using multi-stage build for images requiring node-gyp support
Not every app you try to run in Docker will use pure-JS dependencies exclusively; some require node-gyp and extra natively installed OS libraries.
To solve that problem, we can use multi-stage builds, which let us install and build all dependencies in a separate container and move only the result of the installation, without any garbage, to the final container. The Dockerfile could look like this:
# The instructions for the first stage
FROM node:10-alpine as builder
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
RUN apk --no-cache add python make g++
COPY package*.json ./
RUN npm install
# The instructions for the second stage
FROM node:10-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/node_modules ./node_modules
COPY . .
CMD [ "npm", "run", "start:prod" ]
In this example, we install and compile all dependencies, based on the environment, in the first stage, then copy the node_modules into the second stage, which we use in both the development and production environments.
The line RUN apk --no-cache add python make g++
may differ from project to project, since your modules may need other native build dependencies.
COPY --from=builder /usr/src/app/node_modules ./node_modules
In that line, we copy the node_modules folder from the first stage into the node_modules folder of the second stage. Note that source paths in COPY --from are resolved against the builder stage's filesystem, so we spell out /usr/src/app/node_modules; the destination is relative to the second stage's WORKDIR, /usr/src/app, so the modules end up in that folder.
Summary
I hope this guide helped you understand how to organize your Dockerfile and have it serve your needs for both development and production environments. We can sum up our advice as follows:
- Try to unify your Dockerfile for dev and production environments; if it does not work, split them.
- Don’t install dev node_modules for production builds.
- Don’t leave the native build dependencies required for node-gyp and node module installation in the final image.
- Use docker-compose to orchestrate your development setup.
- It's up to you what to choose for orchestrating your Docker containers in production; it could be docker-compose, Docker Swarm, or Kubernetes.
Top comments (38)
I'd suggest replacing npm install with npm ci for faster builds in your node Dockerfile 🤔

Tried npm ci and got a bug which has not been fixed yet: github.com/npm/npm/issues/21007. So I can't use it. Tested it on a simple configuration and it works well, but because of the bug I can't use it with unified dev/prod configs. I will wait until they fix it and then test it properly. It's especially weird that a PR with the fix has already been submitted, but nobody has even replied about plans to merge it.
Huh, that's an annoying bug.
Why would you want to have node_modules as a volume though? 🤔

When you mount the app into the container, it completely overrides the destination folder, so the modules installed during the build vanish. I want to keep them, so I use that hack to exclude the node modules folder. I did not find any better solution for the time being.
I get that (we usually even add node_modules to .dockerignore to evade cross-platform compat issues). I'm just not entirely sure why you'd want to have node_modules as a volume since you run npm install during image build anyway. Am I missing something? 🤔

.dockerignore only works on the COPY/ADD commands at build time. But when you mount a folder, it overrides everything that was copied/installed into the container during the build.
it gives you 3 options:
But you are using COPY in the example Dockerfiles in the article - that's what confuses me 😅

Or are you talking about using a pre-built Docker image for development using local code? Then it makes sense, but the whole approach is indeed quite cumbersome 🤔
Goal: get a Dockerfile which fits development on a local machine.
Requirements: the app should not rely on anything on your local machine apart from the Docker installation and the app code.
For a Node.js app you need node_modules installed. So we need to install them somewhere, and that is what the 3 points in the previous comment are about.
So, we are happy to do npm install in the Dockerfile because that is good for both development and production environments. By default, node_modules is installed in the same folder as your app, in our case /usr/src/app/node_modules. The modules are installed during the build. Then, because development on a local machine requires that your code changes are reflected in the app inside Docker, we mount our local app folder (where we don't have node_modules) into the container. It overrides /usr/src/app in the container, and the app will not start without node_modules. To use the node_modules that were installed at build time, there is the hack of using a volume, as described on Stack Overflow.
Ah, I finally get it! 😅
Thanks for the detailed explanation!
Thanks a lot, that's why I'm writing articles :) because it's possible to get feedback. I had never heard about npm ci; I'm reading about it now and going to check it over the weekend.
Any idea why my node_modules is empty? 🤔 It annoys my IDE a lot 😅

Edit: fixed it with this approach => stackoverflow.com/a/61137716/8966651
Same problem here
Thanks. This is what I was looking for. Having node_modules also available on the host is essential for development.
Thanks for the article. On my own, I'm already using Docker that way, but I still haven't figured out the best way to have the node_modules folder available on the host and have my IDE work with it for autocomplete and more. (For TypeScript, for example, it's better to get the types from the packages.)
The only way I found is to install the packages locally on my own during the install process, but the two node_modules could differ if my machine's Node version differs from the container's, so that's already an issue. And I know it's not how Docker is designed, but in this case it would really be nice to have the files available.
Any idea? :)
The only way I've found to solve that => stackoverflow.com/questions/510976...
Hi, great article! Thanks for sharing your knowledge with us.
I have a doubt about best practices for handling environment variables.
Reading your Dockerfile, I felt a little strange seeing NODE_ENV=development in the builder and start:prod later on. I would expect one environment to be the 'default' and the other one to override what is needed. But I didn't see it here; is the default one 'development' or 'production'?
I understand npm install works based on the NODE_ENV variable, so whether development or production, it will work as expected.
Is there a similar solution for npm start? To run the correct command based on the NODE_ENV variable?

You can have your npm scripts like start:development and start:production. In the Dockerfile, you can use CMD [ "npm", "run", "start:${NODE_ENV}" ].
Hi, I have the same doubt; if you have found a solution, please share it, I would appreciate it.
If someone gets the following error on a SELinux-enabled machine (such as Fedora GNU/Linux):
change this:
to this:
This took some time to figure out, be sure to thank Stack Overflow ;)
Oh. My. God.
Thank you. I was close to literally pulling my hair out.
Hi Alex, thanks for the excellent article.
I am developing something similar at work and I have a question regarding docker compose and shared volumes that I hope you could help me with.
Basically I designed the Docker Environment so the web application was split up between code and a proxy server (nginx).
The container holding the code creates a shared volume, and then the container running Nginx serves its contents.
I made it this way so it would be easier in the future to replace Nginx with other servers (e.g. Apache).
Now my question is: do you think it is appropriate to initialize the container holding the code as a service in the docker-compose file? Its purpose is only to create the shared volume (it stops immediately after that).

I am sorry if this comes across as a very noob question, but I didn't find anything for or against this approach.
Thank you,
Gabriel
Hi Gabriel,
I'm not quite sure I understood exactly what's in your service. If it's something like a webpack/gulp website that you build and then use the built output as part of an nginx container, I don't see any problem with that.
In the docker-compose file for the microservices of one project, I also have a service that I execute with an empty command, because I have to build it and then run some commands through it via docker-compose run.
That's exactly it. It is a container that only compiles the code via webpack/grunt.
Thanks!
Although I agree Docker-Compose is the best local orchestration, Kubernetes reigns supreme for container orchestration. You should give that a shot next if you haven't already. Will make your deployments so much easier.
We already use Kubernetes in production and docker-compose for development environments. Kubernetes is now another trend that is hard to avoid.
Could you follow up with an article on Kubernetes? That would be awesome, because you explained things really well. Docker looks easy, but it's not, and I learned a lot from you tonight. Super appreciate all your effort!
Hi Alex, great stuff, I've been working on something similar in my company for quite a while. I wanted to ask you one more thing on the subject of your article: could you get all your Win, Linux & Mac based developers to use your Docker-based dev environment?
Nice article Alex. Good to see other people care about environmental parity / Docker is not just for production. A couple points to share:
"...Replace CMD with the command for running your app without nodemon..." Check out this article concerning ENTRYPOINT vs CMD. I found it super helpful, especially when writing my own images and needing to change the execution command.
I look forward to your next article, keep up the good work!
Link doesn’t work and I dug for the article on my iPad and couldn’t find it either. Any suggestions or alternatives?
Hi Alex,
I went through your tutorial, and all the steps went well. However, I ran into a problem: when I build my production docker-compose file before the development one, the app image cannot find nodemon. If I build development before production, all the development modules are available in the app image, and nodemon is available as well. Is it supposed to be so? Or did I miss something?
And another question: how do you install new dependencies in your images?