Using Docker for Node.js in Development and Production

Alex Barashkov on January 17, 2019

My current primary tech stack is Node.js/JavaScript and, like many teams, I moved our development and production environments into Docker containers...

I'd suggest replacing npm install with npm ci for faster builds in your node Dockerfile 🤔
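For reference, a minimal sketch of what that swap might look like in a Node Dockerfile (the image tag and paths are illustrative assumptions, not from the article):

```dockerfile
FROM node:10-alpine
WORKDIR /usr/src/app

# npm ci requires package-lock.json, installs exactly what it pins,
# and removes any existing node_modules first - faster and reproducible
COPY package.json package-lock.json ./
RUN npm ci

COPY . .
CMD ["npm", "start"]
```

Note that `npm ci` fails if `package-lock.json` is missing or out of sync with `package.json`, which is exactly what makes it predictable in builds.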


Tried npm ci and hit a bug which hasn't been fixed yet, so I can't use it. I tested it on a simple configuration and it works well, but because of the bug I can't use it with unified dev/prod configs. I'll wait until they fix it and then test it properly. It's especially weird that a PR with the fix has already been submitted, but nobody has even replied about plans to merge it.


Huh, that's an annoying bug.

Why would you want to have node_modules as a volume though? 🤔

When you mount the app into the container, it completely overrides the destination folder, so the modules installed during the build vanish. I want to keep them, so I use that hack to exclude the node_modules folder. I haven't found any better solution for the time being.

I get that (we usually even add node_modules to .dockerignore to avoid cross-platform compat issues). I'm just not entirely sure why you'd want to have node_modules as a volume, since you run npm install during the image build anyway. Am I missing something? 🤔

.dockerignore only applies to COPY/ADD commands at build time. But when you mount a folder, it overrides everything that was copied/installed into the container during the build.

That gives you 3 options:

  • also install the modules on your local machine, so they are present in the container after the mount. I don't like this: you end up relying on your machine's setup, and cross-platform problems will appear
  • use the hack described here to prevent the override
  • install node modules in a custom directory, as also described by the link above
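The hack in the second option can be sketched in docker-compose terms like this (the service name and paths are illustrative assumptions):

```yaml
services:
  app:
    build: .
    volumes:
      - .:/usr/src/app            # bind mount shadows everything in the image...
      - /usr/src/app/node_modules # ...except this anonymous volume, which keeps
                                  # the modules installed at build time
```

The anonymous volume is initialized from the image's contents on first run, so the build-time node_modules survives the bind mount above it.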

But you are using COPY in the example Dockerfiles in the article - that's what confuses me 😅
Or are you talking about using a pre-built docker image for development using local code? Then it makes sense, but the whole approach is indeed quite cumbersome 🤔

Goal: get a Dockerfile that fits development on a local machine.
Requirements: the app should not rely on anything on your local machine apart from the Docker installation and the app code.

For a Node.js app you need node_modules installed. So we need to install it somewhere, and that brings us to the 3 points in the previous comment.

So we're happy to do npm install in the Dockerfile, because that's good for both development and production environments. By default, node_modules is installed in the same folder as your app, in our case /usr/src/app/node_modules, and the modules are installed during the build. Then, because development on a local machine requires that your code changes are reflected in the app inside Docker, we mount our local app folder (where we don't have node_modules) into the container. That mount overrides /usr/src/app in the container, and the app won't start without node_modules. To keep using the node_modules installed at build time, there is the hack of declaring that folder as a volume, as described on Stack Overflow.

Ah, I finally get it! ๐Ÿ˜…
Thanks for the detailed explanation!


Thanks a lot, that's why I'm writing articles :) because it's possible to get feedback. I'd never heard about npm ci; I'm reading about it now and going to check it out over the weekend.


Hi Alex, thanks for the excellent article.

I am developing something similar at work and I have a question regarding docker compose and shared volumes that I hope you could help me with.

Basically I designed the Docker Environment so the web application was split up between code and a proxy server (nginx).

The container holding the code creates a shared volume, and then the container running Nginx serves its contents.

I made it this way so it would be easier in the future to replace Nginx with other servers (e.g. Apache).

Now my question is: do you think it is appropriate to initialize the container holding the code as a service in the docker-compose file? Its purpose is only to create the shared volume (it stops immediately after that).

I am sorry if this comes across as a very noob question but I didn't find anything against or in favor of this approach.

Thank you,


Hi Gabriel,

I'm not quite sure I understood what exactly is in your service. For example, if it's something like a webpack/gulp website that you build and then use the built output as part of an nginx container, I don't see any problem with that.

I also have, in the docker-compose file for one of my microservices projects, a service that I run with an empty command, because I have to build it and then run some commands through it via docker-compose run
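That pattern, a service that exists only to be invoked on demand, can be sketched like this (the service name and build command are illustrative assumptions):

```yaml
services:
  builder:
    build: .
    # exits immediately when the stack starts;
    # only used on demand via `docker-compose run`
    command: ["true"]
```

You would then invoke it explicitly, e.g. `docker-compose run builder npm run build`, which runs the given command in a fresh container for that service.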


That's exactly it. It is a container that only compiles the code via webpack/grunt.



If someone gets the following error on a SELinux-enabled machine (such as Fedora GNU/Linux):

      Error: EACCES: permission denied, scandir '/usr/src/app'
      example-service_1 | (node:1) UnhandledPromiseRejectionWarning: TypeError: Cannot read property 'get' of undefined

change this:

      - .:/usr/src/app

to this:

      - .:/usr/src/app:z

This took some time to figure out, be sure to thank Stack Overflow ;)


Oh. My. God.

Thank you. I was close to literally pulling my hair out.


Although I agree Docker-Compose is the best local orchestration, Kubernetes reigns supreme for container orchestration. You should give that a shot next if you haven't already. Will make your deployments so much easier.


We already use Kubernetes in production and docker-compose for development environments. Kubernetes is now another trend that's hard to avoid.


Could you follow up with an article on Kubernetes? That would be awesome, because you explained things really well. Docker looks so easy, but it's not, and I learned a lot from you tonight. Super appreciate all your effort!


Nice article Alex. Good to see other people care about environmental parity / Docker is not just for production. A couple points to share:

"...Replace CMD with the command for running your app without nodemon..." Check out this article concerning ENTRYPOINT vs CMD. I found it super helpful, especially when writing my own images and needing to change the execution command.
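The gist of ENTRYPOINT vs CMD can be sketched in a hypothetical Dockerfile (the image tag and file names are assumptions):

```dockerfile
FROM node:10-alpine
WORKDIR /usr/src/app
COPY . .

# ENTRYPOINT is the fixed executable; CMD supplies default arguments
# that `docker run <image> <args>` can override without replacing the entrypoint
ENTRYPOINT ["node"]
CMD ["src/index.js"]
```

With this layout, `docker run <image>` runs `node src/index.js`, while `docker run <image> scripts/other.js` swaps only the argument, keeping `node` as the executable.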

I look forward to your next article, keep up the good work!


Link doesn't work and I dug for the article on my iPad and couldn't find it either. Any suggestions or alternatives?


Hi Alex, great stuff, I've been working on something similar in my company for quite a while. Wanted to ask you one more thing on the subject of your article. Could you get all Windows, Linux & Mac based developers to use your Docker-based dev environment?


Thanks for the article. On my own, I'm already using Docker that way, but I still haven't figured out the best way to have the node_modules folder available on the host so my IDE can use it for autocomplete and more. (For TypeScript, for example, it's better to get the types from the packages.)

So the way I found is to install the packages locally myself during the install process, but the two node_modules folders could differ if the Node version on my machine differs from the container's, so that's already an issue... And I know it's not what Docker is designed for, but in this case it would really be nice to have the files available.

Any idea? :)
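One workaround (the "custom directory" option mentioned earlier in this thread) is to install the modules one level above the app and rely on Node's upward module resolution; a hypothetical sketch, with all paths being assumptions:

```dockerfile
FROM node:10-alpine

# install dependencies outside the app folder...
WORKDIR /usr/src
COPY package*.json ./
RUN npm install
# ...so CLI tools like nodemon stay on PATH
ENV PATH=/usr/src/node_modules/.bin:$PATH

# the bind mount of /usr/src/app no longer shadows the modules:
# require() walks up the tree and finds /usr/src/node_modules
WORKDIR /usr/src/app
COPY . .
CMD ["npm", "start"]
```

For the IDE side, one option is to copy the container-built modules out once after each rebuild, e.g. `docker cp $(docker create <image>):/usr/src/node_modules ./node_modules`, so the host copy matches the container's Node version without running npm locally.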


Hi Alex,
I went through your tutorial, and all the steps went well. However, I ran into a problem. When I build my production docker-compose file before the development docker-compose file, the app image can't find nodemon. If I build development before production, all the development modules are available in the app image, and nodemon is available as well. Is it supposed to be like that? Or did I miss something?
And another question: how do you install new dependencies in your images?


Hi, thanks for the post. On Windows, nodemon doesn't pick up file changes; you have to enable change propagation by running it with nodemon --legacy-watch src/index.js
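If you hit that on Windows, one place to apply the flag is the dev compose file's command override (the service name and entry file are illustrative assumptions):

```yaml
services:
  app:
    build: .
    volumes:
      - .:/usr/src/app
    # --legacy-watch makes nodemon poll for changes, which works across
    # Windows bind mounts where filesystem events don't propagate
    command: npx nodemon --legacy-watch src/index.js
```

Polling costs a bit of CPU, but it is the reliable fallback when inotify-style events don't cross the host/container boundary.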


Hey Alex, thank you for the article, it's helped me get up and running with node/docker better than any other article so far. I'm brand new to Docker so a lot of this is still kind of confusing to me. Was hoping to get a few questions answered.

1) If using yarn, does it need to be installed into the Docker container first? I switched out npm with yarn in the examples you gave and it worked fine, but I don't know if it's just because I have yarn installed globally on my pc.

2) I don't really get the concept of having a Dockerfile (which you said we're supposed to set up best for both prod and dev environments) and a docker-compose file. If the docker-compose is used for dev, why does the Dockerfile have to be configured for dev? I don't really understand when and how each of them is used relative to the other.

3) While developing, do you have to continually rebuild the image as you add dependencies?

Thank you for your time and for the article, much appreciated!


Hey @ohryan
1) yarn is part of the node Docker image, that's why it works for you
2) Actually I'm proposing to unify the dev and prod Dockerfiles where possible. In most of the projects I've worked on, they could easily be the same
3) That's the one downside: after changing your dependencies you have to rebuild the image. Fortunately that mostly happens at the beginning of a project. But it always depends on your docker-compose configuration; for example, in my example the goal was to rely only on Docker on the local machine, but with small changes you could switch to installing node modules on the local machine and then using them with Docker.


My experience with this on larger projects is that the file share between OS X and the Docker VM is too slow for development. You'll probably have to change the file-sharing mechanism at some point. To solve this I ended up installing Docker in a Vagrant VM and using NFS (with the NFS server running in the Linux VM) to share the files.


I found that using nodemon inside the container works, but it's slow on every change.
I can use it anyway, but it's slow; how do you deal with that?
Locally, nodemon shines.
