It is very common nowadays to develop projects made of multiple services. The most common example is to have at least a backend and a frontend as separate apps. And in more complex projects, we can have many more services, all running in parallel.
A developer often has to run these services simultaneously on their local machine.
The old way to do this is to simply start each service manually in a separate terminal. But this can quickly become cumbersome, as you may have experienced.
Some popular tools like `npm-run-all` make this easier, at the cost of adding dependencies. Combined with `yarn workspaces` or `lerna`, they allow for a pretty smooth developer experience.

Thanks to these tools, a developer can type a single command, for example `yarn dev`, and have their whole stack with all services started automatically. And a single `CTRL+C` in the terminal terminates all services in one move. Really nice, right?
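To make the comparison concrete, here's a sketch of what that setup might look like; the exact script and workspace names are hypothetical, not from any particular project. A root `package.json` uses `npm-run-all`'s `run-p` to start both services in parallel:

```json
{
  "private": true,
  "workspaces": ["backend", "frontend"],
  "scripts": {
    "dev": "run-p dev:backend dev:frontend",
    "dev:backend": "yarn workspace backend dev",
    "dev:frontend": "yarn workspace frontend dev"
  },
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
```

With this in place, `yarn dev` runs both `dev:*` scripts concurrently, and `CTRL+C` stops them all.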
There are, however, some downsides to this approach:
- More complex npm scripts in `package.json`
- New dependencies added to the project, which need to be maintained
- A mixing of concerns, where the project's code is now used not only to build services but also to orchestrate them
- If you use `yarn workspaces`: each service now has to use yarn as well. You get a sort of vendor lock-in that couples your services together. And what if we want a different language per service?
After using `yarn workspaces` in conjunction with `npm-run-all` for a while in all my projects, I recently switched to just using `docker-compose`.

As I've discovered, `docker-compose` can achieve all of the above, and way more:
✔️ Running all services concurrently
✔️ No extra npm dependencies with their added complexity: no `npm-run-all`, no `yarn workspaces`, and such
✔️ 100% separated and independent services, just standard apps
✔️ Ability to use a different language for each service, different node versions, or package managers
✔️ A simpler mental model
On top of that, by writing a separate `Dockerfile` for each service and then using `docker-compose` for orchestration in development, we gain tremendous advantages:
- The ability to use the exact same stack in all environments: development, staging, production..., and across the whole CI/CD pipeline.
- Extremely easy replication of the development environment on any machine. A new developer needs only `docker-compose` to start working. No more time lost re-building a dev environment!
- It doesn't matter if your services need different node versions, or ruby, python, clojure, databases, cobol... Everything can be spun up on a pristine machine with just 2 commands: an initial `docker-compose build`, then just a daily `docker-compose up`.
Here's our project structure:
```
my-app
- Readme.md
- backend
  - Dockerfile
  - package.json
  - ...
- frontend
  - Dockerfile
  - package.json
  - ...
- dev
  - docker-compose.yml
```
- Each service could be in a different language, use a different node version, package manager,...
- `docker-compose.yml` could perfectly well live in the project's root folder. I just like to create a `dev` folder to group all dev-related tools. It also makes clear to all developers (myself included) that this `docker-compose.yml` file is for development use only.
`backend/Dockerfile` is written with the production environment in mind. For example, the instructions `RUN yarn --prod --frozen-lockfile` and `CMD [ "yarn", "start" ]` are meant for production, but `docker-compose` will later allow us to override some parts locally to meet our development needs.
```dockerfile
# backend/Dockerfile
# ==================
# (production-friendly)

FROM node:14-alpine

WORKDIR /usr/src/app

# Copy these files from your host into the image
COPY yarn.lock .
COPY package.json .

# Run the command inside the image filesystem
RUN yarn --prod --frozen-lockfile

# Copy the rest of your app's source code from your host to the image filesystem:
COPY . .

# Which port is the container listening on at runtime?
# This should be the same port your server is listening to:
EXPOSE 8080

# Start the server within the container:
CMD [ "yarn", "start" ]
```
Almost identical to our backend `Dockerfile`, and also written with production in mind. `docker-compose` will allow us to override some instructions locally, just for development.
```dockerfile
# frontend/Dockerfile
# ===================
# (production-friendly)

FROM node:14-alpine

WORKDIR /usr/src/app

COPY yarn.lock .
COPY package.json .

RUN yarn --prod --frozen-lockfile

COPY . .

EXPOSE 3000

CMD [ "yarn", "start" ]
```
```yaml
version: "3"
services:
  backend:
    build: "../backend"
    ports:
      - 8080:8080
    command: sh -c "yarn && yarn dev"
    volumes:
      - ../backend:/usr/src/app
  frontend:
    build: "../frontend"
    ports:
      - 3000:3000
    command: sh -c "yarn && yarn dev"
    volumes:
      - ../frontend:/usr/src/app
```
Here, while reusing the 2 previously defined `Dockerfile`s, we can override certain commands and parameters.
In this case, `command` overrides the value of `CMD` defined in each `Dockerfile`: instead of `yarn start`, the containers now run `yarn && yarn dev`.

`volumes` allows us to map the frontend and backend folders on our machine to the ones inside the containers. It means that you can now edit the project files normally in your IDE, with all changes reflected instantly inside the containers.
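One optional refinement, an addition of my own rather than part of the setup above: since the whole service folder is mounted from the host, the `node_modules` installed inside the container and any installed on the host share the same directory. Keeping `node_modules` in a named volume avoids that clash:

```yaml
services:
  backend:
    volumes:
      - ../backend:/usr/src/app
      # keep container-installed dependencies out of the host folder:
      - backend_node_modules:/usr/src/app/node_modules

volumes:
  backend_node_modules:
```

The same pattern can be applied to the frontend service.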
For the first run, in a terminal, just type:
```shell
$ cd dev
$ docker-compose build
```
This will download the images defined in the `Dockerfile`s (e.g. `node:14-alpine`) and prepare the whole environment for both frontend and backend.

Note that you need to run this command only once initially, or after modifying a `Dockerfile`.
To run the whole stack and start coding:
```shell
$ cd dev
$ docker-compose up
```
From now on, all npm scripts and commands should be executed from within the containers, not on the host machine.
For example, if we want to add the package `classnames` to the frontend:
```shell
# in a new terminal:
$ cd dev
$ docker-compose exec frontend yarn add classnames
```
Phew! That's cumbersome, and a lot of typing, don't you think?
Don't worry, we'll see how to make it better in the next section:
Who enjoys long cumbersome typing? No one.
Here's one simple solution: let's add an `aliases.sh` file under the `dev` folder:
```
my-app
- dev
  - aliases.sh
```
With the following content:
```shell
# my-app/dev/aliases.sh
alias be="docker-compose exec backend"
alias fe="docker-compose exec frontend"
```
And let's source it in the current terminal:
```shell
$ . dev/aliases.sh
```
From now on:
```shell
# we can now type this:
$ fe yarn add classnames
$ be yarn add -D nodemon

# instead of:
# $ docker-compose exec frontend yarn add classnames
# $ docker-compose exec backend yarn add -D nodemon
```
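A small caveat: aliases only expand in interactive shells, so they won't work inside shell scripts. If you want the same shortcuts available in scripts too, a function-based variant (my own variation, not part of the original `aliases.sh`) behaves identically:

```shell
# dev/aliases.sh (function variant)
# Unlike aliases, functions also work in non-interactive shell scripts.
be() { docker-compose exec backend "$@"; }
fe() { docker-compose exec frontend "$@"; }
```

Usage is the same: `fe yarn add classnames`, `be yarn add -D nodemon`.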
To avoid sourcing it manually in every terminal, we can also do it once and for all in `.bashrc`:
```shell
# in /home/<USER>/.bashrc
# at the very end, just add this line:
. /<PATH_TO_MY_APP>/dev/aliases.sh
```
I would recommend doing this only when working continuously on a project, and removing this new line once it's not needed anymore.
Thanks to the `Dockerfile`s (written for production, remember?), we can run our services within the exact same OS and context across all our environments: development, test, staging, production,...
For example, if you use Google Cloud Run, you can now provide it with the `Dockerfile` of each service, and be assured that if your code runs fine locally, it should also run perfectly once deployed.
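As a sketch of what that might look like (the service name, region, and flags here are assumptions for illustration, not part of this project), deploying the backend from its `Dockerfile` with the `gcloud` CLI:

```shell
# Build and deploy the backend service from its Dockerfile.
# "backend", the region, and the auth flag are placeholders; adjust to your project.
gcloud run deploy backend \
  --source ./backend \
  --region us-central1 \
  --allow-unauthenticated
```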
It is now also very easy to add additional containers depending on your project's needs.

Let's say we need a `postgres` database, version 11.1, for development. We can just add it to `dev/docker-compose.yml`:
```yaml
version: "3"
services:
  backend:
    build: "../backend"
    ports:
      - 8080:8080
    command: sh -c "yarn && yarn dev"
    volumes:
      - ../backend:/usr/src/app
  frontend:
    build: "../frontend"
    ports:
      - 3000:3000
    command: sh -c "yarn && yarn dev"
    volumes:
      - ../frontend:/usr/src/app
  db:
    image: postgres:11.1
    command: "-c logging_collector=on"
    restart: always
    ports:
      - 5432:5432
    environment:
      POSTGRES_PASSWORD: changeme
      POSTGRES_USER: changeme
      POSTGRES_DB: changeme

  # Let's also provide an admin UI for the postgres
  # database, often useful during development:
  adminer:
    image: adminer
    restart: always
    ports:
      - 5000:8080
```
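Within the compose network, containers reach each other by service name, so the backend can connect to postgres at host `db` rather than `localhost`. One way to wire this up (the `DATABASE_URL` variable name is my own choice for illustration) is an environment variable on the backend service:

```yaml
services:
  backend:
    # ...build, ports, command, volumes as above...
    environment:
      # "db" is the postgres service name, resolvable inside the compose network:
      DATABASE_URL: postgres://changeme:changeme@db:5432/changeme
```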
We have seen how we can develop multiple concurrently running services, each in any language, with any kind of database, on any machine, without installing any of these on the host machine itself.
We just need to install
docker-compose (and an IDE), and that's it!
With this approach, each service is just a perfectly contained regular app.
Furthermore, we can now run each service within the exact same system (OS) across all environments and all developers' machines.
On-boarding new developers and setting up their development environment can traditionally take days. With this approach, it's a matter of minutes.
It also makes it near instantaneous to switch between different projects written in different languages or language versions.