In modern software development, Docker has become an essential tool for deploying and managing applications. It lets us run processes in isolated environments and ensures that an application behaves consistently across different environments, from development to production.
As the heading implies, I will show how to containerize a MERN (MongoDB, Express.js, React, Node.js) stack application using Docker.
What is Docker?
Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization.
Containers are a way to package and distribute software applications in a way that makes them easy to deploy and run consistently across different environments. They allow you to package an application, along with all its dependencies and libraries, into a single unit that can be run on any machine with a container runtime, such as Docker.
But why do we even need containers?
Developers work on different operating systems, such as Windows, Linux, or macOS, and the steps to run a project can vary with each environment. As a project grows, it also becomes extremely hard to keep track of the dependencies it uses.
Benefits of Using Containers
- Docker allows us to configure our application in a single file.
- Docker can run in isolated environments.
- Setting up an open-source project is easier.
- Installing databases and auxiliary services is also easier through Docker.
docker run -d -p 27017:27017 mongo
This single command starts a MongoDB instance on any operating system that has Docker installed.
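To confirm the container is actually up, you can list the running containers and connect to the database from the host (assuming the mongosh shell is installed locally):

```shell
# List running containers; the mongo container should appear here
docker ps

# Connect to the containerized MongoDB from the host (requires mongosh on the host)
mongosh mongodb://localhost:27017
```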
An example of what Docker does.
Docker isn't the only containerization tool; there are alternatives, such as Podman, depending on your use case.
What is MERN?
"MERN" refers to a stack of technologies used for building web applications. The acronym stands for MongoDB, Express.js, React, and Node.js.
MongoDB - It is a NoSQL database that stores data in JSON-like documents.
Express.js - It is a web application framework for Node.js, designed for building web applications and APIs. It provides a robust set of features that simplifies the development process.
React - A popular JavaScript library for building user interfaces. It allows developers to create large web applications that update and render efficiently in response to data changes.
Node.js - A JavaScript runtime built on Chrome's V8 engine. It allows developers to use JavaScript for server-side scripting, producing dynamic web page content before the page is sent to the user's browser.
Now, let's get to the main part.
To write a Dockerfile for a MERN app, we must first understand the application's structure and how it runs locally. For illustration, I will use CompileX, a MERN project I created.
It is a real-time compiler built using React for the frontend, and Node.js with Express.js for the backend. I have used socket.io for real-time communication.
Project Structure
CompileX/
├── src/                  // Frontend
│ ├── assets/
│ ├── components/
│ ├── pages/
│ ├── constants/
│ ├── App.css
│ ├── App.jsx
│ ├── index.css
│ ├── main.jsx
│ └── socket.js
│
├── index.html
├── package-lock.json
├── package.json
├── server.js             // Backend
├── tailwind.config.js
├── vite.config.js
├── README.md
Here, src contains my frontend, and server.js contains my backend.
To run the application locally in development, I have to start the backend with node server.js and the frontend with npm run dev (Vite configuration). I also have to install the dependencies required for both of them.
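Concretely, a local development session looks something like this (assuming the frontend and backend share the root package.json, as in this project's structure):

```shell
# Install dependencies for both frontend and backend
npm install

# Terminal 1: start the Express/socket.io backend
node server.js

# Terminal 2: start the Vite dev server for the React frontend
npm run dev
```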
Creating a Dockerfile
Your Dockerfile contains the instructions for building a Docker image of your application.
# Use an official Node.js image, based on Alpine Linux, as the base image.
FROM node:18-alpine
# Set the working directory inside the container to /app.
WORKDIR /app
# Copy the package.json and package-lock.json files to the working directory.
COPY package.json package-lock.json ./
# Install the dependencies specified in package.json.
RUN npm install
# Copy all files from the current directory on the host machine to the working directory in the container.
COPY . .
# Build the application. This assumes there's a build script in package.json.
RUN npm run build
# Expose port 3000 on the container, so it can be accessed from the host.
EXPOSE 3000
# Define the command to run when the container starts. This runs the server in production mode.
CMD ["npm", "run", "server:prod"]
Let's go through the Dockerfile:

- FROM node:18-alpine - The base image, pulled from Docker Hub. node:18-alpine is a lightweight Node.js image based on Alpine Linux. You can browse Docker Hub and choose the image that fits your project.
- WORKDIR /app - Sets /app as the working directory inside the container.
- COPY package.json package-lock.json ./ - Copies the dependency manifests into the container before the rest of the source, which lets Docker cache the installed dependencies as a layer. You can also write this as COPY package*.json ./.
- RUN npm install - Installs the Node.js dependencies inside the container.
- COPY . . - Copies all files from the current directory on the host to the /app directory in the container.
- RUN npm run build - Builds the application; typically this compiles the front-end assets.
- EXPOSE 3000 - Documents that the application listens on port 3000, so it can be mapped from the host.
- CMD ["npm", "run", "server:prod"] - Specifies the command that runs the application in production mode when the container starts.
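Because COPY . . copies everything from the host into the image, it is worth adding a .dockerignore file next to the Dockerfile so that node_modules and build artifacts are not copied in (a minimal sketch; adjust the entries to your project):

```
node_modules
dist
.git
```

This keeps the image smaller and ensures the dependencies used at runtime are the ones installed inside the container by RUN npm install.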
Building & Running the Dockerfile
docker build -t compile-x:v1 .
docker run -d -p 3000:3000 compile-x:v1
- build - After creating the Dockerfile, we build the Docker image for our application. Here I am creating a compile-x image and tagging it v1.
- run - To run the image we use docker run <image name>. I am running my image in detached mode (-d) and mapping port 3000 on the host to port 3000 in the container (-p 3000:3000).
To check the status of the container, run:
docker ps
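A few other commands are handy while the container is running (the container ID comes from the docker ps output):

```shell
# Follow the application's logs
docker logs -f <container-id>

# Stop and remove the container
docker stop <container-id>
docker rm <container-id>
```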
Creating docker-compose.yml
# Define the version of Docker Compose file format.
version: '3.8'
# Define the services (containers) to be run.
services:
app:
# Build the Docker image for the app service using the Dockerfile in the current directory.
build:
context: .
dockerfile: Dockerfile
# Map port 3000 of the container to port 3000 of the host machine.
ports:
- "3000:3000"
# Mount the current directory (.) on the host machine to /app in the container.
volumes:
- .:/app
# Set environment variables for the container.
environment:
- NODE_ENV=development
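The compose file above only defines the app itself. Since a MERN application also needs MongoDB, you could extend it with a database service, for example (a sketch: the MONGO_URL variable name is an assumption, so use whatever your server.js actually reads):

```yaml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      # Hypothetical variable name; match what your server code expects
      - MONGO_URL=mongodb://mongo:27017/compilex
    depends_on:
      - mongo

  mongo:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      # Persist database data across container restarts
      - mongo-data:/data/db

volumes:
  mongo-data:
```

Inside the Compose network, the app reaches the database by the service name mongo rather than localhost.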
Building & Running docker-compose.yml
docker-compose up --build
docker-compose up
- build - docker-compose up --build builds the Docker image and starts the container as defined in your docker-compose.yml file. You only need the --build flag the first time, or whenever the Dockerfile or dependencies change.
- run - After the image has been built, you can start the container with docker-compose up alone.
- access - Once the container is running, we can access our application at http://localhost:3000.
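A quick way to verify from the command line that the app is reachable:

```shell
# Should return the app's HTML if the container is serving correctly
curl http://localhost:3000
```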
Conclusion
By using Docker and Docker Compose, we have created an environment where our MERN stack application can run consistently across different systems. The Dockerfile specifies how to build the image, while docker-compose.yml defines how to configure and run the container. This setup is beneficial for both development and deployment, ensuring that your application behaves the same way regardless of where it's run.