This week, I continued my open source journey by making a meaningful contribution to yet another project.
My last pull request was for a UI bug fix in ChatCraft - a developer-oriented ChatGPT clone. It was part of the 5 contributions I made for Hacktoberfest during the month of October.
Since I am currently taking a cloud computing course at school, there's a lot we learn each week, and to be honest, there is not enough time to practice each technology before we move to the next one. That's fair, because the cloud is huge, and in one semester there is just enough time to scratch the surface.
I started my last set of contributions by adding a Continuous Integration pipeline to a JavaScript project so I could practice my newly acquired skill. Following the same principle, I decided to Dockerize a project this time, which would be a step up from adding a basic CI pipeline.
Table of Contents
1. The Project
2. Adding Dockerfiles 🚀
2.1. Frontend
a. Stage 1
b. Stage 2
c. Stage 3
2.2. Server
a. Stage 1
b. Stage 2
3. Setting up docker-compose
4. Pull Request(s)
5. Conclusion 🎇
6. Attributions
The Project
After hours and hours of searching, I finally found an issue requesting to Dockerize an application.
It was a full-stack social media application called ChatRoom, with a React Frontend and an Express.js Backend. Even though the project was fairly new, and not super complex, it was exactly what I was looking for this time.
In short, the issue asked for the app to be containerized. Therefore, the first step was to add Dockerfiles for both the Frontend and the Backend.
Adding Dockerfiles 🚀
I started out by setting up and running both apps manually to understand their environment and library requirements.
This wasn't particularly hard, as I was already pretty comfortable with both React and Express. As soon as I could run both of them locally, it was time to start writing the Dockerfiles that would be used to build Docker Images.
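For reference, getting both apps running locally came down to roughly the following (assuming the standard npm scripts; the client/ and server/ folder names match the docker-compose setup shown later in this post):
# Frontend: React dev server on port 3000
cd client && npm install && npm start
# Backend: Express server on port 3001
cd server && npm install && npm start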
Frontend
The very first step I take whenever adding a Dockerfile is to add a .dockerignore file to avoid transferring unnecessary files to the build context.
node_modules/
build/
.gitignore
Let's jump to the actual Dockerfile now.
I made sure to follow most of the best practices I had recently learnt, and wrote a multi-stage build to end up with the smallest possible final image.
Stage 1
In the first stage, I used a heavier base image to make sure that any utilities needed while installing the dependencies were available.
Note: I am starting my stage count from 0 in the Dockerfile comments.
###################################################################################################################
### STAGE 0: Install Dependencies ###
FROM node:20.9.0-bullseye@sha256:33e271b0c446d289f329aedafc7f57c41b3be1429956a7d1e174cfce10993eda AS dependencies
WORKDIR /app
# Copy package files
COPY package.json package-lock.json ./
# Clean install node dependencies defined in package-lock.json
RUN npm ci
###################################################################################################################
Stage 2
In the next stage, I copied all the dependencies installed in node_modules from the previous stage, transferred the source code from the build context, and built the application in the last layer. This would bundle the application into an /app/build/ directory, ready to be served to the users.
### STAGE 1: Build ###
FROM node:20.9.0-bullseye@sha256:33e271b0c446d289f329aedafc7f57c41b3be1429956a7d1e174cfce10993eda AS build
WORKDIR /app
# Copy the generated dependencies
COPY --from=dependencies /app /app
# Copy source code
COPY . .
# Build the application
RUN ["npm", "run", "build"]
###################################################################################################################
Stage 3
In the final stage, I deployed the application using nginx, a production-grade web server with various optimizations applied, including efficient caching.
###################################################################################################################
### STAGE 2: Deploy ###
FROM nginx:1.24.0-alpine@sha256:62cabd934cbeae6195e986831e4f745ee1646c1738dbd609b1368d38c10c5519 AS deploy
# Copy bundled artifacts into nginx serve directory
COPY --from=build /app/build/ /usr/share/nginx/html
# Nginx listens on port 80
EXPOSE 80
# Add a healthcheck layer (using busybox wget, since curl is not installed in alpine-based images)
HEALTHCHECK --interval=10s --timeout=10s --start-period=10s --retries=3 \
CMD wget -q --spider localhost:80 || exit 1
###################################################################################################################
And with that, I was able to build the image
docker build -t test-chatroom-ui .
and spin up a container with
docker run --rm -dit -p 3000:80 --name test-container-ui test-chatroom-ui
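To sanity-check the multi-stage build, you can compare the size of the final image against the roughly 1 GB bullseye builder, and hit the mapped port (the names come from the commands above; exact numbers will vary):
# List the final image and its size
docker image ls test-chatroom-ui
# nginx should answer with a 200 on the mapped port
curl -I localhost:3000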
Server
Now let's take a look at the corresponding files for the Backend. As you might have guessed, the first thing to do was to add a .dockerignore file, identical to the one for the UI.
.dockerignore
node_modules/
build/
.gitignore
However, the Dockerfile would be different, since the environment and server requirements vary.
This time, I only had 2 stages, as I used the Node server instead of nginx.
Stage 1
The first stage, like last time, was to transfer the package.json and package-lock.json files from the build context and install the required dependencies.
# Stage 0: Install Dependencies
# Pick a fatter 'node' base image
# This will have everything necessary to run node, pre-installed when we run the container
FROM node:20.9.0-bullseye@sha256:88893710dcd8be69730988c171d4a88ee71f14338b59a9a23a89ff22cecd6559 AS dependencies
# Use /app as our working directory
WORKDIR /app
# Copy the package.json and package-lock.json
# from the build context into the image
COPY package.json package-lock.json ./
# Clean install node dependencies defined in package-lock.json excluding dev dependencies
RUN npm ci --only=production
###################################################################################################################
Stage 2
The second and final stage focused on running the application and serving it on the port expected by the project maintainer.
# Stage 1: Run and serve the application
# Use a thinner version of os for the final image
FROM node:20.9.0-alpine@sha256:d18f4d9889b217d3fab280cc52fbe1d4caa0e1d2134c6bab901a8b7393dd5f53 AS run
# Use /app as our working directory
WORKDIR /app
# Copy the generated dependencies
COPY --from=dependencies /app /app
# Copy the source code from the build context into /app
COPY . .
# Environment variables
ENV MONGO_URL='secret'
ENV JWT_SECRET='secret'
ENV PORT=3001
# Build Args
ARG PORT=3001
# Start the container by running our server
# This is the default command that is run whenever a container is run using an image.
# It can be overridden by providing the custom command as the second positional argument.
CMD ["npm", "start"]
EXPOSE ${PORT}
# Add a healthcheck layer (again using busybox wget, since curl is not available in alpine images)
HEALTHCHECK --interval=30s --timeout=30s --start-period=10s --retries=3 \
CMD wget -q --spider localhost:${PORT} || exit 1
###################################################################################################################
You'll notice I have the secrets exposed in a file that will end up on GitHub, which is bad, but we can fix that in a later iteration (the maintainer had already exposed them).
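As a rough sketch of that later fix, the secrets could live in an env file that stays out of version control and get injected at container start instead of being baked into the image (the file name, values, and image tag below are all placeholders):
# .env -- add this file to .gitignore and .dockerignore; values are placeholders
MONGO_URL=mongodb://example-host:27017/chatroom
JWT_SECRET=replace-me
# Inject the variables at runtime with --env-file (the image tag here is hypothetical)
docker run --rm -dit -p 3001:3001 --env-file .env --name test-container-server test-chatroom-server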
You'll also notice a Healthcheck layer, which I forgot to discuss for the UI. What it does is fire a request at localhost:3001 every 30s and, based on the response, report the status of the container (starting, healthy, or unhealthy), with a maximum of 3 retries.
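You can watch a container move through those states using docker inspect; for example, for the UI container started earlier:
docker inspect --format '{{.State.Health.Status}}' test-container-ui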
To spin up a container, the steps were identical to last time, and everything worked perfectly.
Setting up docker-compose
Now that I was able to build images and run containers manually, it was time to automate the process of spinning up and tearing down the containers. You might have noticed that running a Docker container means typing super long commands each time, which can get pretty annoying over time.
From the official documentation,
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services. Then, with a single command, you create and start all the services from your configuration.
And for our application, we need to configure two services: one for the React Frontend and the other for the Express Backend.
Here's the configuration I came up with
services:
  server:
    container_name: server
    build: ./server
    ports:
      - '3001:3001'
  client:
    container_name: client
    build: ./client
    ports:
      - '3000:80'
And with that, I was able to spin up containers simply using
docker-compose up
Notice the logs from both the client and the server in the output. We can also spin them up in detached mode, like we did in the manual command, using
docker-compose up -d
And to tear them down
docker-compose down
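One gotcha worth noting: docker-compose up reuses images it has already built, so after editing a Dockerfile or any source that gets copied into an image, ask Compose to rebuild:
docker-compose up --build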
Pull Request(s)
I opened the following pull request after testing locally and optimizing the image builds.
https://github.com/danielmarv/ChatRoom/pull/5.
It was merged by the next morning.
Apart from this, I had also opened another pull request, to a project that hadn't been active for a while.
https://github.com/leodube/PenaltyBias/pull/44#issuecomment-1821842350
I don't expect this one to get merged anytime soon. I forgot to check the project activity before contributing.
Conclusion 🎇
In this post, I shared my experience Dockerizing a couple of open source projects. Not only was I able to add value to the community, but I also got to practice writing Dockerfiles and working with docker-compose, which I learnt recently.
All in all, it was a pretty worthwhile experience and I'll be following up with another similar post soon.
In the meantime, stay tuned!
Attributions
Cover Image by Alexander Fox | PlaNet Fox from Pixabay