Creating and deploying Docker containers for a web application can greatly simplify development, testing, and deployment processes. In this guide, we’ll walk through the steps to build and deploy Docker containers for a simple web application using Node.js, PostgreSQL as the database, and Nginx as a reverse proxy. By the end of this post, you’ll understand how to structure your project, write Dockerfiles, and use Docker Compose to manage your containers effectively.
Prerequisites
Before we start, ensure that you have Docker installed on your system. You can download and install Docker from the official Docker website.
Project Structure
For the sake of clarity, let’s organize our project as follows:
your_project/
├── docker-compose.yml
├── app/
│ ├── node_modules
│ ├── app.js
│ ├── package.json
│ ├── package-lock.json
│ ├── .dockerignore
│ ├── Dockerfile
├── nginx/
│ ├── Dockerfile
│ ├── nginx.conf
In this structure:
your_project is the root directory of your project.
app is the directory containing your Node.js application code.
nginx is the directory containing Nginx configuration.
docker-compose.yml is the Docker Compose configuration for managing the services.
Writing Dockerfiles
Node.js App Dockerfile (app/Dockerfile):
We start by defining a Dockerfile for our Node.js application. This file specifies how to build the container image for our Node.js app:
# Use an official Node.js runtime as the base image
FROM node:20-alpine
# Set the working directory in the container
WORKDIR /usr/src/app
# Copy package.json and package-lock.json to the container
COPY package*.json ./
# Install app dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port that your app will run on
EXPOSE 3000
# Define the command to start your application
CMD ["npm", "start"]
In this Dockerfile:
We start with the node:20-alpine image as the base image, which is a lightweight Node.js runtime based on Alpine Linux.
We set the working directory to /usr/src/app in the container.
We copy the package.json and package-lock.json files into the container before the rest of the source, so the dependency layer can be cached between builds.
We run npm install to install the application dependencies, then copy the remaining application code into the image. This way, dependencies are only reinstalled when the dependency files change, not on every code edit.
We expose port 3000, which is the port our Node.js application will listen on.
We define the command to start our application using npm start.
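Because the Dockerfile copies the build context into the image, it also helps to fill in the app/.dockerignore listed in the project structure, so host artifacts like node_modules are not sent to the Docker daemon. A minimal example:

```
node_modules
npm-debug.log
.git
```

This keeps builds fast and ensures the dependencies inside the image are the ones installed by npm install during the build, not whatever happens to be on your host.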
Nginx Dockerfile (nginx/Dockerfile):
Next, we create a Dockerfile for the Nginx container. This file specifies how to create a container for the Nginx web server with your custom configuration:
# Use an official Nginx image as the base
FROM nginx:alpine
# Copy your custom Nginx configuration file to the container
COPY nginx.conf /etc/nginx/conf.d/default.conf
# Expose the port that Nginx will listen on
EXPOSE 80
# Run Nginx in the foreground so the container stays alive
CMD [ "nginx", "-g", "daemon off;" ]
Here’s what’s happening in the Nginx Dockerfile:
We start with the official nginx:alpine image as the base image, which is a lightweight Nginx image based on Alpine Linux.
We copy our custom Nginx configuration file, nginx.conf, to the container. This file defines how Nginx will serve your Node.js application.
We expose port 80, the default HTTP port that Nginx will listen on.
We set the command to run Nginx in the foreground, which keeps the container alive; the base image's entrypoint script is inherited automatically.
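The nginx.conf file itself isn't shown above; a minimal reverse-proxy configuration might look like the following sketch, where the upstream host nodejs_app matches the Compose service name defined later and port 3000 matches the port the Node.js app listens on:

```nginx
server {
    listen 80;

    location / {
        # Forward requests to the Node.js service on the Compose network
        proxy_pass http://nodejs_app:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The proxy_set_header lines pass the original client details through to the Node.js app, which is useful for logging and for generating correct URLs.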
Docker Compose Configuration
The Docker Compose configuration, defined in docker-compose.yml, brings everything together and manages the containers. Here's what the docker-compose.yml file looks like:
version: '3'

services:
  postgres:
    image: postgres:alpine
    container_name: postgres_db
    environment:
      POSTGRES_USER: your_postgres_user
      POSTGRES_PASSWORD: your_postgres_password
      POSTGRES_DB: your_postgres_database
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - app_network

  nodejs_app:
    build:
      context: ./app
      dockerfile: Dockerfile
    container_name: node_app
    depends_on:
      - postgres
    volumes:
      - ./app:/usr/src/app
    expose:
      - "3000"
    networks:
      - app_network

  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile
    container_name: nginx_server
    ports:
      - "80:80"
    depends_on:
      - nodejs_app
    networks:
      - app_network

volumes:
  postgres_data:

networks:
  app_network:
    driver: bridge
In this Docker Compose configuration:
We define three services: postgres, nodejs_app, and nginx.
The postgres service uses the official PostgreSQL image, sets environment variables for database configuration, and uses a volume to persist PostgreSQL data.
The nodejs_app service is built using the Dockerfile in the app directory, depends on the postgres service, and exposes port 3000.
The nginx service is built using the Dockerfile in the nginx directory, depends on the nodejs_app service, and maps port 80 on the host to port 80 on the container.
Volumes are defined to persist PostgreSQL data.
A custom bridge network named app_network is created to connect the services.
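One caveat: depends_on only controls start order; it does not wait for PostgreSQL to be ready to accept connections. If your app connects to the database at startup, a healthcheck can close that gap. A sketch of the relevant additions (the interval and retry values are illustrative):

```yaml
services:
  postgres:
    # ...existing postgres configuration...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U your_postgres_user -d your_postgres_database"]
      interval: 5s
      timeout: 5s
      retries: 5

  nodejs_app:
    # ...existing nodejs_app configuration...
    depends_on:
      postgres:
        condition: service_healthy
```

With condition: service_healthy, Compose delays starting the Node.js container until the Postgres healthcheck passes.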
Building and Running the Containers
With your Dockerfiles and Docker Compose configuration in place, you can now build and run the containers. Open a terminal in the your_project directory and run the following command:
docker-compose up --build
If you are using Docker Compose V2, the equivalent command is docker compose up --build.
This command builds the Docker images based on the provided Dockerfiles, creates and starts the containers, and connects them on the specified network.
Once the containers are up and running, you should be able to access your Node.js application through Nginx. Open a web browser and navigate to http://localhost. Nginx will serve your Node.js application, and you should see "Hello World."
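Inside the Compose network, services reach each other by service name, so the Node.js app can reach the database at host postgres on port 5432. As a sketch (the variable names match the compose file, and a client library such as pg would accept a URL like this; note that you would also need to set these variables under the nodejs_app service's environment, since the compose file above only sets them on postgres):

```javascript
// Build a Postgres connection URL from the env vars set in docker-compose.yml.
// Inside the Compose network, the service name "postgres" is the hostname.
const {
  POSTGRES_USER = 'your_postgres_user',
  POSTGRES_PASSWORD = 'your_postgres_password',
  POSTGRES_DB = 'your_postgres_database',
} = process.env;

const connectionString =
  `postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}`;

console.log(connectionString);
```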
Conclusion
In this post, you’ve learned how to structure your project, write Dockerfiles for your Node.js application and Nginx, and use Docker Compose to manage your containers effectively. This approach gives you a portable and reproducible development and deployment environment, making it easier to work on web applications and collaborate with others. You’ve also set up PostgreSQL and, by using Docker volumes, ensured that its data is not lost when containers are destroyed.
By following these steps, you can confidently build and deploy containers for your web applications, improving the development and deployment processes.