Hello everyone,
Thank you for joining me for this article. Today we will talk about one of the most popular solutions out there for load balancing and reverse proxying: Nginx. We will also use Docker Compose and integrate everything with a small Node.js application for our example.
prologue & motivation
My motivation for this article comes from a side project I was working on not long ago. I reached a point in the development roadmap where I wanted to start testing things. I wanted a complete CI/CD pipeline that takes care of the many different steps in the deployment of this application, and one of these steps is a stress test, but alas, this cannot be achieved easily. The end goal is to build a microservice environment to support this application, but we are far away from there. I concluded that it all must exist on my workstation. All I have is a laptop running Ubuntu Desktop, Docker and Docker Compose, and with this we will win.
prerequisites
Before we begin, as always, let's establish the minimum we need to start and be productive: a machine with Docker and Docker Compose installed, Node.js, and a code editor (I will be using VS Code).
setup
Please open VS Code and open the built-in terminal. You can use the shortcut
ctrl + `
Now let's create a folder for this project and change directory into it
mkdir nginx_demo
cd nginx_demo
Now we want VS Code to use this folder as its context, by reopening it inside the nginx_demo folder
code -r .
setup example app
I want to point out that this app could be anything you want. It doesn't have to be Node.js, nor a REST API. You can choose whatever programming language or framework you feel comfortable with. I chose this example because it is minimalistic and, most importantly, because the app is not the focus.
Let's start with our Node.js Express application. It is a very basic setup, and we won't need to configure much.
We will create a folder named app and initialize a Node.js project inside it
mkdir app
cd app
npm init -y
Once the project is done initializing, we want to install a couple of npm packages to set up our REST API. We will use Express.js, a minimalist web framework for Node.js, and the cors package to handle CORS; both are saved as regular dependencies since we need them in our production setup.
npm i -S express cors
For our development setup, we will use the nodemon package, and we will also install several packages for type inference.
npm i -D nodemon @types/{node,express,cors,nodemon}
Now we want to create a file named index.js and add the following code to it. The code is simple and straightforward. We create an Express app and use the cors package to handle CORS. We also use the built-in os package to get the hostname of the machine running the app; we will need it later. Finally, express.json() and express.urlencoded() handle the request body.
// index.js
const express = require("express");
const cors = require("cors");
const os = require("os");

const PORT = 3333;
const HOST = "0.0.0.0";

const app = express();

app
  .use(cors())
  .use(express.json())
  .use(express.urlencoded({ extended: true }));

app.get("/", (req, res) => {
  return res.status(200).json({
    message: "Hello from /",
    host: req.hostname,
    os: os.hostname(),
  });
});

app.listen(PORT, HOST, () =>
  console.log(`Example app listening at http://localhost:${PORT}`)
);
Let's test our app, but beforehand we want to add scripts to our package.json file so we can run it with nodemon. Add the following lines to the scripts section of the package.json file.
"dev": "nodemon index.js",
"start": "node index.js"
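For reference, after this change the scripts section of package.json should look roughly like this (the other fields generated by npm init are omitted here):

```json
{
  "scripts": {
    "dev": "nodemon index.js",
    "start": "node index.js"
  }
}
```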
Now we can run our app using the following command
npm run dev
You can open the browser and navigate to http://localhost:3333 and you should see the following response
{
  "message": "Hello from /",
  "host": "localhost",
  "os": "..."
}
Now we want to create a Dockerfile for this application so we can dockerize it. Create a file named Dockerfile and add the following code to it. We use the official node image from Docker Hub, copy the package.json files, and install the dependencies. Then we copy the rest of the files and folders and run the app with the npm start command.
# api dockerfile
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3333
STOPSIGNAL SIGTERM
CMD ["npm", "start"]
We can now build our image using the following commands and test that it actually works, but I will leave that part to you.
docker build -t api .
docker run -p 3333:3333 api
setup nginx as reverse proxy
We will change directory back to the root folder nginx_demo, create a folder named nginx, and change directory into it.
cd ..
mkdir nginx
cd nginx
We need to create 2 files for our nginx setup.
touch nginx.conf Dockerfile
In this Dockerfile for our nginx image, we use the official nginx image from Docker Hub and copy our nginx.conf file into the nginx configuration folder. We also expose port 80 and use the STOPSIGNAL directive so the container shuts down gracefully.
# nginx dockerfile
FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]
Now we want to create a file named nginx.conf and add the following code to it.
# nginx.conf

# best practice to define the number of worker processes and events block
worker_processes 1;

events {
    worker_connections 1024;
}

# http block
http {
    sendfile on;

    # upstream servers that will handle the requests
    upstream myapi {
        # load balancing algorithm
        least_conn;
        # api is the name of my custom container
        # 3333 is the port that the app is listening to inside the container
        # custom Docker image name and port number are totally arbitrary
        server api:3333;
    }

    server {
        listen 80;

        location / {
            # the upstream name port 80 is proxying to
            proxy_pass http://myapi;
            gzip_static on;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Host $server_name;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
Let's explain what we have in this nginx.conf file. The upstream directive defines a group of servers (here just one, our api container) that will handle requests, and least_conn tells nginx to route each new request to the upstream server with the fewest active connections. The server block listens on port 80, the location directive matches all requests under /, and proxy_pass forwards them to the upstream group. The proxy_set_header lines preserve the original client information, such as the host and client IP, in the forwarded request.
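To build some intuition for what least_conn does, here is a toy JavaScript model of the selection rule. The server names and connection counts are made up for illustration, and real nginx also accounts for server weights:

```javascript
// Toy model of nginx's least_conn balancing: each new request
// goes to the upstream server with the fewest active connections.
function leastConn(servers) {
  return servers.reduce((best, s) => (s.active < best.active ? s : best));
}

const servers = [
  { name: "api_1", active: 4 },
  { name: "api_2", active: 1 },
  { name: "api_3", active: 2 },
];

const picked = leastConn(servers);
console.log(picked.name); // api_2
```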
setup docker compose
We will change directory back to the root folder nginx_demo and create a file named docker-compose.yml with the following code.
We will start with a setup that ensures everything works; once we are done, we will discuss some caveats and how to overcome them.
cd ..
touch docker-compose.yml
version: "3.8"

networks:
  nginx_demo:
    name: nginx_demo

services:
  api:
    image: api
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - 3333:3333
    networks:
      - nginx_demo
Now let's test the setup using the following command
docker-compose up -d
This should give us a running Docker container listening on port 3333. We can test it using the following command
curl http://localhost:3333
Or we can open the browser and navigate to http://localhost:3333 and we should see the following response
{
  "message": "Hello from /",
  "host": "localhost",
  "os": "..."
}
We can see that everything is working as expected. Now we want to add the nginx service to our docker-compose.yml file, but before that, let's bring the running container down.
docker-compose down
Add the following nginx service to the docker-compose.yml file.
  mynginx:
    image: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    networks:
      - nginx_demo
If we run the docker compose spin-up command again and open the browser at http://localhost:80, we will see that the request is proxied to the api service. However, notice that we are still binding the application's port directly to the host machine while also routing through the nginx service. This is not optimal and can cause some issues.
The reason goes back to our motivation for this article: stress testing the application. When we explicitly bind a container port to the host machine, only one container can claim that host port, which prevents us from scaling the api service; we want to be able to scale the application and the nginx service independently. We will change the api service in the docker-compose file and remove the port binding between the host machine and the container.
services:
  api:
    ...
    # pay attention to this line
    ports:
      - 3333
    ...
Once we have done this, we can carry on and test the setup again. We will run the following command
docker-compose up -d
Pay attention that if you try to open the browser and navigate to http://localhost:3333, you will get an error. This is because we are no longer binding the port to the host machine. You can confirm the same failure with the following command
curl http://localhost:3333
But if you open the browser and navigate to http://localhost:80, you will see that everything is working as expected.
{
  "message": "Hello from /",
  "host": "localhost",
  "os": "..."
}
scaling
By now we have a working setup with nginx as a reverse proxy and a single instance of our application, which in this example is a very simple REST API. Now we want to scale the api service independently. We will change the docker-compose.yml file as follows.
services:
  api:
    ...
    # pay attention to this line
    scale: 5
    ...
With this change we are telling Docker Compose to spin up 5 instances of the api service. Pay attention to the nuance that there are no fixed port bindings between the host machine and the containers. The API application listens on port 3333 inside each container, but no more than one container could bind that same port on the host machine; this is why we do not bind the port to the host. Instead, Docker will assign random host ports to the api containers, so we cannot access them directly from the host machine unless we use the docker inspect command to find the port numbers. We will not do that; we will rely on the nginx service to receive the requests and proxy them to the api instances for us.
We can test it using the following command
docker-compose up -d
Let's see all the different instances working in real time. Open the browser at http://localhost:80 and you will see that the responses come from different instances of the api service. If you remember, when we set up our api we used the built-in os package to get the hostname of the machine running the app. You can see that the hostname differs between requests, and that it actually displays the ID of each running Docker container.
{
  "message": "Hello from /",
  "host": "localhost",
  // different for each request
  // matching the id of the docker container
  "os": "..."
}
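As a side note, least_conn is only one of the balancing methods nginx offers, and swapping the algorithm is a one-line change in the upstream block. For example, if you ever need sticky sessions, ip_hash pins each client IP to the same upstream server (with no directive at all, nginx defaults to round-robin):

```nginx
upstream myapi {
    # ip_hash pins each client IP to the same upstream server;
    # omit it entirely to get the default round-robin behavior
    ip_hash;
    server api:3333;
}
```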
bonus
We can add a safety layer to our setup so that the nginx service does not try to reach the api service before it exists. We can do this by adding the depends_on directive to the nginx service, as follows. Keep in mind that depends_on only controls startup order; it does not wait for the api service to actually be ready to accept requests.
services:
  mynginx:
    ...
    depends_on:
      - api
    ...
Another thing we need to do to keep ourselves safe, because otherwise debugging becomes a pain in the neck, is to remove the ports directive from the api service entirely. This little change ensures that the api service cannot be reached directly from the host machine at all.
We can also add the restart directive to the api service.
services:
  api:
    ...
    restart: always
    ...
This is the final version of our docker-compose.yml file
version: "3.8"

networks:
  nginx_demo:
    name: nginx_demo

services:
  api:
    image: api
    restart: always
    scale: 3 # this could be any number, as long as your hardware can support it
    build:
      context: ./app
      dockerfile: Dockerfile
    networks:
      - nginx_demo

  mynginx:
    image: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    networks:
      - nginx_demo
    depends_on:
      - api
epilogue
Thank you for reading this article. I hope you enjoyed it and learned something new. If you have any questions or comments, please feel free to reach out to me. I will be more than happy to help you out.