
Ashraf Minhaj

Easiest way to Load Balance with Nginx

Traffic or Single Point of Failure?

So your user base is growing. That's a good thing, but you are worried that your server might not be able to handle that much traffic. Or you can't sleep because of a single point of failure: if that one server somehow gets destroyed or anything goes south, your app is no longer available to users. So you are dreaming of running your application on two servers (or more), with a way to balance the load between them.
Let's learn how to set up nginx as a load balancer in front of a single application.

What we are going to do

We are going to run the same app on two servers. Servers have IP addresses, but we can't give users two IPs/domain names to access the same app, right? The user should not need to know which server is serving the request. A load balancer handles that: it sits between users and servers and distributes traffic so that no single server gets more load than the others. And if one server somehow goes down, the app stays online (because the other server is still working).

We will run an app that displays server-specific information like hostname and IP address. Upon calling it through the load balancer we will see which server is responding: a very simple way to play with and learn load balancing.

So let's do it.

Step 1: Create two servers

Create two Linux servers, actually as many as you like, but for this tutorial let's go with two.
I am using my laptop, which runs Debian, and a Raspberry Pi:

  1. server1 (Debian laptop)
  2. server2 (Raspberry Pi, also running Debian)

See, even though the machines are physically different, nginx will still work with both.

You can pick any kind of server: EC2 instances, droplets, or your local machines (desktop, laptop, etc.).

Step 2: Run the app on both servers

Which app? Well, I have already written and dockerized an app that you can run on any machine. It shows the container hostname, IP address, etc.
First, install Docker on both servers (follow the official docs to install it).
Now SSH into your servers and run -

sudo docker run -d -p 8080:8080 ashraftheminhaj/ip-fetcher
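Before curling, it doesn't hurt to confirm the container actually came up. Docker's `ps` command can filter by the image a container was started from (the image name matches the one above):

```shell
# Confirm the container is running and that port 8080 is mapped on the host
sudo docker ps --filter "ancestor=ashraftheminhaj/ip-fetcher"

# If it is not listed, inspect its logs (replace <container_id> with the real ID)
# sudo docker logs <container_id>
```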

Now if you run -

curl localhost:8080

You should see something like this -

[screenshot: curl response showing the container hostname, local IP, and public IP]

The local IP is your container's IP, and the public IP is, well, your public IP. Don't share it with anyone.

Now run the same app on server2. Curling it should show a different hostname and local IP.

Step 3: Set up Nginx

a. Get the IP address of each server

If you are on EC2 or other cloud servers, you already know the IP addresses. Note them down.

If you are on your local machines, run -

ip addr show

or,

ifconfig

[screenshot: ip addr show output with the interface addresses]

My machine had both Ethernet and Wi-Fi enabled, so I got two IP addresses. In your case it will likely be the wlan0 interface.
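If you don't want to dig through the full interface listing, `hostname -I` (available on Debian-based systems) prints just the host's addresses; piping through awk keeps only the first one:

```shell
# Print all addresses assigned to this host; the first is usually the LAN address
hostname -I

# Keep only the first whitespace-separated address
hostname -I | awk '{print $1}'
```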

b. Configure nginx

Ideally you would install nginx on a third server, but you can also install it on either of the two servers you just configured. I ran the app on the Raspberry Pi and used it as the load balancer as well.

Install nginx using -

sudo apt-get install nginx

Then check if it is running -

sudo systemctl status nginx

or visit http://serverip:80 (you can omit the :80 too) and you should see the nginx default page like this -

[screenshot: nginx default welcome page]
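You can check the same thing from a terminal; `serverip` below is a placeholder for the machine where you installed nginx:

```shell
# Fetch only the response headers; a fresh install should answer 200 OK
curl -sI http://serverip/ | head -n 1
```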

This means nginx is installed and working fine. Now we can add a simple configuration to make it behave like a load balancer.
Run -

sudo nano /etc/nginx/conf.d/load_balancer.conf

A text editor (nano) should open. Paste the following lines, replacing server1_ip and server2_ip with the addresses you noted earlier -

upstream backend {
    server server1_ip:8080;
    server server2_ip:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

So anything that comes to the load-balancer server on port 80 will be forwarded to one of the backend servers on their port 8080. As simple as that.
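By default nginx rotates through the upstream servers round-robin. If one machine is beefier than the other (say, the laptop versus the Raspberry Pi), you can skew the distribution with weights; the numbers below are just illustrative:

```nginx
upstream backend {
    # roughly 3 out of every 4 requests go to server1
    server server1_ip:8080 weight=3;
    server server2_ip:8080 weight=1;
}
```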

Close the file by pressing Ctrl+X, then type y, then Enter. Now we can restart nginx to apply the settings -

sudo systemctl restart nginx
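If the restart fails, nginx can validate the configuration and point at the offending line before you try again:

```shell
# Test the configuration files for syntax errors without restarting
sudo nginx -t
```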

Is it working?

Type the IP of the load-balancer server into your browser and you should see a hostname and IP addresses. Refresh the page and you should see a different hostname and local IP: the load is being distributed.
To test failover, stop one of the servers; you'll see that even then the application stays online. Cool, right?
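You can also watch the rotation from a terminal; `load_balancer_ip` below is a placeholder for your load balancer's address:

```shell
# Hit the load balancer a few times; responses should alternate between the two hostnames
for i in 1 2 3 4; do
    curl -s http://load_balancer_ip/
    echo
done
```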

Conclusion

I hope you have learnt something cool today. Please let me know your feedback or suggestions.
By the way, don't forget to stop the servers/instances after playing, to reduce cost. Happy coding!
