Amr Elmohamady

Posted on • Updated on • Originally published at amrelmohamady.hashnode.dev

How to deploy a Node.js Socket.IO server with load balancing & a reverse proxy using PM2 & Nginx

Imagine that you are building an app with chat rooms that will have thousands of users. How do you think a server could handle this load?

With two concepts:

Reverse Proxy
A reverse proxy server provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Examples of web servers that can act as reverse proxies are Nginx and Apache.

Load Balancing
A reverse proxy server can act as a “traffic cop,” sitting in front of your backend servers and distributing client requests across a group of servers in a manner that maximizes speed and capacity utilization while ensuring no one server is overloaded, which can degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers.


Node.js is single-threaded and runs on a single core by default, so it provides a native cluster module to run multiple instances across all CPU cores and load-balance requests among those instances.

We have two options: use the cluster module directly in the application code, or use a process manager like PM2.
PM2 is more suitable for production.

First, we'll install the pm2 package globally:

npm i pm2 -g

We'll run the app in cluster mode, so the start command is:

pm2 start index.js -i max

-i sets the number of instances; max scales the app across all available CPU cores.

To stop the app (PM2 names the process after the script file, so here the default name is index):

pm2 stop index

To inspect logs:

pm2 logs

To restart the app:

pm2 restart index

Now we have our app scaled on one server; next we need to deploy it on multiple machines for horizontal scaling. Nginx, acting as a reverse proxy, is responsible for load balancing requests across those servers.

In the main Nginx config file:

http {
  server {
    # 80 for http, 443 for https
    listen 80;
    server_name api.example.com;

    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;

      proxy_pass http://nodes;

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }


  upstream nodes {
    server server1.example.com;
    server server2.example.com;
    server server3.example.com;
  }
}

So, let's understand this file line by line:

First, in the server block we listen on port 80, the default HTTP port (443 for HTTPS).

Then, server_name is set to the site's domain name.

Then, at the root location we set a couple of headers:

  • The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. When traffic is intercepted between clients and servers, server access logs contain the IP address of the proxy or load balancer only. To see the original IP address of the client, the X-Forwarded-For request header is used.

  • The Host header to determine which server the request should be routed to.

We'll come back to proxy_pass when we get to the upstream block.

  • proxy_http_version is set to 1.1, the version that supports WebSockets.

  • The HTTP Upgrade header indicates a preference or requirement to switch to a different version of HTTP or to another protocol, if possible; in a Socket.IO deployment we need it so the connection can be upgraded to a WebSocket.

If you don't know how Socket.IO works under the hood, I suggest you read this page from the Socket.IO documentation.

  • The upstream nodes block defines the group of servers our load balancer will use; we set proxy_pass in the location block to http://nodes so Nginx reverse-proxies requests to that group.
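One caveat the Socket.IO documentation calls out: if HTTP long-polling is enabled (it is by default), every request from a given client must reach the same server, so the upstream needs sticky sessions. A simple option in Nginx is ip_hash, which pins each client IP to one server:

```nginx
  upstream nodes {
    ip_hash; # sticky sessions: the same client IP always hits the same server
    server server1.example.com;
    server server2.example.com;
    server server3.example.com;
  }
```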

Now our load balancer will redirect calls to our servers, and each server will redirect calls to one of its cluster instances. That is fine until USER_A connects to SERVER_1, joins a room called GROUP_A, and sends a message: the message will be broadcast to all users in GROUP_A on SERVER_1, but what about the users in GROUP_A who are connected to SERVER_2?
To solve this, the servers need to communicate, and in our case we need a Pub/Sub message broker. When USER_A connects to SERVER_1 and then sends a message to GROUP_A, SERVER_1 publishes an event telling all servers to broadcast this message to all users in GROUP_A.

Socket.IO supports multiple adapters for this, and the recommended one is the Redis adapter.



I hope you found this article useful and please share your thoughts below :-)

You can also buy me a coffee that would help :)

LinkedIn: Amr Elmohamady

Twitter: @Amr__Elmohamady
