
Amr Elmohamady


How to deploy a Node.js Socket.IO server with load balancing & a reverse proxy using pm2 & Nginx

Imagine that you are building an app with chat rooms that will have thousands of users. How do you think a server could handle this load?

With two concepts:

Reverse Proxy
A reverse proxy server provides an additional level of abstraction and control to ensure the smooth flow of network traffic between clients and servers.

Examples of such web servers are Nginx and Apache.

Load Balancing
A reverse proxy server can act as a “traffic cop,” sitting in front of your backend servers and distributing client requests across a group of servers in a manner that maximizes speed and capacity utilization while ensuring no one server is overloaded, which can degrade performance. If a server goes down, the load balancer redirects traffic to the remaining online servers.


This tutorial assumes that you are able to deploy a normal Node.js app with Nginx.

First, we won't start our app with a plain node index.js. Instead, we'll install pm2 globally so its CLI is available:

npm i -g pm2

pm2 is an advanced process manager and load balancer for production Node.js applications.

We'll run the app in cluster mode (cluster mode allows networked Node.js applications to be scaled across all available CPUs).

So set the start command to be:

pm2 start index.js -i max

-i sets the number of instances; max scales the app across all available CPUs

To stop the app:

pm2 stop index.js

To Inspect Logs:

pm2 logs

To restart the app:

pm2 restart index.js

Normally, the code that starts the server looks like this:

server.listen(8000, () => {
  console.log("listening on *:8000");
});

But for Socket.IO, each Node instance needs to listen on its own port so Nginx can route traffic to a specific instance, so we use something like this:

const port = 8000 + Number(process.env.NODE_APP_INSTANCE);
server.listen(port, () => {
  console.log(`listening on *:${port}`);
});

The NODE_APP_INSTANCE environment variable is set by pm2 to the instance index (0, 1, 2, ...), so with four instances we get localhost:8000, localhost:8001, localhost:8002, and localhost:8003.

That covers the load-balancing part.

Now, let's set up the reverse proxy with Nginx.

In the main Nginx config file:

http {
  server {
    # 80 for http, 443 for https
    listen 80;
    server_name example.com;

    location / {
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $host;

      proxy_pass http://nodes;

      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
    }
  }


  upstream nodes {
    # enable sticky session with either "hash" (uses the complete IP address)
    hash $remote_addr consistent;
    # or "ip_hash" (uses the first three octets of the client IPv4 address, or the entire IPv6 address)
    # ip_hash;
    # or "sticky" (needs commercial subscription)
    # sticky cookie srv_id expires=1h domain=.example.com path=/;

    # the Node instances run on the same machine, so point at localhost
    server localhost:8000;
    server localhost:8001;
    server localhost:8002;
    server localhost:8003;
  }
}

So, let's understand this file line by line:

First, in the server config we listen on the default HTTP port, 80 (443 for HTTPS).

Then, server_name is set to the site's domain name.

Then, at the root location we set a couple of headers:

  • The X-Forwarded-For (XFF) header is a de-facto standard header for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or a load balancer. When traffic is intercepted between clients and servers, server access logs contain the IP address of the proxy or load balancer only. To see the original IP address of the client, the X-Forwarded-For request header is used.

  • The Host header is forwarded so the upstream app knows which host the request was originally addressed to.

We'll come back to proxy_pass in a moment.

  • The HTTP version is set to 1.1, the version that supports the WebSocket Upgrade mechanism.

  • The HTTP Upgrade header indicates a preference or requirement to switch to a different protocol where possible; in the Socket.IO setup we need it so the connection can be upgraded to a WebSocket.

If you don't know how Socket.IO works under the hood, I suggest reading this page from the Socket.IO documentation.

  • The upstream nodes block enables sticky sessions (so Socket.IO can work) and lists the servers our load balancer will use. We then set proxy_pass in the location block to the upstream "nodes" so Nginx can do its reverse proxying.

Then, you will need to use an adapter to share data between instances — a broadcast from one instance must also reach sockets connected to the others — so see the Redis adapter docs (the most recommended adapter with Nginx).

Now, finally, run npm start.


I hope you found this article useful and please share your thoughts below :-)

For more useful articles, don't forget to follow ;-)

Twitter: @Amr__Elmohamady

LinkedIn: Amr Elmohamady
