
Part 2/3: Deploy Scalable NestJs Chat App to Kubernetes

In our previous blog post, we explored how to write a scalable NestJS chat app using a distributed architecture and WebSockets for real-time communication. Building on that, we will now dive deeper into improving the app's efficiency and deploying it on Kubernetes using Minikube. If you're eager to check out the source code, you can find it in the GitHub repository at https://github.com/zenstok/nestjs-scalable-chat-app-example (make sure you check out the improved-version+deploy-on-kubernetes branch). Additionally, if you find this post useful, let me know in the comments and I'll create another tutorial focused on deploying a production app with Google Kubernetes Engine (GKE). That tutorial would cover essential features like configuring a static IP, associating a domain name, obtaining a TLS certificate for secure communication, and setting up a load balancer for distributing traffic between instances.

Improving the Previous Model

In the previous post, we stored an array of connected clients in each instance and triggered events to notify all instances about new messages. In that paradigm, messages were broadcast to all instances regardless of whether they held WebSocket clients that needed to receive them. To enhance this model and optimize the broadcasting process, we can leverage Redis. Here's how we can achieve this improvement:

  1. Consolidating Redis Channels:
    Instead of having three channels for different types of messages (send to all, send to many, send to one), we will use a single global channel for broadcasting global messages (send to all). Additionally, we will dynamically create user-specific channels in the format ws_socket_client/{USER_ID}. Each instance subscribes only to the channels of the users it currently holds WebSocket connections for. This optimizes the broadcasting process and ensures that messages are only sent to the instances that actually hold the corresponding clients.

  2. Implementation Changes:
    To implement this change, we need to listen for WebSocket connect and disconnect events and subscribe or unsubscribe the Redis subscriber client for each user's channel. Here are the code changes (a sketch of the publishing side follows them):

  • In the constructor, we now subscribe only to the channel for sending messages to all clients:
this.subscriberRedis.subscribe(this.sendWsMessageToAllClientsRedisChannel);
  • Create a new private method to generate the user-specific channel name:
private getClientIdRedisChannel(userId: string) {
  return `${this.wsSocketClientRedisChannel}/${userId}`;
}
  • Subscribe to the user-specific channel in the addConnection method:
this.subscriberRedis.subscribe(this.getClientIdRedisChannel(userId));
  • Unsubscribe from the user-specific channel in the removeConnection method:
this.subscriberRedis.unsubscribe(this.getClientIdRedisChannel(client.userId));
  • Update the subscriber client to listen to the global channel for sending messages to all clients, as well as the user-specific channels:
this.subscriberRedis.on('message', (channel, message) => {
  const data = JSON.parse(message) as RedisPubSubMessage;
  // Ignore messages this instance published itself.
  if (data.from !== this.redisClientId) {
    switch (true) {
      case channel === this.sendWsMessageToAllClientsRedisChannel:
        this.sendMessageToAllClients(data.message, false);
        break;
      case channel.startsWith(this.wsSocketClientRedisChannel):
        this.sendMessageToClient(
          (data as RedisPubSubMessageWithClientId).clientId,
          data.message,
          false,
        );
        break;
      // no default
    }
  }
});
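For completeness, here is a minimal sketch of what the publishing side could look like with the consolidated channels. The publisherRedis property and the method name are illustrative assumptions (a Redis connection in subscribe mode cannot issue PUBLISH, hence a separate client); getClientIdRedisChannel, redisClientId, and the message fields come from the code above, and I am assuming clientId carries the target user's id.

// Illustrative sketch: publish a message for one user on their dedicated channel.
// Only instances currently holding that user's WebSocket connection are
// subscribed to it, so no other instance receives the publish.
private publishMessageToClient(userId: string, message: unknown) {
  const payload: RedisPubSubMessageWithClientId = {
    from: this.redisClientId, // lets each instance ignore its own publishes
    clientId: userId, // assumption: the user id identifies the target client
    message,
  };
  this.publisherRedis.publish(
    this.getClientIdRedisChannel(userId),
    JSON.stringify(payload),
  );
}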

You will also notice that on this branch I updated the chat gateway class; it now subscribes to three types of messages:

enum SubscribeMessageType {
  SendChatMessageToOneParticipant = 'send_chat_message_to_one_participant',
  SendChatMessageToManyParticipants = 'send_chat_message_to_many_participants',
  SendChatMessageToAllParticipant = 'send_chat_message_to_all_participants',
}

To test it in the browser, you can follow the previous tutorial, where I discussed how to set up the database and how to obtain bearer tokens for authenticated users.
Also, the content of the WebSocket message should follow the format of our DTO classes so it can be received correctly:

export class SendMessageToAllDto {
  @IsString()
  @Length(1, 5000)
  message: string;
}
export class SendMessageToManyDto extends SendMessageToAllDto {
  @IsArray()
  @IsUUID(undefined, { each: true })
  participantIds: string[];
}
export class SendMessageToOneDto extends SendMessageToAllDto {
  @IsUUID()
  participantId: string;
}
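To make the wiring clearer, here is a minimal sketch of how a gateway handler could tie one of these message types to its DTO. The gateway class shown here, the injected WsClientService, and the way sendMessageToClient is called are illustrative assumptions rather than the repository's exact code; SubscribeMessageType and SendMessageToOneDto are the enum and DTO shown above.

import { MessageBody, SubscribeMessage, WebSocketGateway } from '@nestjs/websockets';
// SubscribeMessageType, SendMessageToOneDto and WsClientService (a hypothetical
// name for the service wrapping the Redis pub/sub logic above) are imported
// from their respective files.

@WebSocketGateway()
export class ChatGateway {
  constructor(private readonly wsClientService: WsClientService) {}

  @SubscribeMessage(SubscribeMessageType.SendChatMessageToOneParticipant)
  handleSendToOne(@MessageBody() dto: SendMessageToOneDto) {
    // Deliver to the target user; if their WebSocket connection lives on
    // another instance, it is reached through the user-specific Redis channel.
    this.wsClientService.sendMessageToClient(dto.participantId, dto.message);
  }
}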

For example, if you want to send a separate message to each user, you can use something like this in the browser:

users.forEach((user) =>
  ws.send(JSON.stringify({
    event: 'send_chat_message_to_one_participant',
    data: { message: 'test', participantId: user.id },
  })),
);
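Similarly, assuming the same ws and users objects as above, a single message can be sent to many participants at once (an illustration based on the SendMessageToManyDto shape, not a snippet from the repository):

ws.send(JSON.stringify({
  event: 'send_chat_message_to_many_participants',
  data: { message: 'test', participantIds: users.map((user) => user.id) },
}));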

To test this improvement, connect and disconnect multiple clients in the browser and observe the behavior in the Docker console.

Deploying the Stack on Minikube

To deploy the chat app on Minikube for local development and testing, follow these steps:

  1. Install Minikube by referring to the official tutorial at
    https://kubernetes.io/docs/tasks/tools/.

  2. Once Minikube is installed, start the Minikube cluster using the command:

    minikube start
    
  3. Inside the k8s directory, you'll find the deployment files for the backend, Redis, and PostgreSQL, along with the corresponding services and volumes. Environment variables are set within the backend deployment to ensure connectivity with Redis and PostgreSQL. The deployment is configured with five replicas, creating a total of five backend pods. To test the deployment locally, the backend service is of type NodePort, which allows it to be exposed to your local machine.

  4. To deploy the app, navigate to the project directory and run the following command:

    kubectl apply -f k8s
    

    This will set up the entire app along with its dependencies. Please note that we are using the zenstok/scalable-chat-app-example-backend image, which is built from the improved-version+deploy-on-kubernetes branch available at https://github.com/zenstok/nestjs-scalable-chat-app-example.

  5. Initialize the database inside Kubernetes:

    Find a backend pod name:

    kubectl get pods
    


    Access a shell inside a backend pod (replace the pod name below with one from your own output):

    kubectl exec -it backend-deployment-7ddd9c4769-fxqxg -- sh
    

    Run the db init command:

    yarn db-init
    

    If you want to port-forward the Kubernetes database to inspect it, you can do so by running:

    kubectl port-forward deployment/postgres-deployment 5432:5432
    
  6. To access the deployed service locally, expose it using the command:

    minikube service backend-service
    
  7. Interact with the deployed app locally and verify its behavior.

    The command starts a tunnel for backend-service, which can be accessed locally at an address like this (beware that the port might be different on your machine):

    http://127.0.0.1:61939
    

    Now you can interact with the app exactly as you did with the Docker-based setup.
    For example, to access the Swagger docs, open this in the browser:

    http://127.0.0.1:61939/docs
    

    Also, if you want to inspect the logs from all backend pods, you can do so by running:

    kubectl logs -l app=backend -f
    
  8. Once you have finished testing, clean up the Minikube cluster by deleting it:

    minikube delete
    

In this blog post, we explored how to enhance the scalability of a NestJS chat app by leveraging Redis and implementing user-specific channels. By optimizing the broadcasting process, we reduced networking costs and improved overall efficiency. Additionally, we learned how to deploy the app on Kubernetes using Minikube for local development and testing.

Your feedback and comments are highly appreciated, so please share your thoughts and experiences. Stay tuned for new posts!
