This blog is the second part of a multi-part series.
- Simple Go Chat Application in under 100 lines of code - Part 1
- Simple Go Chat Application in under 100 lines of code - Part 2
In the first part of our series, we explored the implementation of a basic broadcast chat application in Golang, leveraging WebSockets for real-time communication. Now, in Part 2, we delve deeper into enhancing the scalability of our chat application. We'll address a critical limitation of our initial approach and introduce Redis pub-sub as a solution. Join us as we optimize our chat app for scalability and robustness. Let's dive in!
To understand the scalability issue, let's first run multiple instances of our web application.
Let's begin by creating a Dockerfile for our application.
FROM golang:latest
WORKDIR /go/src/app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o go_chat .
EXPOSE 8080
CMD ["./go_chat"]
The above Dockerfile first downloads the dependencies using go mod download and then compiles our main.go into a binary named go_chat, which is then run inside the container.
We will be running an nginx container that will balance the load across multiple instances. Let's create a docker-compose.yml file with the following content.
version: "3"
services:
  app:
    build: .
    deploy:
      replicas: 3
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
The above docker-compose file will create three replicas of our web application and an nginx server that will be accessible through http://localhost. It also creates a volume that maps a local nginx.conf file to /etc/nginx/nginx.conf inside the nginx container.
Let's create the nginx.conf file now.
events {
}

http {
    upstream go_chat {
        server app:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://go_chat;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            # WebSocket upgrades need HTTP/1.1 between nginx and the upstream
            proxy_http_version 1.1;
            proxy_set_header Upgrade websocket;
            proxy_set_header Connection Upgrade;
        }
    }
}
This configuration makes sure that any request reaching localhost:80 is proxied to our web instances. It also sets the HTTP version and headers required for WebSocket upgrades to work as expected.
We also need to update the URL used to create the WebSocket connection in index.html from localhost:8080/ws to localhost:80/ws, since all requests will now go through the nginx server.
Time for some experimentation!! Let’s run the docker containers.
docker-compose build
docker-compose up -d
Now, let's test our chat app by opening localhost in two browser windows side by side.
Why isn’t our app working? Let's look at the server logs generated by individual containers.
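One way to tail the logs of all three replicas at once (assuming the default compose project naming based on the directory name):
docker-compose logs app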
From the logs, we can see that the two browsers' WebSocket connections are handled by different server instances, so messages broadcast by the go_chat-app-1 instance never reach the clients connected to the go_chat-app-2 instance, as each server only broadcasts to the connections it handles.
To solve this problem, we will be using the Pub/Sub feature of Redis. Besides being a database, Redis can also act as a message broker, and we will be leveraging that in our web application. Let's install the Redis client library written in Go using go get:
go get github.com/redis/go-redis/v9
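Before wiring the client into the app, a quick throwaway program can confirm that the library reaches Redis. This is just a minimal sketch, assuming a Redis server is listening on localhost:6379:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/redis/go-redis/v9"
)

func main() {
    // Create a client pointed at the assumed local Redis instance.
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // PING should come back with "PONG" if the connection works.
    pong, err := rdb.Ping(context.Background()).Result()
    if err != nil {
        log.Fatalf("Unable to reach redis: %v", err)
    }
    fmt.Println(pong)
}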
Let's create a Redis client in the main function and pass it to the serveWs function.
func main() {
    redisHost := os.Getenv("REDIS_HOST")
    redisPort := os.Getenv("REDIS_PORT")
    rdb := redis.NewClient(&redis.Options{
        Addr: fmt.Sprintf("%s:%s", redisHost, redisPort),
    })
    // Rest of the main function
}
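When the server runs outside Docker, REDIS_HOST and REDIS_PORT will usually be unset and Addr would end up as ":". If you want a local fallback, a small helper like the hypothetical getEnvOrDefault below (not part of the original code) can be dropped into main.go, which already imports os:

// getEnvOrDefault returns the environment variable if it is set,
// otherwise the provided fallback value.
func getEnvOrDefault(key, fallback string) string {
    if value := os.Getenv(key); value != "" {
        return value
    }
    return fallback
}

// Inside main:
// redisHost := getEnvOrDefault("REDIS_HOST", "localhost")
// redisPort := getEnvOrDefault("REDIS_PORT", "6379")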
Let's rewrite the serveWs function like this:
func serveWs(rdb *redis.Client) func(c *gin.Context) {
    return func(c *gin.Context) {
        upgrader := websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }}
        conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
        if err != nil {
            log.Printf("Error in upgrading web socket. Error: %v", err)
            return
        }
        go handleClient(conn, rdb)
    }
}
Let's see what has changed:
- serveWs is now a function that accepts a Redis client and returns a gin handler (the registration is shown below).
- The Redis client is passed to the handleClient function.
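Because serveWs now returns the handler instead of being one, wiring it into gin stays a one-liner; this is the same registration that appears in the final main.go further down:

router := gin.Default()
router.StaticFile("/", "./static/index.html")
// serveWs(rdb) returns the gin handler that upgrades /ws requests to WebSockets.
router.GET("/ws", serveWs(rdb))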
Now let's modify the handleClient function:
const channel = "chat"

func handleClient(c *websocket.Conn, rdb *redis.Client) {
    defer func() {
        delete(clients, c)
        log.Println("Closing Websocket")
        c.Close()
    }()
    clients[c] = struct{}{}
    for {
        var msg Message
        err := c.ReadJSON(&msg)
        if err != nil {
            log.Printf("Error in reading json message. Error : %v", err)
            return
        }
        msgBytes, err := json.Marshal(msg)
        if err != nil {
            fmt.Println("Err marshaling", err.Error())
            return
        }
        err = rdb.Publish(context.Background(), channel, string(msgBytes)).Err()
        if err != nil {
            fmt.Println("Error publishing:", err.Error())
        }
    }
}
The handleClient function no longer calls broadcast to send messages to other connections. Instead, every message received from a connection is published to a Redis channel called chat using the rdb.Publish method.
But how will we consume these messages from Redis? To do that, let's start a goroutine inside the main function.
func main() {
    redisHost := os.Getenv("REDIS_HOST")
    redisPort := os.Getenv("REDIS_PORT")
    rdb := redis.NewClient(&redis.Options{
        Addr: fmt.Sprintf("%s:%s", redisHost, redisPort),
    })
    go func() {
        ctx := context.Background()
        sub := rdb.Subscribe(ctx, channel)
        for {
            message, err := sub.ReceiveMessage(ctx)
            if err != nil {
                fmt.Println("Error receiving message", err)
            }
            if message != nil {
                broadcast([]byte(message.Payload))
            }
        }
    }()
    router := gin.Default()
    // Rest of the main function
}
The goroutine subscribes to the same Redis channel chat and reads every message published to it in an infinite for loop. Each message is then broadcast to all WebSocket clients using the broadcast function.
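As a side note, go-redis can also expose the subscription as a Go channel via PubSub.Channel(). Here is a sketch of the same consumer written that way (functionally equivalent, just a different style):

go func() {
    ctx := context.Background()
    sub := rdb.Subscribe(ctx, channel)
    defer sub.Close()

    // sub.Channel() yields every message published to the subscribed channel,
    // so ranging over it replaces the explicit ReceiveMessage loop.
    for message := range sub.Channel() {
        broadcast([]byte(message.Payload))
    }
}()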
This is how the final main.go file will look:
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
    "os"

    "github.com/gin-gonic/gin"
    "github.com/gorilla/websocket"
    "github.com/redis/go-redis/v9"
)

const channel = "chat"

func main() {
    redisHost := os.Getenv("REDIS_HOST")
    redisPort := os.Getenv("REDIS_PORT")
    rdb := redis.NewClient(&redis.Options{
        Addr: fmt.Sprintf("%s:%s", redisHost, redisPort),
    })
    go func() {
        ctx := context.Background()
        sub := rdb.Subscribe(ctx, channel)
        for {
            message, err := sub.ReceiveMessage(ctx)
            if err != nil {
                fmt.Println("Error receiving message", err)
            }
            if message != nil {
                broadcast([]byte(message.Payload))
            }
        }
    }()
    router := gin.Default()
    router.StaticFile("/", "./static/index.html")
    router.GET("/ws", serveWs(rdb))
    err := router.Run()
    if err != nil {
        log.Fatalf("Unable to start server. Error %v", err)
    }
    log.Println("Server started successfully.")
}

func serveWs(rdb *redis.Client) func(c *gin.Context) {
    return func(c *gin.Context) {
        upgrader := websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }}
        conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
        if err != nil {
            log.Printf("Error in upgrading web socket. Error: %v", err)
            return
        }
        go handleClient(conn, rdb)
    }
}

var clients = make(map[*websocket.Conn]struct{})

type Message struct {
    From    string `json:"from"`
    Message string `json:"message"`
}

func broadcast(msgBytes []byte) {
    for conn := range clients {
        conn.WriteMessage(websocket.TextMessage, msgBytes)
    }
}

func handleClient(c *websocket.Conn, rdb *redis.Client) {
    defer func() {
        delete(clients, c)
        log.Println("Closing Websocket")
        c.Close()
    }()
    clients[c] = struct{}{}
    for {
        var msg Message
        err := c.ReadJSON(&msg)
        if err != nil {
            log.Printf("Error in reading json message. Error : %v", err)
            return
        }
        msgBytes, err := json.Marshal(msg)
        if err != nil {
            fmt.Println("Err marshaling", err.Error())
            return
        }
        err = rdb.Publish(context.Background(), channel, string(msgBytes)).Err()
        if err != nil {
            fmt.Println("Error publishing:", err.Error())
        }
    }
}
Now, let’s understand how this solution works.
Each app server maintains its own set of connections as before. However, when a message is sent by any client connection, it is published to the Redis channel. This message is then consumed by every app server, and each of them broadcasts it to the connections it maintains in its clients map. Thus, every WebSocket client receives the message, regardless of the server it is connected to.
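To see the fan-out in isolation, here is a small standalone sketch (not part of the chat app) that simulates two app instances subscribing to the chat channel; it assumes a Redis server is reachable on localhost:6379. A single publish is delivered to both subscribers:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()

    // Two clients stand in for two independent app servers.
    instanceA := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    instanceB := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    subA := instanceA.Subscribe(ctx, "chat")
    subB := instanceB.Subscribe(ctx, "chat")
    defer subA.Close()
    defer subB.Close()

    // Wait for the subscription confirmations before publishing, otherwise
    // the message could be published before anyone is listening.
    if _, err := subA.Receive(ctx); err != nil {
        log.Fatal(err)
    }
    if _, err := subB.Receive(ctx); err != nil {
        log.Fatal(err)
    }

    // A publish from any client fans out to every subscriber.
    publisher := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    if err := publisher.Publish(ctx, "chat", `{"from":"demo","message":"hello"}`).Err(); err != nil {
        log.Fatal(err)
    }

    msgA, err := subA.ReceiveMessage(ctx)
    if err != nil {
        log.Fatal(err)
    }
    msgB, err := subB.ReceiveMessage(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("instance A got:", msgA.Payload)
    fmt.Println("instance B got:", msgB.Payload)
}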
Now, let's make changes in our docker-compose file to spin up a redis container.
version: "3"
services:
  app:
    build: .
    deploy:
      replicas: 3
    environment:
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    depends_on:
      - redis
  nginx:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
  redis:
    image: redis
Let’s test our solution by once again opening two browser windows side by side and sending messages.
In conclusion, we've successfully addressed the scalability limitations of our chat application by integrating Redis pub-sub. By leveraging Docker for containerization and nginx for load balancing, we've achieved a robust and scalable architecture. With Redis acting as a message broker, our application now seamlessly distributes messages across multiple instances, ensuring real-time communication regardless of server distribution. This implementation not only enhances scalability but also lays the foundation for further optimizations and feature enhancements. Cheers to building resilient, real-time applications with Golang!