Introduction
Flask-SocketIO is a Python library that enables bi-directional communication between a Flask-based server and Socket.IO clients. Starting with version 2.0, it supports multiple server instances behind a load balancer that spreads client connections among them, allowing the application to scale to very large numbers of concurrent clients.
This article lays out how to deploy a Flask-SocketIO application with Docker, using Nginx for load balancing and Redis as a message queue. Nginx distributes client requests across the Flask server instances. Since each instance is connected to only a subset of the clients, a Redis message queue is used to coordinate broadcasting. The instances will be deployed as Docker containers.
The following are steps to deploy the application:
- SocketIO Server Event
- Redis Configuration
- Docker Configuration
- Nginx Configuration
- Deployment Demo
- Application Demo
Assumptions
Before diving in, this article assumes readers have a basic understanding of Docker, Docker Compose, Redis (especially its Pub/Sub interface), Flask, and Nginx.
SocketIO Server Event
@socketio.on("send-message")
def send_message(data):
    # Normalize the sender name from the incoming payload
    sender = data.get("username")
    if sender is not None:
        sender = sender.upper()
    text_message = data.get("text_message")
    message = {
        "sender": sender,
        "text_message": text_message,
    }
    # Broadcast the message to all connected clients
    socketio.emit("receive-message", message)
In the snippet above, the Flask-SocketIO server listens for the send-message event. When a client emits send-message, the server receives the payload, processes it, and then emits a receive-message event, which is delivered to all clients listening for that event.
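The payload handling in this handler can be factored into a pure function, which makes it easy to unit test without a running server. A minimal sketch (build_message is a hypothetical helper name, not part of the original code):

```python
def build_message(data: dict) -> dict:
    """Build the broadcast payload from an incoming send-message payload."""
    sender = data.get("username")
    if sender is not None:
        sender = sender.upper()  # normalize the sender name, as in the handler
    return {
        "sender": sender,
        "text_message": data.get("text_message"),
    }

# The event handler would then reduce to:
# @socketio.on("send-message")
# def send_message(data):
#     socketio.emit("receive-message", build_message(data))
```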
Redis Configuration
This component uses Redis Pub/Sub, which has publishers (senders) and subscribers (receivers). Messages are published to channels without knowledge of which subscribers, if any, will receive them. Subscribers express interest in one or more channels and receive only messages published to those channels, without knowledge of the publishers. Flask-SocketIO supports this Redis feature out of the box: the following snippet passes the message_queue argument to the SocketIO constructor so the server connects to the Redis message queue. The argument's value is the Redis server URL.
socketio.init_app(app, cors_allowed_origins=["http://localhost:6001"], message_queue="redis://flask-socketio-load-balancing-redis:6379")
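To make the Pub/Sub semantics described above concrete, here is a minimal in-process sketch of the pattern (this is an illustration only, not Redis itself and not Flask-SocketIO's implementation):

```python
from collections import defaultdict
from typing import Callable

class PubSub:
    """Minimal in-process Pub/Sub: publishers and subscribers share only a channel name."""

    def __init__(self):
        # channel name -> list of subscriber callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, channel: str, callback: Callable[[str], None]) -> None:
        """Register interest in a channel; the subscriber knows nothing of publishers."""
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, message: str) -> int:
        """Deliver to every subscriber of the channel; return the delivery count,
        mirroring the reply of Redis PUBLISH."""
        for callback in self._subscribers[channel]:
            callback(message)
        return len(self._subscribers[channel])

received = []
bus = PubSub()
bus.subscribe("chat", received.append)
bus.publish("chat", "hello")
print(received)  # ['hello']
```

In the real deployment, each Flask-SocketIO instance subscribes to a Redis channel, so an event emitted on one instance is republished to all the others.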
The Flask-SocketIO documentation recommends applying monkey patching at the top of the main script, above all other imports. Since this application uses gevent, we apply monkey patching as follows:
from gevent import monkey
monkey.patch_all()
Docker Configuration
Dockerfile
# Slim Python base image
FROM python:3.10-slim-buster
WORKDIR /app
# Install dependencies first to take advantage of Docker layer caching
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# Serve the app with Gunicorn using a gevent WebSocket worker
ENTRYPOINT gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 app:app -b 0.0.0.0:5003
This Dockerfile runs the application on a Gunicorn server with a single gevent WebSocket worker per container.
Docker Compose
version: '3'
services:
  flask-socketio-load-balancing-redis:
    image: redis:latest
    container_name: flask-socketio-load-balancing-redis
    restart: always
    ports:
      - "6379"
  api:
    build:
      context: .
    depends_on:
      - flask-socketio-load-balancing-redis
    ports:
      - "5003"
  flask-socketio-load-balancing-nginx:
    image: nginx:latest
    container_name: flask-socketio-load-balancing-nginx
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - api
    ports:
      - "5003:5003"
Nginx Configuration
server {
    listen 5003;
    location / {
        proxy_pass http://backend_instances;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Required for WebSocket upgrade requests
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
    }
}

upstream backend_instances {
    server api:5003;
}
This configuration enables the Nginx load balancer to forward client requests to our multiple Flask server instances. The upstream group defines the server addresses Nginx forwards requests to. In the snippet above, we defined only one server, api:5003 (the Flask API), which is our Docker Compose api service name. Since Nginx and the Flask API are connected to the same Docker Compose network, Nginx can communicate with the Flask API directly using its service name and container-internal port (5003). Nginx can forward client requests to different API instances using different load-balancing algorithms. Docker Compose can run multiple instances (containers) of one service using the flag --scale [SERVICE_NAME]=[NUMBER_OF_INSTANCES]. Here we will run our containers as follows:
docker-compose up -d --force-recreate --build --scale api=3 --remove-orphans
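One caveat worth noting: the Flask-SocketIO documentation states that when multiple servers are used, the load balancer must provide sticky sessions, because all HTTP requests of a long-polling session must reach the same instance. With Nginx this can be approximated with the ip_hash directive, which pins each client IP to one backend. A sketch of the modified upstream block (actual behavior also depends on how Nginx resolves the api service name across the scaled containers):

```
upstream backend_instances {
    ip_hash;  # route each client IP to the same backend instance
    server api:5003;
}
```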
Docker Containers
After running our containers as above, we will have five containers (a Redis server, three Flask API instances, and an Nginx server), as shown in the image below:
Application Demo
I have presented the application at Flask Conn before; the recording is available on YouTube:
GitHub (Flask Server):
Conclusion
This article showed how to deploy multiple instances of a Flask-SocketIO server using Docker. The instances run as Docker containers, Nginx forwards client requests to them, and Redis broadcasts server events among all instances.