Andrew Healey

Originally published at healeycodes.com

Benchmarking WebSocket Servers with Python!

WebSockets run a large part of the web today. But which servers and frameworks are the best? Well, that depends on how you define best. If you're after raw performance then the following post may be of interest to you. I will be going over some of my design notes for a small benchmark program I wrote in Python with asyncio and websockets.


WebSocket Benchmarker

I wrote my Master's thesis on benchmarking WebSocket servers and frameworks, and coded a benchmark program using Node.js and ws. The program, while effective in reaching my research goal, was not fit for publication. My goals with WebSocket Benchmarker were to learn more about asynchronous Python and to create something that other people could use. Usability here means end-to-end tests, maintainability, and nicely commented code with docstrings:

async def client(state):
    '''A WebSocket client, which sends a message and expects an echo
    `roundtrip` number of times. This client will spawn a copy of itself afterwards,
    so that the requested concurrency-level is continuous.

    Parameters
    ----------
    state : Dictionary
        A Dictionary-like object with the key `clients` --
        the number of clients spawned thus far.

    Returns
    -------
    string
        A statement when the max number of clients has been spawned.'''

Await expression - Suspend the execution of coroutine on an awaitable object.

This program allows someone to fake a number of concurrent clients that connect to an echo server (the WebSocket implementation being benchmarked) and send a series of messages. The roundtrip time for each message is measured, logged, and lightly analyzed at the benchmark's close.
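For context, the host being benchmarked only needs to echo messages back to whoever sent them. A minimal sketch of such a server, assuming the websockets library (with the handler signature it used at the time) and an arbitrary localhost:3000 address, might look like:

import asyncio
import websockets

async def echo(websocket, path):
    # send every message straight back to the client that sent it
    async for message in websocket:
        await websocket.send(message)

loop = asyncio.get_event_loop()
loop.run_until_complete(websockets.serve(echo, 'localhost', 3000))
loop.run_forever()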

Asynchronous design means that we don't need to manually poll things to see whether they're finished. And rather than passing callbacks, as was often done in the past, asyncio lets us await things. It's concurrency, but easy.

await, similarly to yield from, suspends execution of read_data coroutine until db.fetch awaitable completes and returns the result data.
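To make that quote concrete, here is roughly the shape of the coroutine it describes (read_data and db.fetch are the documentation's placeholder names, not part of the benchmarker):

async def read_data(db):
    # execution of read_data is suspended here until the db.fetch
    # awaitable completes, then resumes with the result
    data = await db.fetch('SELECT ...')
    return data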

The 'clients' are coroutine functions that exist in the asyncio event loop. They await opening connections, await sending messages, await receiving messages, and await the closing handshake. After this, they spawn a copy of themselves and await that too! This is how continuous concurrency is achieved: there is always the same number of clients, never more or fewer.
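A minimal sketch of such a client, assuming the websockets library and placeholder values for the host, message, roundtrip count, and maximum number of clients (the real program takes these from its arguments):

import time
import websockets

MAX_CLIENTS = 100  # placeholder for the requested total number of clients

async def client(state, host='ws://localhost:3000', roundtrips=5):
    # connect, then echo a message `roundtrips` times, timing each roundtrip
    async with websockets.connect(host) as ws:
        for _ in range(roundtrips):
            start = time.perf_counter()
            await ws.send('ping')  # suspend until the message is sent
            await ws.recv()        # suspend until the echo comes back
            print(time.perf_counter() - start)  # roundtrip time in seconds
    # the `async with` block has completed the closing handshake by now;
    # spawn (and await) a replacement so the concurrency level stays constant
    state['clients'] += 1
    if state['clients'] < MAX_CLIENTS:
        return await client(state, host, roundtrips)
    return 'max clients spawned'

Back in the program, the requested number of these clients is then created and run together: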

# create a number of client coroutine functions to satisfy args.concurrency
con_clients = [client] * concurrency

# pass them all a 'link' to the same state dictionary
state = {'clients': 0}

# run them concurrently
main = asyncio.gather(*[i(state) for i in con_clients])
loop = asyncio.get_event_loop()
loop.run_until_complete(main)

Once you're inside an asynchronous function, it really is as simple as it sounds:

response = await websocket.recv()

Now control flow will go elsewhere and return when appropriate.
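Here's a toy example (not from the benchmarker) of that hand-off: while one coroutine is suspended on its await, the event loop runs the other.

import asyncio

async def waiter(name, delay):
    await asyncio.sleep(delay)  # control returns to the event loop here
    print(name, 'finished after', delay, 'seconds')

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(waiter('a', 1), waiter('b', 1)))

Both waiters finish after roughly one second in total, not two, because neither blocks the other while it sleeps.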

In the coming days, I hope to use this software to update my personal benchmark rankings of WebSocket servers and frameworks and write another blog post about the results (and the trade-offs that seeking raw performance often leads to)!

Contributions are most welcome ❤️.

healeycodes / websocket-benchmarker

Benchmark a WebSocket server's message throughput ⌛


📻 WebSocket Benchmarker ⌚

Message throughput is how fast a WebSocket server can parse and respond to a message. Some people consider this to be a good indicator of a framework/library/server's performance. This tool measures message throughput under load by mocking concurrent clients.


2019.01.26

Now with 100% more bleeding edge ⚡ asyncio goodness.




Installation

Python 3.6.5+.

pip install -r requirements.txt

Usage

This program expects the host to be an echo server and measures the time between sending a message and receiving the same message back from the host. It performs this for a number of client connections simultaneously and is designed to produce repeatable results.

python bench.py will launch the benchmark and print statistics to stdout. If the log file path does not point to an existing file, one will be created; otherwise, results will be appended to the existing file.

The raw results are in CSV format with each line…
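The exact per-line format is cut off above, but the logging and light-analysis step described in the README can be pictured as something like this sketch (the file path, row layout, and statistics chosen here are illustrative, not the tool's real output):

import csv
import statistics

def log_and_summarize(roundtrips, path='results.csv'):
    # append this run's roundtrip times (in seconds) to the log file,
    # creating it if it doesn't exist yet
    with open(path, 'a', newline='') as f:
        csv.writer(f).writerow(roundtrips)
    # print some light statistics to stdout
    print('min', min(roundtrips))
    print('mean', statistics.mean(roundtrips))
    print('max', max(roundtrips))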





Join 150+ people signed up to my newsletter on programming and personal growth!

I tweet about tech @healeycodes.

Top comments (1)

karthikeyan

Very happy to see this 🎉, but many people have already written this. What would be special is the challenge 😵‍💫🤕 of writing it with a limited library, space, code, and time: in MicroPython.

I am ready to help you, and I need help 😅.

I gathered some information for you,

  • Micropython library: Documentation (latest)
  • Want to run in: Single core processor.
  • Connection: client and server are connected with WiFi.
  • Focus on memory in: ESP8266, ESP32
       ESP8266   ESP32
RAM    320 KB    4 MB
ROM    4 MB      4 MB
Core   Single    Dual

If you are interested in working on this, DM me on my Dev profile or Google Chat: karthikeyan.aas@gmail.com (please chat, don't send mail).