
Discussion on: Is Cooperative Concurrency Here to Stay?

Adrian B.G.

I may be in over my head, but I think Nginx, HAProxy, and other proxies don't implement cooperative concurrency; rather, they use a thread pool with a maximum number of workers set in their config file. They don't have a scheduler or context switching, but I may be confusing things.

The other day I was reading this related article, if anyone is interested: eli.thegreenplace.net/2018/measuri... I stumbled upon it because it includes a Go test as a comparison.

Nested Software • Edited

I believe the core architecture in NGINX is the use of cooperative concurrency with an event loop. Each worker process (one per CPU core) implements the event loop based on nonblocking I/O.

nginx.com/blog/inside-nginx-how-we...

When an NGINX server is active, only the worker processes are busy. Each worker process handles multiple connections in a nonblocking fashion, reducing the number of context switches.
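Roughly, each worker runs something like the toy sketch below: a single-threaded loop over many nonblocking sockets, where the only blocking point is the wait for readiness events. This is just an illustration of the shape of the idea in Go using epoll directly (Linux only, port and buffer sizes chosen arbitrarily); NGINX itself is written in C and is far more sophisticated.

```go
// Toy, Linux-only sketch of a single-threaded event loop over nonblocking
// sockets: one thread, many connections, no thread per client.
// This only illustrates the idea; it is not NGINX's actual design.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Nonblocking listening socket on port 8080 (port chosen arbitrarily).
	lfd, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM|syscall.SOCK_NONBLOCK, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, "socket:", err)
		os.Exit(1)
	}
	if err := syscall.Bind(lfd, &syscall.SockaddrInet4{Port: 8080}); err != nil {
		fmt.Fprintln(os.Stderr, "bind:", err)
		os.Exit(1)
	}
	syscall.Listen(lfd, 128)

	// A single epoll instance drives every connection from this one thread.
	epfd, err := syscall.EpollCreate1(0)
	if err != nil {
		fmt.Fprintln(os.Stderr, "epoll_create1:", err)
		os.Exit(1)
	}
	ev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(lfd)}
	syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, lfd, &ev)

	events := make([]syscall.EpollEvent, 64)
	buf := make([]byte, 4096)
	for {
		// The only place the loop blocks: wait until *some* socket is ready.
		n, err := syscall.EpollWait(epfd, events, -1)
		if err != nil {
			continue
		}
		for i := 0; i < n; i++ {
			fd := int(events[i].Fd)
			if fd == lfd {
				// New connection: accept it and register it, also nonblocking.
				cfd, _, err := syscall.Accept(lfd)
				if err != nil {
					continue
				}
				syscall.SetNonblock(cfd, true)
				cev := syscall.EpollEvent{Events: syscall.EPOLLIN, Fd: int32(cfd)}
				syscall.EpollCtl(epfd, syscall.EPOLL_CTL_ADD, cfd, &cev)
				continue
			}
			// Readable connection: echo the data back, then move on to the next event.
			nr, err := syscall.Read(fd, buf)
			if err != nil || nr == 0 {
				syscall.Close(fd) // closing also removes it from the epoll set
				continue
			}
			syscall.Write(fd, buf[:nr])
		}
	}
}
```

The key point is that one thread services many connections cooperatively; nothing in the loop is allowed to block except that single wait call.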

NGINX does have thread pools to deal with the problem of dependencies that are not non-blocking though.

nginx.com/blog/thread-pools-boost-...

But the asynchronous, event‑driven approach still has a problem. Or, as I like to think of it, an “enemy”. And the name of the enemy is: blocking. Unfortunately, many third‑party modules use blocking calls, and users (and sometimes even the developers of the modules) aren’t aware of the drawbacks. Blocking operations can ruin NGINX performance and must be avoided at all costs.
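To show the pattern the thread pool addresses, here's a minimal Go sketch (the names, pool size, and fake blocking call are made up for illustration): the main loop hands blocking work to a small, fixed pool of workers so the loop itself never stalls. It's close to the fixed pool of workers you mentioned, just used only as an escape hatch for blocking calls rather than for every connection.

```go
// Minimal sketch: keep the "main loop" free by offloading blocking work
// to a small, fixed pool of worker goroutines. Illustrative only.
package main

import (
	"fmt"
	"sync"
	"time"
)

// blockingCall stands in for any slow, blocking dependency (disk, DNS, ...).
func blockingCall(id int) string {
	time.Sleep(100 * time.Millisecond)
	return fmt.Sprintf("result for task %d", id)
}

func main() {
	const numWorkers = 4 // analogous to a fixed pool size set in a config file

	tasks := make(chan int)
	results := make(chan string)

	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range tasks {
				results <- blockingCall(id) // blocking work stays off the main loop
			}
		}()
	}

	// Close results once every worker has drained the task channel.
	go func() {
		wg.Wait()
		close(results)
	}()

	// Dispatch tasks; the dispatcher never executes the slow calls itself.
	go func() {
		for id := 0; id < 10; id++ {
			tasks <- id
		}
		close(tasks)
	}()

	// The "main loop" just collects results as they arrive.
	for r := range results {
		fmt.Println(r)
	}
}
```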

Adrian B.G. • Edited

Oh I see, so the event loop is inside each worker.

I haven't gone that deep; I was only touching on the subject while I was learning the Worker Pool pattern as I study Go and distributed systems.

So thanks a lot for the info!

PS: the URLs are broken (leading ":").

Nested Software

Oops - links fixed!