
Discussion on: Zero dependancy Pub / Sub system with PostgreSQL

parity3

I looked into Postgres and loved the concept, but in practice I was using pgbouncer as a connection pool with aggressive connection sharing, and I think pgbouncer did not support LISTEN/NOTIFY. Even if it had, I think keeping transactions open was required, which meant every waiter would need its own real connection to the database; that ate up resources and created more problems. That's not to say I couldn't have overcome that barrier, but at the time I considered it too risky to try or research further.
Like you, I prefer being able to debug and fix my own applications rather than worry about the configuration and learning curve of other software, particularly when my needs are specific. I just rolled my own notifications, of two types:

  1. When host-distributed delivery was necessary, via a custom HTTP-based server. It's basically an extension of what S3 provides, but I added persistent connection support. You could just as easily do this with HTTP/2 or websockets, but I went old-school: HTTP/1.1 with chunked transfer encoding for message framing (a minimal sketch of the fan-out part follows this list). The server uses explicit scheduling, i.e. there's no preemptive context switching; this can be handled in many languages with a task worker queue pattern, though some frameworks have it built in and seamless. I used Python/Twisted but am a big fan of Python/Trio. This works around lots of potential problems, in exchange for a single point of failure (SPOF). I think a SPOF is really fine unless you're Slack or have a justifiable need for 1 million+ notifications/second (which I can't really wrap my head around); it feels like if you're getting into that realm, you've made a mistake along the way, i.e. Slack should have done some more infrastructure sharding/isolation by that point. Anyway, a single-threaded server can also be achieved with Redis and maybe some server-side Lua, but I did not like the fact that Redis was memory-only and had a lot of things I did not need, which added complexity. The server I wrote easily supports fan-out or 1-to-1 "get-and-claim-next-job" patterns, with the locking done implicitly via existing constructs of the language. Have more than a few types of messages or use cases? Time to run a new process and bind a new port.
  2. For mainly machine-specific needs, I implemented an append-only binary log with auto-rotation. I use this for shipping logs, so multiple inputs and one worker, although you can use marker files to implement multiple workers. It leverages inotifywait and keeps markers on multiple files (one marker file per input per worker). I can also use fallocate to punch holes in the files (making them sparse) once data has been confirmed shipped (a sketch of that part also follows this list). I also ended up writing a subscriber HTTP API for this, and could easily accomplish the same behavior as #1 while supporting native lock-free writes (because there are multiple files). However, only #1 can do back-pressure inline; I did not implement that here for #2 (different use cases).
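
A minimal sketch of the chunked-framing fan-out idea from #1, written with asyncio here rather than Twisted or Trio; the /subscribe and /publish paths, the port, and the framing-per-message choice are just illustrative, not details of my actual server:

```python
# Single-threaded fan-out server: subscribers hold an HTTP/1.1 chunked
# response open, publishers POST a message that is framed as one chunk
# and written to every open subscriber.
import asyncio

subscribers = set()  # open subscriber writers; one event loop, so no locks

def chunk(data: bytes) -> bytes:
    # HTTP/1.1 chunked framing: hex length, CRLF, payload, CRLF
    return f"{len(data):x}\r\n".encode() + data + b"\r\n"

async def handle(reader, writer):
    request_line = await reader.readline()
    if not request_line:
        writer.close()
        return
    method, path, _ = request_line.decode().split(" ", 2)
    content_length = 0
    while True:  # read and discard headers up to the blank line
        line = await reader.readline()
        if line in (b"\r\n", b"\n", b""):
            break
        if line.lower().startswith(b"content-length:"):
            content_length = int(line.split(b":", 1)[1])
    if method == "GET" and path == "/subscribe":
        writer.write(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Type: text/plain\r\n"
                     b"Transfer-Encoding: chunked\r\n\r\n")
        await writer.drain()
        subscribers.add(writer)
        return  # keep the connection open; cleanup happens when a write fails
    if method == "POST" and path == "/publish":
        body = await reader.readexactly(content_length)
        for sub in list(subscribers):
            try:
                sub.write(chunk(body + b"\n"))
                await sub.drain()
            except (ConnectionError, OSError):
                subscribers.discard(sub)
        writer.write(b"HTTP/1.1 204 No Content\r\n\r\n")
        await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```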

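And a rough sketch of the marker-file plus hole-punching side of #2, shelling out to inotifywait (inotify-tools) and fallocate (util-linux); the file paths and the shipping step are placeholders, and the real thing also handles rotation and multiple inputs:

```python
#!/usr/bin/env python3
# Follow an append-only log, remember the shipped offset in a marker file,
# and punch holes over already-shipped bytes so the log file stays sparse.
# Assumes Linux with util-linux (fallocate) and inotify-tools (inotifywait).
import os
import subprocess

LOG = "/var/spool/applog/current.bin"       # placeholder paths
MARKER = "/var/spool/applog/current.marker"

def read_marker() -> int:
    try:
        with open(MARKER) as f:
            return int(f.read().strip() or 0)
    except FileNotFoundError:
        return 0

def write_marker(offset: int) -> None:
    tmp = MARKER + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(offset))
    os.replace(tmp, MARKER)                 # atomic marker update

def ship(data: bytes) -> None:
    pass                                    # placeholder: send bytes somewhere durable

def punch_hole(offset: int, length: int) -> None:
    # Make shipped bytes sparse without changing the file size.
    subprocess.run(
        ["fallocate", "--punch-hole", "--keep-size",
         "--offset", str(offset), "--length", str(length), LOG],
        check=True,
    )

while True:
    start = read_marker()
    size = os.path.getsize(LOG)
    if size > start:
        with open(LOG, "rb") as f:
            f.seek(start)
            data = f.read(size - start)
        ship(data)
        write_marker(size)
        punch_hole(start, len(data))
    # Block until the log is appended to again.
    subprocess.run(["inotifywait", "-qq", "-e", "modify", LOG], check=True)
```
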
Some things to keep in mind:

  1. This is pure opinion, but RabbitMQ (and I started with RabbitMQ) tries really hard to solve problems it should not have tried to solve. I firmly believe that retries should be handled by the client and that the messaging itself should just be a connection with framing support. Retries, in my use cases, have all been extremely tied to the business logic of the application and should have resided there to begin with, instead of being handled by messing around with metadata and queue routing configuration/monitoring.
  2. Notifications should not store messages with input or output. They should only be used to wake up waiters; that's it. When workers wake up, they should do enough work (on average) to make the wake-up worthwhile. There is a cost to everything, so try not to wake up more often than you need to in order to accomplish things. Treat notifications like communicating with people: sometimes they drop out and you need to handle that. This is also ZeroMQ's philosophy to an extent, although I don't have any experience with using it. (A small Postgres-flavored sketch of this pattern follows this list.)
  3. Bundling in job context is a bonus that I won't get into technically, but I've found it incredibly beneficial. If you have a big task that consists of chunks you want to spread to workers, followed by a wrap-up (i.e. map/reduce), keep the context "open" for all worker participants until the end of the input chunks is reached, then close all the open contexts. In other words, prevent repeating worker task setup boilerplate, cache the setup, keep the context/files as local as possible, and then workers can focus on the real work of each chunk.
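
To tie #2 back to the Postgres approach in the article: a bare-bones version of "the notification only wakes the waiter" could look like this with psycopg2. The jobs table, channel name, and DSN are made up for illustration; the point is that the real work is always pulled from the table, so a dropped notification just means waiting for the next one (or the timeout).

```python
# Worker that LISTENs for a wake-up, then claims work from a jobs table.
# Assumed schema (illustrative): jobs(id, payload, done boolean).
import select
import psycopg2

listen_conn = psycopg2.connect("dbname=app")   # placeholder DSN
listen_conn.autocommit = True                  # LISTEN without an open transaction
with listen_conn.cursor() as cur:
    cur.execute("LISTEN jobs_ready;")          # channel name is illustrative

def claim_and_run_jobs():
    work = psycopg2.connect("dbname=app")
    with work, work.cursor() as cur:
        while True:
            # SKIP LOCKED lets several workers claim different rows safely.
            cur.execute(
                "SELECT id, payload FROM jobs "
                "WHERE done = false "
                "ORDER BY id "
                "FOR UPDATE SKIP LOCKED LIMIT 1"
            )
            row = cur.fetchone()
            if row is None:
                break
            job_id, payload = row
            print("processing", job_id, payload)   # business logic goes here
            cur.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
    work.close()

while True:
    # Sleep until a NOTIFY arrives or the timeout fires; the timeout doubles
    # as a safety net for missed notifications.
    if select.select([listen_conn], [], [], 60) != ([], [], []):
        listen_conn.poll()
        listen_conn.notifies.clear()   # drain; the contents don't matter
    claim_and_run_jobs()
```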