
Sacha Clerc-Renaud


Outbox Pattern: Never lose an event again

Today I want to present a pattern that allowed us to solve a classic problem: updating your database and publishing an event in a consistent manner.

Let's see the context.

Start with the why

While building Spendesk's core banking system, my team faced a problem:
we needed to persist modifications in our database and also publish an event to our event bus.

For example, you want to increase an account's balance and emit an event that describes this change so the accounting part of the system can take it into account.

The problem is that if you update your database and then call your event bus, you're taking the risk that your code crashes between the DB update and the event publish.
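
To make the failure mode concrete, here is a minimal sketch of the naive dual-write approach in TypeScript. The `Db` and `EventBus` interfaces and the `creditAccount` function are illustrative, not taken from our actual codebase:

```typescript
// Illustrative interfaces: any SQL client and event-bus producer would do.
interface Db {
  query(sql: string, params: unknown[]): Promise<void>;
}
interface EventBus {
  publish(topic: string, event: object): Promise<void>;
}

// Naive dual write: update the database, then publish to the event bus.
async function creditAccount(db: Db, bus: EventBus, accountId: string, amount: number): Promise<void> {
  await db.query(
    "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
    [amount, accountId],
  );

  // A crash right here (deploy, OOM, network failure...) means the balance
  // was updated but the event is lost forever.

  await bus.publish("account-events", {
    type: "AccountCredited",
    accountId,
    amount,
  });
}
```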


This was not an option for us: in banking you just can't lose an event. You need consistency. Ordering also matters: you can't swap a debit and a credit event on an account. Events need to be read and consumed in the same order they were processed.

To solve this problem we discovered the Outbox Pattern.

The Outbox Pattern

The principle is pretty simple:

You write your events in a dedicated database table.

Then a Message Relay service reads this table and replicates your events into your event bus.


The benefit of doing this is that you can leverage database transactions to ensure that your state update and your events are persisted together. This way you cannot have one without the other: it's always both or nothing.
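
Here is a minimal sketch of what the write side can look like with node-postgres. The outbox table schema, the `creditAccount` function and the column names are assumptions for illustration, not our production code:

```typescript
import { Pool } from "pg";
import { randomUUID } from "crypto";

const pool = new Pool(); // connection settings come from the PG* env vars

// Assumed outbox table (illustrative schema):
//   CREATE TABLE outbox (
//     id           uuid PRIMARY KEY,
//     aggregate_id text NOT NULL,
//     type         text NOT NULL,
//     payload      jsonb NOT NULL,
//     created_at   timestamptz NOT NULL DEFAULT now()
//   );

async function creditAccount(accountId: string, amount: number): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // 1. Update the state.
    await client.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, accountId],
    );

    // 2. Write the event to the outbox table in the SAME transaction.
    await client.query(
      "INSERT INTO outbox (id, aggregate_id, type, payload) VALUES ($1, $2, $3, $4)",
      [randomUUID(), accountId, "AccountCredited", { accountId, amount }],
    );

    await client.query("COMMIT"); // both rows are persisted, or neither is
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```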

The problem is that you need to implement a Message Relay system that is reliable enough to publish your events consistently. Let's see how we solved this issue.

Message Relay implementation

Since event ordering is important for us, we chose Kafka as our event bus. One of the benefits of Kafka is the powerful Kafka Connect API and the many connectors that are available.

For our message relay we decided to use the Debezium connector for Postgres.
This connector uses the database's replication slots directly to read its change stream (Change Data Capture, CDC). This allows the connector to publish the events almost in real time and in the exact same order as they were inserted in the database.
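
For illustration, this is roughly how such a connector can be registered through the Kafka Connect REST API. The hostnames, credentials, table names and connector name below are made up, and exact option names vary between Debezium versions:

```typescript
// Registers a Debezium Postgres connector via the Kafka Connect REST API.
// All hostnames, credentials and table names here are illustrative only.
const connector = {
  name: "outbox-connector",
  config: {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",              // logical decoding plugin
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "replicator",
    "database.password": "secret",
    "database.dbname": "banking",
    "table.include.list": "public.outbox",  // only stream the outbox table
    "topic.prefix": "banking",              // Debezium 2.x (older versions use database.server.name)
  },
};

async function registerConnector(): Promise<void> {
  const res = await fetch("http://connect:8083/connectors", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(connector),
  });
  if (!res.ok) {
    throw new Error(`Failed to register connector: ${res.status} ${await res.text()}`);
  }
}
```

In practice you will probably also want Debezium's outbox event router SMT (io.debezium.transforms.outbox.EventRouter) to reshape the raw change events into clean domain events before they reach your topics.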


The advantages of using such a connector are:

  • Your service has no idea there is an event bus. It just writes events to a database table, so it is agnostic to your event-bus technology.

  • It's easy to test. You don't need to run another Docker container or start your event bus to test your implementation. You can just verify that your event is correctly inserted in the database (see the test sketch after this list).

  • The Debezium connector is robust. It saves you from implementing the complex Message Relay logic yourself, with all the challenges it brings:

    • How do I recover from failures?
    • How do I guarantee the ordering?
    • How do I poll data from the database?
    • etc...
  • Your service will keep working if the connector or the event bus is down. Your events will keep piling up in the outbox table until they are back online.

  • You get an event history for free in your Postgres database in case you need to look something up in the past.
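
To illustrate the testing point above, here is a minimal Jest sketch that only talks to Postgres. It reuses the hypothetical `creditAccount` function and outbox table from the earlier sketch:

```typescript
import { Pool } from "pg";
import { creditAccount } from "./credit-account"; // hypothetical module holding the earlier sketch

const pool = new Pool(); // connects to the test database via the PG* env vars

// No Kafka, no Debezium, no extra Docker containers: the test only asserts
// that the event row landed in the outbox table.
test("crediting an account writes an AccountCredited event to the outbox", async () => {
  await creditAccount("account-123", 100);

  const { rows } = await pool.query(
    "SELECT type, payload FROM outbox WHERE aggregate_id = $1 ORDER BY created_at DESC LIMIT 1",
    ["account-123"],
  );

  expect(rows[0].type).toBe("AccountCredited");
  expect(rows[0].payload).toEqual({ accountId: "account-123", amount: 100 });
});
```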

I hope you learned something reading this article. Don't hesitate to tell me in a comment if you already knew this pattern, and to share your feedback if you've used it in production.
