DEV Community

Discussion on: Understanding event-sourcing using the Booster Framework

Mario Castro Squella

Hi Benjamin! The framework core interacts with the different cloud providers via provider-specific packages. In AWS's case, there are two: the framework-provider-aws package and its companion infrastructure package.

The first one contains a series of adapters that implement a generic interface for the framework components (events, read models, and so on) on top of AWS resources (for example, DynamoDB for storing events).
The second package provisions and configures (using the AWS CDK) the cloud resources required by the framework-provider-aws package. For a more in-depth look, feel free to browse these packages on our GitHub.
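
To make the adapter idea concrete, here's a minimal sketch (the interface and names are illustrative, not Booster's actual API): the framework core programs against a generic contract, and each provider package supplies an implementation backed by its cloud's services. An in-memory store stands in here for the DynamoDB-backed adapter.

```typescript
// An event as the framework core might see it (simplified for illustration).
interface Event {
  entityId: string
  type: string
  payload: unknown
  createdAt: string
}

// Generic contract the framework core depends on; each provider
// package ships its own implementation of this interface.
interface EventStore {
  append(event: Event): Promise<void>
  eventsFor(entityId: string): Promise<Event[]>
}

// In the AWS provider package this role is played by a DynamoDB-backed
// adapter; an in-memory map keeps the sketch self-contained.
class InMemoryEventStore implements EventStore {
  private streams = new Map<string, Event[]>()

  async append(event: Event): Promise<void> {
    const stream = this.streams.get(event.entityId) ?? []
    stream.push(event)
    this.streams.set(event.entityId, stream)
  }

  async eventsFor(entityId: string): Promise<Event[]> {
    return this.streams.get(entityId) ?? []
  }
}
```

Because the core only ever talks to `EventStore`, swapping DynamoDB for another backend is a matter of writing a new adapter, not touching the framework.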

While it may seem like a black box at first, the idea is to provide a common interface that makes it quick to understand how each function is implemented in each cloud provider. Once you understand this common interface, you could implement your own provider tailored to your compliance needs.

Anyhow, thanks for pointing out that it's not clear from the documentation. We'll take that into account!

Javier Toledo

Yeah, the intention is far from turning Booster into a black box that hides what's inside. The idea is to offer a set of standards and abstractions that make development easier 99% of the time, but not at any cost!

One of the project's goals is that a team can either pick a pre-built infrastructure package or build their own, so they can work on infrastructure only once and focus on the business logic the rest of the time.

The default AWS implementation uses DynamoDB and solves many challenges out of the box, like scalability, message ordering, or eventual consistency. This is perfect for people learning about event sourcing, early-stage startups, or organizations already on AWS. Still, as Mario mentioned, it's straightforward to build your own implementation if you need to. If you want to work on existing infrastructure, you just need to implement the ProviderLibrary interface to tell the framework how to store an event, how to read it, and a few more basic operations. Optionally, you can also use an infrastructure-as-code solution to provision the environments with Booster too. The framework doesn't make any assumptions about what's behind it, so you can use any stack you want.
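
As a rough sketch of what that looks like (the interface below is a simplified stand-in, not Booster's real ProviderLibrary signature): a custom provider is essentially a small object supplying those basic operations, and the core logic is written only against the interface.

```typescript
// Simplified stand-in for the provider contract the framework needs.
interface StoredEvent {
  entityId: string
  type: string
  payload: Record<string, unknown>
}

interface ProviderLibrary {
  storeEvent(event: StoredEvent): Promise<void>
  readEvents(entityId: string): Promise<StoredEvent[]>
}

// Core logic written purely against the interface: rebuild an entity's
// state by folding over its stored events, whatever backend holds them.
async function rehydrate<S>(
  provider: ProviderLibrary,
  entityId: string,
  reducer: (state: S, event: StoredEvent) => S,
  initial: S
): Promise<S> {
  const events = await provider.readEvents(entityId)
  return events.reduce(reducer, initial)
}

// A toy provider backed by an array; a real one could target Postgres,
// Kafka, or whatever store your existing infrastructure dictates.
function arrayProvider(): ProviderLibrary {
  const log: StoredEvent[] = []
  return {
    async storeEvent(event) { log.push(event) },
    async readEvents(entityId) { return log.filter(e => e.entityId === entityId) },
  }
}
```

The point of the pattern is that `rehydrate` never knows which backend it's reading from; that's the provider package's concern.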

Indeed, there's a nice multi-cloud demo on our YouTube channel that leverages this architecture to deploy a single codebase to AWS, Azure, and Kubernetes just by providing a separate package for each: youtube.com/watch?v=MHw_.9tcqjz0

We should definitely work on that part of the documentation; we've been focusing on the user-level docs first, but it's becoming more and more important to talk about extensibility. Thanks a lot for pointing it out.