Best Practices for Event-Driven Microservice Architecture

Jason Skowronski on September 24, 2019

If you’re an enterprise architect, you’ve probably heard of and worked with a microservices architecture. And while you might have used REST as you...

Great article! We've stumbled upon all of this separately. Cool to see it all in one place. How do you handle "replays" or getting to current state? In a highly decoupled, event-based system, microservices don't really know who created the event. So, how would a "new" microservice get up to date with a stream that might be several months or years old? Seems impractical to read back all of the messages in an event stream.

Is a REST endpoint OK to use? But that would also introduce some coupling.



Great question! I think it depends a little bit on your use case. In my last company we had a limited data retention window, so replaying from the beginning of the window was slow but worked. For months or years of data, it sounds like you need a different approach. If the current state is smaller than the full history of state changes (e.g. an account balance, not an account history), then I have some suggestions. You could load the current state from another canonical source (e.g. a REST endpoint, a database, or a replica). Alternatively, you could periodically checkpoint or back up the state to a persistent store, and replay only the events since the last checkpoint to catch up. It's even possible to store checkpoints in another Kafka partition to avoid adding other service dependencies. Hopefully one of those ideas helps :)
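The checkpoint-and-replay idea above can be sketched in a few lines. This is a minimal in-memory illustration, not a real Kafka client: the plain list stands in for a partition, the consumer offset is just a list index, and all class and field names here are hypothetical.

```python
# Minimal sketch of checkpoint-plus-replay, with a plain list standing in
# for a Kafka partition. All names here are illustrative, not a real API.

class BalanceService:
    def __init__(self):
        self.balance = 0          # current state (small, unlike the full history)
        self.offset = 0           # index of the next event to consume

    def apply(self, event):
        self.balance += event["amount"]
        self.offset += 1

    def checkpoint(self):
        # Persist state and offset together (e.g. to a DB or another topic).
        return {"balance": self.balance, "offset": self.offset}

    @classmethod
    def restore(cls, snapshot, log):
        # Rebuild from the snapshot, then replay only the events after it.
        svc = cls()
        svc.balance = snapshot["balance"]
        svc.offset = snapshot["offset"]
        for event in log[svc.offset:]:
            svc.apply(event)
        return svc

log = [{"amount": 100}, {"amount": -30}, {"amount": 5}]
svc = BalanceService()
for e in log[:2]:
    svc.apply(e)
snap = svc.checkpoint()  # taken after the first two events

# A "new" service instance catches up without reading the whole stream:
fresh = BalanceService.restore(snap, log)
print(fresh.balance)  # 75
```

The key point is that the snapshot stores the offset alongside the state, so a restarted or brand-new consumer replays only the tail of the stream rather than months of history.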


Thanks! Seems like a combination of snapshots and/or REST endpoints might be the way to go. I was trying to avoid having one service talk directly to another, but "catching up" might be an exception =)


Too funny. Hadn't heard of Pulsar so when I looked it up and saw it was an Apache project I immediately started thinking, "please, nozookeeper nozookeeper nozookeeper nozookeeper, ..., OH G----MN F---KING ZOOKEEPER! Kill it with fire!"


"REST is much simpler to set up and deploy"

I don't know that I agree with this statement, at least not entirely. If you're setting up all the infrastructure, absolutely.

Pick tooling like serverless, and it's arguably simpler to set up an event-driven app. I can have the skeleton up and running on AWS or Google in very little time.
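The "skeleton" for an event-driven serverless app really is small. Below is a hedged sketch of a Lambda-style handler: the SNS-like event shape and the function name are illustrative assumptions, not any specific framework's API.

```python
# A minimal Lambda-style handler for an event-driven skeleton. The event
# shape (SNS-like Records wrapper) and names are illustrative assumptions.
import json

def handle_order_event(event, context=None):
    # A typical SNS delivery wraps each published message in a Records list.
    results = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        # Business logic would go here; we just acknowledge the order ID.
        results.append({"order_id": message["order_id"], "status": "processed"})
    return results

sample = {"Records": [{"Sns": {"Message": json.dumps({"order_id": 42})}}]}
print(handle_order_event(sample))  # [{'order_id': 42, 'status': 'processed'}]
```

With tooling like this, the infrastructure (topic, subscription, function wiring) is declared in a config file and deployed in one command, which is what makes the event-driven setup comparable to REST in effort.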


What's your view on the idea that microservices are only advantageous for applications with enough complexity?

For low-complexity apps, the idea is that it's better to start with a monolith (applying loose-coupling principles) and only migrate to microservices if it's really needed in the future.


Yes, I definitely agree with this. It's best to avoid premature optimization and unnecessary abstractions. A low-complexity app is the ideal design if it fits your business requirements. A microservice architecture adds complexity, and network calls add latency compared to in-process method calls, so you need a benefit that offsets that cost.


Hi Jason,

Thanks for the article. Where do Amazon SNS and Google Pub/Sub fit in the picture? When would you recommend these services over Apache Kafka?


Those are both great for managed pub/sub messaging. However, they don't guarantee ordering. If order matters to you, such as for time-ordered logs or metrics, then you might want to use Apache Kafka on Heroku or Amazon Kinesis instead. Also consider the delivery guarantees: SNS retries delivery a limited number of times. For more control over message expiration, or to reprocess data, something like Kafka will be better.
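The ordering difference comes down to partitioning: Kafka routes all events with the same key to the same partition, and each partition is consumed in order. Here is a toy sketch of that idea; the CRC32 partitioner is only an illustration (real Kafka clients default to a murmur2 hash), and the event data is made up.

```python
# Sketch of why Kafka preserves per-key ordering: all events with the same
# key hash to the same partition, and each partition is read in order.
# CRC32 is a stand-in here; Kafka's default partitioner uses murmur2.
from collections import defaultdict
import zlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    # Deterministic hash, so a given key always lands on the same partition.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

partitions = defaultdict(list)
events = [("user-1", "login"), ("user-2", "login"),
          ("user-1", "purchase"), ("user-1", "logout")]

for key, value in events:
    partitions[partition_for(key)].append((key, value))

# Within user-1's partition, its events appear in the order produced:
p = partitions[partition_for("user-1")]
print([v for k, v in p if k == "user-1"])  # ['login', 'purchase', 'logout']
```

A fan-out service like SNS makes no such per-key promise, which is why order-sensitive streams (logs, metrics, ledgers) tend to land on Kafka or Kinesis.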


This is how you make microservices. Anyone building SOA should use this. We have so many dependent microservices at work that it's a distributed monolith, painful to work with.


There's an easy way to make your stuff reactive using Kalium.
