
Best practices while breaking down a monolith into microservices

Arpit Mohan · Originally published at insnippets.com · 2 min read

TL;DR notes from articles I read today.

Breaking down a monolith into microservices - an integration journey

  • Before transitioning, identify the biggest pain points and service boundaries in the monolithic codebase and decouple them into separate services. Focus less on the size of the code chunks and more on ensuring each service fully owns its business logic within its boundary.
  • Split developers into two teams: one that continues to work on the old monolith, which is still running and even growing, and another that works on the new codebase.
  • Avoid too much decoupling as a first step; you can always break services down further later. Enable logging across the board for observation and monitoring.
  • Enforce security between microservices with mutual TLS, which restricts access by unauthorized clients even inside the architecture, and with an OAuth2-based security service.
  • For external clients, use an API gateway for authentication and authorization, and firewalls and/or tokens based on the type of client.
  • Secure any middleware you use, as most ships without credentials or with well-known defaults. Automate security testing in your microservices deployment procedure.
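The gateway-side authentication mentioned above can be sketched as a simple bearer-token check. This is a minimal illustration, not a real gateway: the `authorize` helper and the token table are hypothetical, and a production setup would delegate validation to an OAuth2/OIDC provider rather than keep a static map.

```python
# Hypothetical token table; in practice tokens come from an identity provider.
VALID_TOKENS = {"token-abc": "mobile-app", "token-xyz": "partner-service"}

def authorize(headers, valid_tokens=VALID_TOKENS):
    """Return the client identity for a valid bearer token, else None."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return None
    # Look up the token; unknown tokens yield None (request rejected).
    return valid_tokens.get(auth[len("Bearer "):])
```

The gateway would call something like this before routing a request, so individual microservices never see unauthenticated traffic from external clients.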

Full post here, 5 mins read


Handling distributed transactions in the microservices world

  • In the microservices context, a distributed transaction is one that spans multiple services, called in sequence, to complete a single transaction.
  • The ACID properties (atomicity, consistency, isolation, durability) are hard to guarantee for distributed transactions across microservices: atomicity requires the transaction to complete or fail in its entirety, so if a microservice late in the sequence fails, you must be able to roll back the whole sequence. Also, under concurrent requests, an object from one microservice may be persisted to the DB even as a second microservice reads that same object, which challenges both consistency and isolation.
  • One solution is a two-phase commit. This method splits transactions into a prepare and a commit phase, with a transaction coordinator to maintain the lifecycle of the transaction: first, all microservices involved will prepare for a commit and notify the coordinator when ready. Then the coordinator issues either a commit or rollback command to all the microservices.
  • It guarantees atomicity, allows read/write isolation (objects are not changed until the commit), and notifies the client of success or failure with a synchronous call. However, it is slow, and it locks database rows, which can become a bottleneck and can let two transactions deadlock.
  • Another solution is to use asynchronous local transactions for related microservices, which communicate through an event bus, guided by a separate choreographer service that listens for successes and failures on the bus and triggers a rollback up the sequence with ‘compensating transactions’. Each microservice is atomic for its own transaction, so the operation is faster, no database locks are needed and the system is highly scalable. However, it lacks read isolation, and with many microservices it is harder to debug and maintain.
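The two-phase commit described above can be sketched in a few lines. The `Participant` interface here is hypothetical (a stand-in for a microservice plus its transaction coordinator protocol); a real coordinator would also handle timeouts, retries and crash recovery.

```python
class Participant:
    """One microservice taking part in the distributed transaction."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):
        # Phase 1: lock resources and vote commit or abort.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

def two_phase_commit(participants):
    # Phase 1: ask every participant to prepare (stops at the first "no" vote).
    if all(p.prepare() for p in participants):
        # Phase 2: everyone voted yes, so the coordinator commits all.
        for p in participants:
            p.commit()
        return True
    # At least one participant voted no: roll everyone back.
    for p in participants:
        p.rollback()
    return False
```

The synchronous loop over participants is what makes the method slow, and the resources locked during `prepare` are what can bottleneck or deadlock the system.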
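The compensating-transaction approach can likewise be sketched as a sequence of steps, each pairing a local transaction with its undo action. This is a simplification, assuming synchronous calls instead of an event bus; the step and compensation names are illustrative.

```python
def run_saga(steps):
    """steps: list of (action, compensation) callables.

    Runs each action in order; on failure, runs the compensations
    of the completed steps in reverse order and reports failure.
    """
    done = []
    for action, compensate in steps:
        try:
            action()
            done.append(compensate)
        except Exception:
            # Chase a rollback back up the sequence with compensating transactions.
            for undo in reversed(done):
                undo()
            return False
    return True
```

Each action is a local, atomic transaction in its own service, which is why no cross-service database locks are needed, but a reader can observe the intermediate state before a compensation runs, which is the missing read isolation.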

Full post here, 7 mins read


Get these notes directly in your inbox every weekday by signing up for my newsletter, in.snippets().
