In the last decade, microservices have become the poster child for scalable software architecture. Companies like Netflix and Amazon champion the model, attributing their ability to scale to microservices. This rise in use and reported benefits has led to greater adoption across the industry. But as with every shiny new toy, there's a dark side to watch out for, particularly when it comes to how microservices communicate.
In this post, we’ll explore why your microservices traffic should resemble an “I” and not an “L” (in most cases, of course). This might sound like an odd analogy, but it’s one that can save you from an architecture that spirals into chaos.
The Problem with L-Shaped Traffic
Microservices rely heavily on network communication to function. When one service depends on another, requests are sent across the network. This dependency tree grows as your system scales. In an ideal world, this traffic pattern would be predictable, resembling an “I”—a single line from the client to the relevant service, with minimal dependencies on other microservices.
But in many cases, we see an “L” shape emerge:
- A client request hits one service.
- That service needs to talk to another, which talks to yet another, and so on.
In the above image, it's easy to spot, but in real-world examples like the one below, the pattern is much harder to see.
As you can see, a single request fans out into multiple downstream calls. This might not sound bad in theory, but in practice, it can result in a nightmare scenario:
- Latency multiplies: Each additional network hop introduces latency, slowing down your system.
- Failure cascades: A single service failure can ripple across the architecture, causing widespread outages.
- Observability issues: Tracing what went wrong forces you to adopt distributed tracing and logging tools such as Jaeger or New Relic, increasing costs.
This is the dreaded L-shaped traffic. It’s complex, fragile, and a performance killer.
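The compounding effect is easy to quantify. Here is a minimal back-of-the-envelope sketch; the 20 ms per-hop latency and 99.9% per-service availability figures are hypothetical, not measurements:

```python
# Illustrative math: how hops compound latency and failure risk.

def chain_latency_ms(per_hop_ms: float, hops: int) -> float:
    """Total added latency when each hop must complete in sequence."""
    return per_hop_ms * hops

def chain_availability(per_service_availability: float, hops: int) -> float:
    """A request succeeds only if every service in the chain succeeds."""
    return per_service_availability ** hops

# One hop at 20 ms vs a five-deep L-shaped chain:
print(chain_latency_ms(20, 1))   # 20 ms
print(chain_latency_ms(20, 5))   # 100 ms

# Five services at 99.9% each drops to roughly 99.5% end to end:
print(round(chain_availability(0.999, 5), 4))
```

Five nines of work at each service still leaves the chain as reliable as its weakest path, and every extra hop eats into your latency budget.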
The Hidden Cost of Over-Communication
Let’s dig deeper into why L-shaped traffic is particularly harmful. Each microservice call consumes network bandwidth, requires serialization/deserialization, and introduces potential for errors. This creates a compounding effect where:
- Performance deteriorates: Even a system designed for high throughput struggles under the weight of excessive inter-service communication.
- Costs balloon: Your cloud provider charges you for every byte sent and received. With L-shaped traffic, your operational expenses can skyrocket.
- Developer experience suffers: Debugging a chain of service calls with dozens of logs, traces, and metrics can overwhelm your team.
In many ways, you're defeating the purpose of microservices, and you would be better off with a monolith. You can, of course, tackle some of these issues by building retry logic into your HTTP, gRPC, or other network requests, but this further erodes the usability of the system. To avoid flooding the network, you then have to implement exponential backoff; to keep processes from getting stuck on long backoffs, you need to cancel the request at some point; and so on, ad nauseam. By the end of it, you've delivered a mess of code that adds no direct business value. It doesn't even make the system more robust, since any change to an endpoint's contract will still break all of its consumers.
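To make the "so on and so on" concrete, here is a minimal sketch of the retry-with-backoff-and-deadline machinery described above. It wraps any callable rather than a specific HTTP client, and the parameter values are illustrative defaults, not recommendations:

```python
import random
import time

def call_with_retries(request, max_attempts=4, base_delay=0.1, deadline=5.0):
    """Retry a flaky call with exponential backoff and jitter,
    giving up once attempts or an overall deadline run out."""
    start = time.monotonic()
    for attempt in range(max_attempts):
        try:
            return request()
        except Exception:
            elapsed = time.monotonic() - start
            if attempt == max_attempts - 1 or elapsed >= deadline:
                raise  # out of attempts or out of time: cancel
            # Exponential backoff, capped by the remaining deadline,
            # with jitter to avoid thundering-herd retries.
            delay = min(base_delay * (2 ** attempt), deadline - elapsed)
            time.sleep(delay * random.uniform(0.5, 1.0))
```

Note how much ceremony this is for code that, as argued above, adds no direct business value: it only papers over the fragility that the extra network hop introduced in the first place.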
How L-shaping Sneaks up on Your System
As much as every engineer hates to admit it, there is only one thing that keeps us going into the office and working for any tech company right now: making money. This drive creates sub-incentives within a company that seriously harm its ability to architect a system correctly. From aggressive time-to-market targets to product people making timeline promises that aren't feasible, the pressure is always for more functionality, faster.
In reality this results in developers taking shortcuts.
"Oh," they may say, "I have to deliver this functionality soon, but the service I work in doesn't have the data I need. I see in the Swagger doc for service B that there's an endpoint that exposes exactly what I need. Hot dog! I'll have it done in an hour!"
There's so much wrong with this. To name a few issues:
- Implementing against the API of a service owned by another team without telling them means that both teams will have to be on the 3am call to discuss the break in production whenever team B changes the contract or deprecates the API.
- A (possibly critical) flow now contains a sub-dependency that can be a single point of failure.
- No real design effort went into this solution; it was simply the quickest option.
Even worse, you have entire frameworks like LoopBack built around this pattern (and I'm still convinced they exist only to increase pricing on IBM's load balancers).
Why I-Shaped Traffic is the Solution
In contrast, I-shaped traffic emphasizes simplicity:
- Direct, minimal communication paths: Services are designed to fulfill requests without relying on multiple downstream services.
- Focused responsibilities: Each service handles its domain independently, limiting the need for coordination.
- Improved reliability: By reducing the number of network hops, you reduce the probability of failure.
I-shaped traffic is lean and predictable. It’s the kind of architecture that keeps your system running smoothly even as you scale.
A good I-shaped architecture based on the complex L-shape example from before might look something like the following:
In the above example, the conscious choice was made not to send a response to the user right away during the purchase flow. Instead, they are notified later via email.
In cases like these, the trade-off can be good for both the user experience and the developer experience.
The Business Case for I-Shaped Traffic
Beyond technical benefits, I-shaped traffic has real business implications:
- Better user experience: Faster response times lead to happier users.
- Lower operational costs: You save on cloud bills by minimizing unnecessary data transfer and resource usage.
- Reduced risk: A simpler system is easier to maintain and less prone to catastrophic failures.
How to Achieve I-Shaped Traffic
At the core of the problem is a simple question that can be asked to determine how data should be shared among services: "Does the data this service is working with need to be up to date? Or can it be a few minutes or hours stale?"
That's it. Determining the trade-off between strongly-consistent and eventually-consistent data retrieval or operation methods helps clarify whether inter-service HTTP traffic is really needed. It's not a silver bullet, but it's a good start, and one I see many engineers completely ignore when designing service interactions.
Below are some steps you can take while determining how up-to-date your data should be:
Flatten Your Dependencies: Avoid deep chains of service calls. If Service A consistently requires data from Services B and C, ask whether A should own or cache that data directly. This reduces the need for repeated HTTP calls, improving performance and ensuring that A can operate independently. For strongly-consistent data, direct ownership may be necessary, whereas eventually-consistent data can rely on periodic synchronization or caching.
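The caching half of this step can be sketched in a few lines. This is a minimal in-process TTL cache, assuming a hypothetical `fetch` callable that performs the real lookup against the other service; production systems would more likely use Redis or a local sync job:

```python
import time

class TtlCache:
    """Tiny TTL cache so Service A can serve another service's data
    without a network hop on every request (eventual consistency)."""

    def __init__(self, ttl_seconds: float, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch      # callable that does the real remote lookup
        self._store = {}        # key -> (expires_at, value)

    def get(self, key):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]     # still fresh: no remote call needed
        value = self.fetch(key) # missing or stale: refresh once
        self._store[key] = (now + self.ttl, value)
        return value
```

The TTL is exactly the "how stale can this be?" question from earlier, expressed as a number.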
Adopt Event-Driven Architectures: Synchronous HTTP calls often impose strong consistency guarantees, which can lead to L-shaped traffic patterns. Instead, consider asynchronous communication through queues or pub/sub systems. This approach aligns well with eventually-consistent operations, allowing services to react to events without blocking or waiting for immediate responses. Strongly-consistent actions should remain synchronous but limited to where they are absolutely necessary.
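As a sketch of the idea, the following uses an in-process `queue.Queue` as a stand-in for a real broker (RabbitMQ, Kafka, and so on), with hypothetical `place_order` and `email_worker` services. The point is the shape: the producer responds immediately and never calls the consumer directly:

```python
import queue

# In-process stand-in for a message broker.
events = queue.Queue()

def place_order(order_id: str) -> str:
    """The order service publishes an event instead of calling
    the email service over HTTP, then responds right away."""
    events.put({"type": "order_placed", "order_id": order_id})
    return "accepted"

def email_worker(handled: list) -> None:
    """The email service drains events at its own pace,
    completely decoupled from the request path."""
    while not events.empty():
        event = events.get()
        if event["type"] == "order_placed":
            handled.append(f"email for {event['order_id']}")
```

If the email service is down, orders still succeed; the events simply wait in the queue, which is precisely the failure isolation L-shaped chains lack.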
Choose the Right Read Strategy: Build specialized read models that aggregate and cache data across services. This approach minimizes the need for real-time dependency chains. If your system can tolerate eventual consistency, read models can be asynchronously updated via queues. For scenarios requiring strong consistency, ensure the read model is tightly coupled to the source of truth but designed to minimize cascading calls.
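A read model can be as simple as a denormalized dictionary kept current by event handlers. The event shapes and service names below are hypothetical, but the structure shows why reads stop fanning out:

```python
# Denormalized read model, updated by events from other services,
# so reads never trigger synchronous calls to the user or order service.
read_model = {}  # user_id -> {"name": ..., "order_count": ...}

def on_user_created(event: dict) -> None:
    read_model[event["user_id"]] = {"name": event["name"], "order_count": 0}

def on_order_placed(event: dict) -> None:
    read_model[event["user_id"]]["order_count"] += 1

def get_user_summary(user_id: str):
    """One local lookup instead of a chain of service calls."""
    return read_model.get(user_id)
```

The model is eventually consistent by construction: it is only as fresh as the last event processed, which is exactly the trade-off you decided on in the question above.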
Evaluate Communication Patterns: Not all data retrieval justifies an HTTP call. Use HTTP for operations where strong consistency is critical and the cost of failure is high (e.g., payment transactions). For less critical operations or where eventual consistency suffices (e.g., analytics, reporting), favor queues or caching mechanisms to reduce synchronous dependencies.
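The decision rule in this step can be written down as a toy helper. This is my own condensation of the guidance above, not an established framework:

```python
def choose_transport(needs_strong_consistency: bool,
                     failure_cost_high: bool) -> str:
    """Toy decision rule: reserve synchronous calls for operations
    that are both strongly consistent and expensive to get wrong;
    everything else goes async or through a cached read model."""
    if needs_strong_consistency and failure_cost_high:
        return "synchronous HTTP/gRPC"
    return "queue or cached read model"

# Payment capture: synchronous. Analytics event: asynchronous.
print(choose_transport(True, True))
print(choose_transport(False, False))
```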
It may be obvious, but these improvements will take time; they won't happen overnight. That raises the question:
Are Microservices Bad?
It's a serious question to ask, especially given the steady stream of stories about companies moving back to monolithic architectures.
The short answer is no. But like any tool, they need to be used correctly. Microservices excel when services are loosely coupled and highly cohesive. Problems arise when teams overlook the communication overhead and design sprawling dependency graphs.
If you’re struggling with an architecture that feels more like a liability than an asset, it’s time to rethink your microservices traffic. Aim for I-shaped patterns, and you’ll avoid the pitfalls that plague so many teams.
Conclusion
Microservices are here to stay, but their success depends on how you structure their communication. By adopting an I-shaped traffic pattern, you can ensure your architecture remains scalable, reliable, and cost-effective.
Don't let your microservices turn into a web of tangled dependencies. Focus on simplicity, and your body will thank you with each full night's sleep you get.
Thanks for reading!
I'm Joel, CEO of NanoAPI and I've been fighting the good fight with and against microservices for almost 10 years now. If you found this post helpful, or you'd like to give back, please star our repo on GitHub ⭐. It really helps out.
We're currently building napi for companies who are looking to move their monolithic APIs into microservices. It addresses many of the pitfalls associated with the transition to microservices and helps developers and architects understand exactly what their system is doing today. With this tool, we're aiming to create "microlithic development", a new approach where a team writes a monolith and breaks it up into microservices at deploy time.
If you think this could be for you, or you'd like to become a design partner, send me an email.