Zach A. Thomas

Why Microservices Are Not Just Another Fad

I don't like chasing technology fads. It's important to maintain a healthy level of skepticism so you don't waste your time or, worse, end up making expensive mistakes.

If your skepticism is too well-developed, however, you might overlook some solutions to your thorny problems.

The debate about microservices is important insofar as they are not a silver bullet: you can wind up in trouble if you think you can just "sprinkle some microservices on it" and then lean back and watch the magic happen. See Martin Fowler's post about the kinds of processes you should have in place before it makes sense to introduce microservices to your architecture.

But sometimes our debates are unproductive, because we get hung up on less relevant details, like how many lines of code make a service "micro," or whether serverless makes microservices obsolete. For me, that's like arguing over what color the paint job should be when you really should be deciding whether you're getting a car, a truck, or a city bus.

The real power unlocked by microservices is not about the particulars of the technology choices you make. What it's really about is enabling a workflow whereby a small team can deliver at whatever cadence suits them best, with no handoffs required to any other group. I call this partitioning the value streams. In waterfall workflows, we throw things over the wall and wait; we might get some feedback days later and then start the cycle again. One of the great paradoxes of software development is that we can deliver faster and reduce risk at the same time: smaller, independent releases are easier to test, ship, and roll back, so shipping more often actually shrinks the blast radius of each change.

Imagine a company that has grown quickly but is still not large. They have a monolithic application that is released once a week and feature-frozen a week before each release. They're up to about two dozen full-time software engineers, so they're starting to feel the coordination overhead of working on a single large component. Here's a partial list of problems routinely faced by this group that partitioning the value streams also solves:

  • someone writes an inefficient database query, and the whole business goes down
  • the CI/CD build takes ten minutes to run, and if any change is made to main while that happens, it has to start over
  • people race to get their changes in before the feature freeze. If you make the cutoff, your changes will go live in a week. If you miss the cutoff, it takes two weeks to ship
  • a bug in a minor feature can take the whole business down, so a change to anything necessitates performing hours of regression testing on everything
  • some engineers want to be consulted in case something they wrote is going to be changed
  • the monolith can scale horizontally, but it is not possible to scale high-demand APIs independently of low-demand APIs (see the sketch after this list)
  • innovating is so risky that experiments are discouraged and the default answer is "no"
  • people responding to incidents are unlikely to have worked on the part of the application that is misbehaving
  • availability of the system is only as good as the least available subsystem
  • the dependency graph of the code is unreadable
  • the codebase suffers from high coupling and low cohesion
  • since deploys are not from trunk, it's necessary to forward-port patches from the release candidate to main
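
To make the independent-scaling point concrete, here is a minimal sketch, not from the original post, of what partitioning looks like at the process level. The service names ("search" for a high-demand API, "reports" for a low-demand one) are hypothetical. In a monolith both routes live in one process, so you can only replicate everything together; here each service is its own deployable and can be replicated on its own.

```go
// Hypothetical sketch: two independently deployable services, selected by an
// environment variable for brevity. In practice each would be its own repo,
// pipeline, and container image; the point is that the high-demand service
// can run many replicas while the low-demand one runs just one or two.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	switch os.Getenv("SERVICE") {
	case "search": // high-demand API: scale this one horizontally on its own
		http.HandleFunc("/search", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "search results")
		})
		log.Fatal(http.ListenAndServe(":8080", nil))
	case "reports": // low-demand API: a couple of replicas is plenty
		http.HandleFunc("/reports", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "monthly report")
		})
		log.Fatal(http.ListenAndServe(":8081", nil))
	default:
		log.Fatal(`set SERVICE to "search" or "reports"`)
	}
}
```

Run it as `SERVICE=search go run main.go` (or `SERVICE=reports`). Because each variant exposes only its own routes, an orchestrator can give the search service ten instances and the reports service one, without redeploying or retesting the other.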

As long as there are organizations addressing these problems (and others) with microservices, the argument that they're just another technology fad does not hold any water.
