Bruno Noriller

Posted on • Originally published at linkedin.com

Why monoliths are a long-term bad idea

TLDR: It’s not about the technical side, it’s about people!


Monoliths can be as good as anything... on the technical side, but what many people fail to consider is the Dev Experience.

People will be working with them, not only now, but in the future.

Even if you start small today, in a few years dozens or more people may have to work with it... and, well... they won’t like using it.

If you’re a developer... you’re probably nodding right now.


The causes:

1 - Monoliths get too big

What would you say if I showed you a 1500-line file?

You would probably be like: “What the hell?”

Even with proper organization... dozens or hundreds of classes and functions are overwhelming.

Then why is a monolith with dozens or hundreds of files “not a problem”? Even with proper organization, its scope is too big to grasp.

And when you follow the usual patterns frameworks tell you to follow (this applies to both front end and back end), you probably end up with a mob of files spread across dozens of folders and subfolders, all related to the one thing you need to touch.


2 - Monoliths get old, fast...

Things get old fast, and in tech, even faster.

Something from 5 years ago is at least outdated, and something from 10 years ago is already ancient.

You might say something like “Java is Beatles”, but even then it’s not like you’re using Java 7 without frameworks. And if you are... then keep reading.

Take Node, for example: you can go vanilla (and even then you pick a “flavor”: Express, Koa, Fastify...), then you had Adonis 4, Adonis 5, Nest, Apollo, or maybe you forgo a proper back end and use Next as a BFF (or lambdas, or Firebase/Supabase).
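
Even the “same” choice splinters into flavors that don’t read alike. Here’s a minimal sketch (a hypothetical /health route, assuming express and fastify are installed) of the same trivial endpoint in two of those flavors:

```typescript
// The same trivial endpoint in two Node "flavors".
import express from "express";
import Fastify from "fastify";

// Express flavor
const app = express();
app.get("/health", (_req, res) => {
  res.json({ ok: true });
});
app.listen(3001);

// Fastify flavor
const fastify = Fastify();
fastify.get("/health", async () => ({ ok: true }));
fastify.listen({ port: 3002 });
```

Neither is wrong; the point is that each of these choices ages on its own schedule, and the monolith ages with it.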

The possibilities are endless, and while some would be harder than others to build a monolith with... rest assured, someone is doing it.


3 - Monoliths are slow to change

Start with the fact that some are not even possible to change...

Sometimes a framework changes so much from one version to the next that it’s impossible to even think about migration...

Angular 1 to Angular 2+, Adonis v4 to Adonis v5... and what happens? Either you’re still using (and even upgrading, and basing your business on) a framework that has lost all official support (yes, some companies are still too invested in AngularJS), or you probably had to rewrite the whole thing. Maybe you’re still migrating; maybe you haven’t even finished that and are already thinking of rewriting again.


The developers:

Developers, well... they want to play with the shiny new toys more than with “old stuff”.

We barely tolerate having to use code we wrote a few months ago; imagine having to use code other people wrote who knows when.

Why are we still using [insert perfectly good framework] when we could be using the new version, or a new framework?

And don’t get developers started on having to choose a language...
(BTW, unless you’re talking about a core business that requires something really specific, or user numbers in the six figures... you can probably use the equivalent of WD-40 and duct tape and it will work just fine...)

Finally, developers are a lazy bunch. It’s easier to just write the damn thing again than to try understanding and messing with whatever you may or may not have.


This leads to unhappy developers.

And unhappy developers leave, taking a lot of knowledge with them in hopes of greener fields.

Or maybe they will push for a great rewrite... and while rewriting everything in Rust can be a stupid idea that will take too long, maybe moving a few core things from the old Ruby on Rails to Rust might be just what you need.


But what is the alternative?

Just as you would split the 1500-line file into multiple files (and probably multiple folders), thinking small is probably the better idea.

Finding people to handle the Ruby on Rails monolith might be hard (and expensive), but if the ROR part were just a microservice (emphasis on micro) or something inside a monorepo, then to fix it... even someone like me, who has never touched Ruby, might be able to do something about it. And if worst comes to worst and it starts giving too much trouble, a rewrite will be faster and easier.
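
To make that concrete, here’s a rough sketch (hypothetical endpoint, URL and types, assuming the old Rails code is wrapped as a small HTTP service) of what the rest of the system would ever see: a contract, not Ruby.

```typescript
// Hypothetical consumer of a small legacy Rails service.
// Whoever maintains (or rewrites) the Ruby side only has to keep this
// HTTP contract intact; callers never have to touch Ruby itself.
type Order = { id: string; total: number };

const LEGACY_ORDERS_URL =
  process.env.LEGACY_ORDERS_URL ?? "http://legacy-orders.internal";

export async function fetchOrder(id: string): Promise<Order> {
  const res = await fetch(`${LEGACY_ORDERS_URL}/orders/${id}`);
  if (!res.ok) {
    throw new Error(`legacy orders service returned ${res.status}`);
  }
  return (await res.json()) as Order;
}
```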

And what about Svelte or Remix? Well... for the front end, you have micro front ends... so the old stuff can stay there while the new stuff can be made with whatever new fad there is.


As long as you keep the stuff small...

Developers having to maintain stuff will know that the scope will always be small; some will even look forward to the (possibly) multiple different languages/frameworks/paradigms being used.

Others will look forward to the next “new thing”, since when they finish the current project being made in Next, the next one will be made in Remix.


The obvious problems

On the technical side? Yes, many obvious ones.

And I’d say it’s a 50/50 chance that DevOps engineers are either thinking about hunting me down, or about how fun it would be to automate something like that...


But as a developer, if I enter a new team and see a fraction of this freedom... well... what do you think?


Cover Photo by @charlybron on Unsplash

Top comments (36)

Shai Almog

I have a different take.

A monolith shouldn't be confused with non-modular code; modular vs. non-modular is a separate question. Even with a single large project, everything can be divided properly and easily. We can onboard a developer easily, and that new person just needs to know their specific area.

OTOH I see so many companies who picked up microservices for the reasons you outlined, struggle. At first it's fun. But things get messy quickly. You need to move a developer between teams and it's tough. Getting a holistic view of the system is almost impossible. You can't move an inch without a dedicated DevOps.

Reproducing an issue with microservices becomes a nightmare. For me and my employer (Lightrun) this is good, since our product is perfect for debugging these sorts of failures. But as an engineer I feel this is a bad choice.

Don't get me wrong. I like Microservices. When they are the right fit.

But as Martin Fowler said you should go Monolith first. I can't think of a single case where this doesn't make sense.

Finally, about Java. I think your view of Java may be out of date. Java 17 is already pretty powerful as a language/API and getting more so. The ability to instrument, the IDEs, and the entire ecosystem are at least a decade ahead of the closest rivals. I've done a lot of work in other ecosystems recently and I have good things to say about them (especially in the getting-up-and-running phase, where Java does suck), but when it comes to real-world high scale... there's literally no alternative.

The advantage of having a single team aligning around a single language/platform (regardless of the language) is huge. We can move developers around instantly, do full-stack PRs that are aimed at vertical features, review faster and scale easily. Java isn't Beatles, it's a Mack truck. Heavy, destructive, and you need some skills to drive it. But it gets the job done if you have a good developer at the wheel.

Templar++

“I like Microservices.”

I don't! I wholeheartedly hate them. Yes, I use them a lot when I have to, and over the years I had to use them for different reasons, but willingly going for a microservices architecture because of the reasons outlined in the article above is just outright stupid. I can only assume the person writing the article never had to deal with microservices in the long run. Only a junior developer can get excited about having to deal with small chunks of a codebase, and this is understandable. Dealing with 1 small chunk, however, is one thing. Dealing with 10 different chunks, each running a different version, is another.

Bruno Noriller

You might have misread the Java part... I know Java is a powerful, comprehensive language, especially the newer versions... but my example was "Java 7 without frameworks".
But I totally agree with Martin Fowler on the technical side; what he says makes sense, and in a rational world it would be the way to go.
But we're dealing with programmers here. Ten smaller "mountains" would seem much easier to conquer than one single big one, even if those 10 would amount to X times the big one.
I believe that, more and more, we should start considering the human part of the equation, not just the technical side of it.

Shai Almog • Edited

I think you need to revisit Java with someone who really knows what they're doing... Unfortunately, there's a lot of outdated nonsense out there that gives the wrong impression about it. You might have gotten a wrong impression early on.

The human element is actually my concern. It isn't hard to write code or get developers to write code. Maintenance is 95-98% of what I do for a project. I don't think I'm an outlier in that regard. In our industry people change jobs faster than they change socks. We need to write code that won't go down the garbage drain because the person who wrote it got a better offer.

At Lightrun they actually take it to the extreme and force people to work on each other's code. So a person gets an issue assigned that's in an area of the code they're unfamiliar with. This prevents code rot and forces people to work as a team. It's terrible in terms of merges and code reviews, but on the plus side: the code is good. I think that's a bit extreme, but I'm familiar with a company that did microservices "free style". Their product never launched.

This is obviously anecdotal but I think it also makes more sense.

Bruno Noriller

I really don't have a problem with Java.
And you're right, we mostly read and maintain code rather than write it.

But today, people getting into the industry are learning the newer versions and frameworks, as are current developers, so as not to get "outdated".

And then you get into a new team and see it using "Java 7" (just an example). As well maintained as it is, it's not what you've been learning; you're unfamiliar with it, and you probably have to do by hand stuff that is already core in newer versions.
Not only that: as time passes, upgrading the whole thing can be difficult, maybe even impossible. And you stick with it while newer versions move on to Java 18, Java 19, Java 20...

Maintaining a small part made in Java 7 would be a lot easier if the new stuff were being made in Java 17, even with pieces here and there in Java 8, Java 11, Java 15, each with their own frameworks.
If it was built in a SOLID way, it's going to persist longer and be easier to maintain. The developers might see it as a chore to change things in something so "outdated", but it would be a "break" from the normal flow, not the norm.
Meanwhile, the new stuff is being made with the shiny new toys everyone is already learning and enjoying, bringing new tools that might even justify rewriting an old service where it can make a real impact.

Shai Almog

I updated COBOL systems and even wrote some COBOL myself on a PDP in my youth ;-)

I agree, systems should be more "fresh". But newer isn't always better, and there's the reality of stability. Imagine the millions of small microservices with outdated languages and toolchains the world forgot. 1-year-old NodeJS projects are already old news and "horrible"; can you imagine a 10-year-old NodeJS project?

Java 7 is just over 10 years old by now.

Shai Almog

I'm not a fan of async/await, so I'm good with that. Project Loom is coming in JDK 18 as a preview and will solve the "problem" of threading in Java. I put "problem" in quotes because the approach Java took has its advantages in many use cases.

Resource use can be reduced with GraalVM, which removes the overhead of the JIT, and Valhalla, which removes the overhead of primitive objects. Loom also removes the overhead of threads. None of those are critical though, as you can still write very efficient Java code today. These are mostly benchmark optimizations. What matters is scalability, and for scale and distributed computing Java is at a different level.

IMHO Java did the smart thing by not chasing language features introduced by other languages too far. A lot of those languages are a mess where compatibility is fragile at the source and binary level. Java has kept the language simple and stable. As a result it's MUCH easier to build on top of it and expect long term stability.

In my current job I need to maintain agents/tooling for Java, Node and Python. All of them are fine platforms. But the breadth and maturity of Java are at a completely different level. The docs, the instrumentation and debugging tools, etc.

I worked with C# quite a bit, but that was years ago, so I have no idea what the status is there. I always felt C# tried to take everything Java did and add on top of that. To me personally, the greatest feature of Java is the stuff it didn't add, so C# was never a favorite.

Ben Sinclair

“What would you say if I showed you a 1500-line file?”

Those are rookie numbers

Andrey Yurchenkov

"Guns don't kill people"
Monolith or microservices is more of an architectural question. If you can't plan your monolith, why do you think you'll do better with microservices?

Liviu Lupei

Imagine a world where companies are led by developers.
They would never ship anything, they'd just play with new frameworks, overuse Docker and argue about what programming language is the fastest.

(just a joke)

Bruno Noriller

To be fair... that's how it usually starts.

Let's go with front end language/framework and back end language/framework because that's what I know.

The problem I see is that it stays like that... because it's how it's always been done.

JoelBonetR 🥇

It's about the inner architecture of the software itself. Monolith doesn't imply all those things; it's just a matter of moving the complexity to wherever best fits your needs.

A big monolith will probably put the complexity on the devs' side, with modular pieces built in, ready to split into services if there's a real need. Microservices move this complexity to the infrastructure, so we could make the same argument and say that microservices make the infrastructure folks unhappy (weird, huh?). On the client's side, it's either paying more devs or paying more to cloud providers, not much difference at all.

I've also seen "micro"services with 1500 lines of code multiple times, so no free pass there.
Overall, you need to know how to build the software properly, and the reasons behind choosing microservices or a monolith can vary a lot.

Think of StackOverflow: it's a very big monolith with lightning-fast CI/CD 🤔

Bruno Noriller

You're totally right, but then again, it's all about the humans.

I do believe, though, that it's not just monolith or microservices; something like a monorepo can have the advantage of more easily sharing code between similar applications, while still being formally split into smaller parts in a way that makes the humans feel like it's not one big block.

JoelBonetR 🥇 • Edited

But a monorepo also adds complexity to branch management, plus pipelines that can take much longer than expected, which is not so good while trying to keep a live develop environment, and it can break at any time.

One way to solve this, of course, is having a live env for each branch, which is expensive and adds a lot of complexity to the infrastructure; and even that doesn't solve the issue of your code depending on another piece of code that some other dev is building, without adding yet more complexity to link your branch env to your mate's.

Chris Ochsenreither

“Java is Beatles” - LOL!

All valid points. There's a sentiment among some developers that they don't trust serverless because it makes them dependent on the cloud provider, whereas if they keep the monorepo they can just start up a server with someone else, change their DNS entries, and walk away.

Templar++

I was thinking about this problem, and while I get your point that using old technologies limits the people you can bring to the project, as new hires may not know them or may not be willing to use them, I also think you miss a key point here. The most expensive part of a software project is its maintenance. A monolith is easier to maintain, and you need fewer DevOps engineers, as you will be maintaining a single architecture. If you have multiple teams maintaining different platforms, each written in its own language, you will need more people to support that as well.

Bernd Wechner

I think it might help if you defined what, to your mind, a monolith is.

I admit to some confusion there. You allude to a 1500-line file as a possible monolith. But then I see files about that size often enough. They never struck me as "monolithic" (albeit on the large side). I had a Django models file larger than that until recently.

But if large files are your concern, why not talk about large files? It can't be that simple, and I sense you're talking about something broader.

But what are the alternatives? I mean, my large models file got to be a bother to navigate, so I split it into individual files for each model and have instead, say, a dozen files with a hundred or two lines apiece.

But the code hasn't shrunk? It's just structured differently.

Then I work on a C++ project in which almost everything is in one folder. Like 1000 files in one folder. Egads.

But structuring this, by grouping conceptually similar files into subfolders, is just restructuring and does not reduce size.

So I'm left wondering what you mean by "monolith".

I thought maybe the Linux kernel, which is often described that way. But that's a very particular context, and again I can't help suspecting you have a broader idea of "monolithic".

But if it's just size, then I am stuck again, because a given job demands a given size. Big, complex jobs grow large code bases. Small simple jobs demand only small code bases.

Perhaps, then, you are alluding to a lack of independent units with interfaces? For example, Python code to achieve a given job is often much smaller than VBS code simply because so many packages already exist that have, in a sense, implemented the component jobs.

Anyhow, my point is simply that I'm not sure what you mean by monolithic, I guess.

Bruno Noriller

I see your point, but this is probably more a problem with how I wrote it.
The 1500 lines was an example of the difference between a single huge file and multiple smaller ones.

In this case, let's say you have something about people (customers, employees...), and then something about products (prices, stock, transport).
You could have everything in a "single file", but it would make more sense to split it into multiple files and folders.

My point is that even splitting it into folders is maybe not enough.

It's not just about organization or the lack of it; the cognitive load of a huge application with everything in one place vs multiple smaller projects, each tackling one of those things, is different.

Sergiy Yevtushenko • Edited

Looks like you're using the word "monolith" for any code you don't like (for whatever reason).

Just keep in mind that "monolith" and "microservices" can be different packaging options for the same application. With this in mind, you'll notice that all the problems you've mentioned don't belong to the "monolith"; it's a matter of project organization. A whole system can't be good or bad depending on how many deployable artifacts you generate.

Another (tightly related) issue: design patterns used in microservices inherently enforce bad practices and make code harder to support and maintain. The circuit breaker and Saga patterns, for example, enforce mixing business logic with low-level details (connectivity issues in the case of the circuit breaker, detailed transaction/rollback management in the case of the Saga pattern).
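
A rough sketch of that mixing (hypothetical service names and a hand-rolled breaker rather than any particular library): the "get customer" business call has to know about failure counting, open/closed state and cooldowns that have nothing to do with the business rule itself.

```typescript
type Customer = { id: string; name: string };

// Connectivity concern: counts failures and short-circuits calls for a
// cooldown period once too many of them fail in a row.
class CircuitBreaker {
  private failures = 0;
  private openUntil = 0;

  constructor(private maxFailures = 3, private cooldownMs = 10_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (Date.now() < this.openUntil) {
      throw new Error("circuit open: skipping call");
    }
    try {
      const result = await fn();
      this.failures = 0;
      return result;
    } catch (err) {
      if (++this.failures >= this.maxFailures) {
        this.openUntil = Date.now() + this.cooldownMs;
        this.failures = 0;
      }
      throw err;
    }
  }
}

const breaker = new CircuitBreaker();

// Business concern: fetch a customer. It now has to be routed through the
// breaker, so transport-level details leak into the business code path.
export async function getCustomer(id: string): Promise<Customer> {
  return breaker.call(async () => {
    const res = await fetch(`http://customers.internal/customers/${id}`);
    if (!res.ok) throw new Error(`customer service returned ${res.status}`);
    return (await res.json()) as Customer;
  });
}
```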

John Peters

Agreed, they are a bad idea. Reason? Visual Studio Code has 'Go to Definition', which is file-agnostic. Too many times in JavaScript code, I've seen mixing of concerns. When that happens, scrolling over all of creation and word searches become untenable. Besides, in JavaScript, classes are first-class citizens.

Guillermo Prandi

I'm all for long source code files (even 1500 lines) if the alternative is splitting the code with such granularity that trying to figure out what ONE function does involves navigating dozens of files. Code review becomes a nightmare. This is rarely the case in front end projects, and less of a nightmare nowadays with good editors like VSCode and such, but depending on the case, long files are justified. For example, device drivers (for microcontrollers, like UART, SPI, etc) tend to be contained within one file.

Bruno Noriller

I partially agree with you, because as with anything... it depends.
Maybe the best way really is a 1500+ line file, but more often than not... 1500+ lines is too much for a single file.

Bruno Noriller

While I agree that it can be modular and easy to upgrade... what if it's AngularJS? Or the next AngularJS?
It can have an awesome structure, but people will still be like... I have to work with that?
From what I understand, ROR developers are getting harder and harder to find... because it's old. As good as it can be, the bottleneck becomes the developers... more than any technical side.

Ruby Valappil

Nice Article!

A few things that I would differ on,

Microservices vs. monoliths should be a choice made based on the project requirements; one need not declare the end of the other.

Microservices are supposed to be fast in dev and deployment, but we have been doing that with monoliths for years. CI/CD existed before DevOps, just like modularity existed before microservices.

Even with microservices, if a service depends on other frameworks like Kafka etc., it gets difficult to upgrade if those frameworks don't support the upgrade.

The biggest advantage I can think of from my experience is convincing the release and business teams. They are much happier when a small chunk is moving to production instead of the entire application; it's easier to get approval and needs fewer discussions across multiple teams.

Bruno Noriller

But you see, the people side should be a factor, and a big one at that, of the project requirements.
Will you have people willing (and happy) to work on the code base (with the language, framework and other quirks as they are today) in 5 years?
Adonis v4 and v5 have completely different developer experiences, just a few years apart. And in a few more, the gap will only get bigger.

Josh Ghent

I sort of disagree with most points.
You can have large files in microservices as well. And 1 service to update is far better than multiple.

Bruno Noriller

I understand what you're saying, but my point is: for how long is updating one better than updating multiple?
Today a new framework that will dominate a big share gets started, tomorrow a new version of another one will be released, and yesterday we were still using something that came out years ago and is becoming harder and harder to maintain, not because it's bad, stale or anything like that... but because people just don't want to learn it anymore.