DEV Community


Is it Time to go Back to the Monolith?

Shai Almog on February 14, 2023

History repeats itself. Everything old is new again and I’ve been around long enough to see ideas discarded, rediscovered and return triumphantly t...
Johannes Kettmann

Great read. I'm curious about the reasons why you say this:

Python and JavaScript. These two languages are great for small applications. Not so great for larger ones.

Shai Almog

They both have great tools to start out with very little code. They both allow for untyped development which works well when starting a new project. But as you hit the 10k or 100k lines of code, the complexities shift.

E.g. You can no longer hold the project in your head and need both order and team discipline. It's harder to search the code for what you need since the code has higher density. Every line does more things and there's more implicit behavior.

While the Java API and ecosystem are huge, the language itself is relatively small and strict. Both of these can be painful for smaller projects, but they are a huge boon when the lines of code rise. E.g. look at the following code:

var z = x + y;

Ignore for a second that I can instantly see the types and everything in the IDE. I can already make several assumptions here that I might not be able to make in other languages:

  • The operation either concats a String or adds two numbers of the same type
  • z will be of the same type as x or y
  • All variables are declared in this method or class, or in rare occasions in the base class, of which there can be only one

This might not seem like a huge deal when looking at a single line of code, and it isn't. But there's a compounding impact that increases as the number of lines increases.
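To make the compounding effect concrete, here is a slightly expanded sketch (plain Java, illustrative names): the compiler pins down what `+` means on every line from the declared operand types alone.

```java
// Illustrative only: every `var` is resolved at compile time from the
// declared types of its operands, so `+` is unambiguous on each line.
public class TypeInference {
    static String concat(String x, String y) {
        var z = x + y; // both operands are String -> string concatenation
        return z;      // z is provably a String at compile time
    }

    static int add(int x, int y) {
        var z = x + y; // both operands are int -> numeric addition
        return z;      // z is provably an int at compile time
    }

    public static void main(String[] args) {
        System.out.println(concat("10", "20")); // 1020
        System.out.println(add(10, 20));        // 30
    }
}
```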

Alex Lohr

That's why at least in the JavaScript world, static typing (e.g. using TypeScript, Flow or Hegel) and mono-repos with well-organized smaller packages are prevalent in larger projects.

There's no inherent shortcoming in these languages preventing you from using them for large-scale applications. Also, as Atwood's law states: "Any application that can be written in JavaScript, will eventually be written in JavaScript."

Shai Almog

Yep. TypeScript is great and a big improvement over JS. But here the surrounding environment is still a bit limiting. E.g. modularity and isolation aren't as strict as they are in the JVM world. The general enterprise infrastructure is also far more challenging.

If you compare Spring Boot to NodeJS the difference is stark. Spring is far more vast and elaborate, it provides more facilities for isolating and distributing application components. The world on top of Node doesn't have these concepts as far as I know.

Alex Lohr

Spring Boot is a web framework for Java, NodeJS is a platform to run JS on the server. You're comparing apples to oranges here. Also, NodeJS is fully capable of isolation and distribution using threads and sockets - and abstractions like cloudflare workers make them simple to use.

Shai Almog

Spring Boot is a HUGE platform in which the web is a small optional part.

Workers are super cool but that's not what I'm talking about. I'm talking about bean scopes, IoC, dependency configurations, etc.

Alex Lohr

Not sure about those "bean scopes", but it looks a lot like separated contexts, wrapped in decorators for good measure. Inversion of control and dependency configurations we certainly have.

Shai Almog

No. It's complete management of state, and those proxies include the tremendous hidden power of declarative computing. Everything from transactions, isolation, role-based security, retries, etc. can be implemented declaratively thanks to those proxies.
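The mechanism behind those proxies can be sketched with a plain JDK dynamic proxy (a toy with illustrative names, not Spring's actual implementation): a cross-cutting concern like retry is layered on declaratively, without the business class knowing about it.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // The business interface knows nothing about retries or transactions.
    interface PriceService { int price(String sku); }

    static class FlakyPriceService implements PriceService {
        int calls = 0;
        public int price(String sku) {
            if (++calls < 3) throw new IllegalStateException("transient failure");
            return 42;
        }
    }

    // Wrap any implementation so every call retries up to `attempts` times
    // (assumes attempts > 0) -- a toy version of declarative retry behavior.
    @SuppressWarnings("unchecked")
    static <T> T withRetry(T target, Class<T> iface, int attempts) {
        InvocationHandler handler = (proxy, method, args) -> {
            RuntimeException last = null;
            for (int i = 0; i < attempts; i++) {
                try {
                    return method.invoke(target, args);
                } catch (Exception e) {
                    // invoke() wraps the real failure; unwrap it for rethrow
                    Throwable cause = e.getCause() != null ? e.getCause() : e;
                    last = new RuntimeException(cause);
                }
            }
            throw last;
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        PriceService svc = withRetry(new FlakyPriceService(), PriceService.class, 5);
        System.out.println(svc.price("ABC")); // succeeds on the third attempt: 42
    }
}
```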

Sorry I wasn't clear about the IoC. I meant the breadth and scope of the implementation in Spring: the pointcuts, the context and constant injection. There's a level of detail that's fantastic here. An admin can override injected values from over a dozen (if I recall correctly) sources. This is well documented, including the priorities.

Notice this isn't a slight against node. Spring is the 800 pound gorilla that took the Java EE features to the next level. I think Node chose to go in the exact opposite direction. Especially due to its asynchronous nature that made a lot of these features impractical.

Nikola Stojaković

You’re right. People try to present TypeScript as an ultimate solution for all the issues JavaScript has compared to other solutions when building enterprise apps. While I love TypeScript, I understand its shortcomings (or more precisely, the shortcomings of the JavaScript ecosystem). Also, stating that everything that can be written in JavaScript will eventually be written in JavaScript is a reason to worry.

Corentin Bettiol

Python can still be useful for big projects (Django still powers Instagram for example).

The rigor brought by using a framework and some other tools (black for formatting, mypy for typechecking) may be a key.

Shai Almog

Sure. It's a great programming language. It's just easier to write many smaller apps with it instead of one monolith.

Charles F. Munat • Edited

And this is precisely the reason that no project should ever have 100k lines of code. Staying under 10k would be even better.

The goal in many organizations appears to be to bulk up the code base as much as possible. The maximum amount of code in any given application should be the amount that can be held in one (1) developer's head at once. As soon as you have to start using swap space, you're in trouble.

The solution is micro-apps, each owned by a single developer. Then communication between those apps using the actor model. Distributed apps, essentially, even if only on the same OS. Once your "app" is a set of black boxes, who cares where they run?
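The actor idea above can be sketched in a few lines of plain Java (toy names, single process; a real system would use an actor framework or distribute over sockets): state is private to the actor, and only messages cross the boundary.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ActorDemo {
    // A minimal actor: private state, a mailbox, one worker thread.
    // Other code never touches the state directly -- it only sends messages.
    static class CounterActor {
        private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
        private int count = 0; // touched only by the worker thread

        CounterActor() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) mailbox.take().run(); // one message at a time
                } catch (InterruptedException e) { /* shutdown */ }
            });
            worker.setDaemon(true);
            worker.start();
        }

        void increment() { mailbox.add(() -> count++); }

        // "Ask" pattern: request the value with a message, block for the reply.
        int get() {
            BlockingQueue<Integer> reply = new LinkedBlockingQueue<>();
            mailbox.add(() -> reply.add(count));
            try {
                return reply.take();
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
        }
    }

    public static void main(String[] args) {
        CounterActor counter = new CounterActor();
        for (int i = 0; i < 1000; i++) counter.increment();
        System.out.println(counter.get()); // 1000 -- no locks, no shared mutable state
    }
}
```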

This also permits an "ownership culture" where devs own their own code and no one else touches it (though others might review it).

It also wouldn't hurt to eliminate the at least 50% of code and features that are gratuitous – that no one needs or wants, and that not only bulk up the code and make it incomprehensible, but bloat the interface as well, harming UX. And virtually every "enterprise" app is filled to overflowing with this crap, if we're honest.

Frankly, this was the initial promise of OOP, unfortunately utterly abandoned in practice.

Shai Almog

If you read the post you know my answer. This didn't solve the problem. This punted the problem to a new location. Business applications are large and complex, conceptually. So now we have to manage 100+ apps. Each built by a different developer that hopefully did a good job. Then we have to manage all the interconnect, scale and deployment.

You didn't remove complexity. You just moved it to a different place which guess what: increases your cloud costs while reducing performance.

Worse. What if you need to fix a vertical feature? A new regulation that comes in that needs to go across the board? Now I need to go to 30 different microservices and hold all of them in my head one by one?

A monolith might have 1M+ lines of code. But I can do the same thing within the IDE without knowing all the code. I can invoke a method without a network interface and without a circuit breaker. The deployment is trivial.

I don't know about the types of projects you deployed, but I can tell you that from speaking to companies over the past few years, it seems microservices made things much harder for all of them.

Charles F. Munat

Actually, I stopped at your clickbait title.

If you read my response, then you know my answer. I wasn't only recommending an actor model approach. I said, clearly (or so I thought) that most enterprise apps are needlessly, gratuitously, overly complex and feature bloated. I said that I thought that we could reduce that code by at least 50%.

I was being nice. The vast majority of apps out there solve no problems, are needed by no one, and simply waste resources and developer time. They exist because our economic system demands that we churn out more and more "stuff" thoughtlessly and pointlessly.

Suggest that we cut the number of lines of code in half industry-wide and the response you'll get is "what about our jobs?" So the need that the code actually fulfills is employing devs and concentrating capital as we burn through the last of our planet's resources.

It is funny how many people who propose solutions and approaches never seem to consider simply building less. It's always about our insatiable need to build more, faster. Yeah, pay no attention to the man behind the curtain.

I said nothing about microservices, so that's a straw man. And your comment about 1M+ lines of code and using the IDE is actually making my argument: the code outside of your micro-app can essentially be viewed as black boxes. You only need to know what API to call. How is a set of micro-apps any different IDE-wise?

Most absurd is your comment that we have to manage 100+ apps each built by a different developer that "hopefully did a good job". Mixing that code up and having devs working all over the place and stepping on each other's code will somehow make better devs or make them do a better job or will make it easier to spot the devs who aren't doing a good job? Please.

And bounded code blocks is worse than smashing those apps into pieces and then stirring them all together so that devs are working on the same code and are all over the place? You still have the same number of devs and lines of code, but now they are essentially spaghetti. I fail to see how my suggestion makes things worse.

As for cloud costs, you don't even understand my argument and already you are rushing to deployment. I'm talking strategy and you are arguing tactics. "The cloud" is just another panacea: a shibboleth. Assumed to be the answer without really questioning why.

Essentially, you just repeat the same nonsense that enterprise devs spout whenever they are challenged. Do you actually have any new ideas, or are you just adding to the background noise?

As for my experience, I have plenty, but that's the biggest straw man/red herring of all. Ideas either work or they don't – on their own merits. In my "experience", those with the most experience are often the least willing to consider new ideas. Not sure how that's an advantage.

It sounds to me like you don't actually think there is a problem at all. So what was the point of your article again? What, precisely, were you trying to solve?

Shai Almog

I understand the fatigue that comes with the repeating trend cycle, I've been in this industry for many decades and get that. However, if you're unwilling to have your opinions challenged then you're just using traffic to my post to shout your opinion. You have your right to that but I don't think that's a good argument.

Had you taken the time to read about the modular monolith you would have learned that you can split a monolith and get most of the benefits of microservices while still retaining the benefits of the monolith.

Having worked at banks, telcos and large startups, I can say it's just impossible to write less. In fact we tried moving to microservices and ended up writing a lot more. Every microservice within the environment needed logic that ended up replicated all over. This is inevitable due to the inter-dependencies.

Managing the production environment was a nightmare. Guaranteeing that eventual consistency will be reached isn't an option for a bank... Imagine if a user and his spouse withdraw a sum in two branches at once. Some problems are just inherently big.

Charles F. Munat • Edited

It's clear that you don't want to engage with your readers. I guess you think the comment section is for people to sing your praises.

I haven't shouted anything, let alone my opinion. That you say that I'm unwilling to have my opinions challenged says more about you than me. I responded to your arguments – the few that actually addressed mine. You create straw men instead.

Case in point: you continue to talk about monolith vs. microservices as if I had come out in favor of either one. But I proposed a third path, one that hasn't yet been tried anywhere I've seen, but which you reject out of hand without offering real argument other than that you've managed to stay in this field for multiples of ten years and you don't like it.

It is pretty clear that your world is black and white: it is either a monolith or a microservice. Nothing else is possible.

And you provide zero evidence or even a good argument to support your position. Take, for example, your comment that with banks, telcos, and large startups it is "just impossible to write less [code]". It is difficult for me to imagine a more ridiculous statement. Do you proofread before you post?

Anyone reading this who spent even one day coding in a bank or telco or large startup must be rolling on the floor laughing. Are you for real? I guess you only worked in banks, telcos, and large startups whose code was perfectly optimized and contained zero tech debt. Oh, please. Name names! Everyone should know about these amazing organizations.

I doubt seriously, though, that that generalizes to your typical bank. In my own personal experience, FWIW, with banks, utilities, academia, small and large businesses, social media, and more I have yet to find a code base that wasn't enormously inefficient, overly complex, and loaded with tech debt (about which the devs bitched ceaselessly).

I'll leave it to anyone with the stomach to read this far to decide for themselves. Maybe others can post examples of zero-waste code bases. But I get it: microservices are too hard for you and you hate them. I'm guessing that you did them wrong. Why not just be honest about it in your next article?

Feel free to misrepresent my comments in yet another snarky reply. I'm bored with this. Unless you have some actual evidence to support your views, I'll look for greener pastures.

Shai Almog

You haven't read the article... I literally discussed a 3rd way. That is featured in the cover image of the article (although it didn't fit in the so called "clickbait title" and dev.to has no subtitle concept).

There is no "evidence" to support architecture choice. It's tradeoffs and experience. But if you haven't read the post how the hell do you know what I claimed in it?

Read my bio. I worked in multiple banks, telcos, Fortune 100s and startups over the past several decades.

I think we have a short circuit in communication. I don't think I was snarky. I'm amazed you would call my arguments straw-man arguments when you completely ignored the substance of my post.

Nixon

Should have been: these languages are famous for building loosely coupled systems, unlike tightly coupled ones.

Olivier THIERRY

A Node.js framework such as NestJS gives developers a developer experience close to that of Spring.

Rasmus Schultz

Use microservice patterns only for features where you know the extra investment is necessary for scalability - it's usually a few, isolated features and endpoints.

Use monolithic patterns for things like back-office solutions and admin pages, where you know the number of users won't grow beyond the predictable.

There is no reason to constrain yourself to exclusively monolithic or microservice patterns - apply the design that makes sense for the problem you're trying to solve.

But yes, default to monolithic patterns - it's cheaper and simpler. If your monolith is well designed, it usually isn't too difficult to extract that one isolated feature or endpoint to a microservice, if that becomes necessary.
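One way to keep that extraction cheap, sketched in plain Java (hypothetical names, a sketch rather than a prescription): route cross-feature calls through an interface, so the in-process implementation can later be swapped for a remote one without touching callers.

```java
public class SeamDemo {
    // The rest of the monolith depends only on this seam.
    interface Invoicing { String invoice(String orderId); }

    // Today: a plain in-process call, no network, no circuit breaker.
    static class LocalInvoicing implements Invoicing {
        public String invoice(String orderId) { return "INV-" + orderId; }
    }

    // Tomorrow, if scale demands it: same contract, remote transport.
    // (Body is a stand-in; a real version would call the extracted service.)
    static class RemoteInvoicing implements Invoicing {
        public String invoice(String orderId) {
            return "INV-" + orderId; // e.g. an HTTP call in a real system
        }
    }

    // Callers can't tell which implementation they got.
    static String checkout(Invoicing invoicing, String orderId) {
        return invoicing.invoice(orderId);
    }

    public static void main(String[] args) {
        System.out.println(checkout(new LocalInvoicing(), "1001")); // INV-1001
    }
}
```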

Shai Almog

I mostly agree, but I'm not 100% sold on the scalability advantage of microservices. I think it's over-hyped without proof.

Rasmus Schultz

Oh, it's definitely over hyped.

But things like Netflix, Amazon and Twitter run, and run reliably - these could not have been realized with a monolith, I don't think. At least, I don't see how. So that's "proof", at least in the "the proof is in the pudding" sense.

Also, talking about scale, we're really talking about 3 different things:

  1. Scaling in terms of compute: that's the obvious one.
  2. Projects that need to scale in terms of complexity: if you have two extremely complex subsystems, and the integration between those is relatively simple, I think microservices can offer some beneficial separation there.
  3. Businesses that need to scale in terms of team size: if you have to produce an extreme amount of software, you might have to scale the production by letting individual teams own individual subsystems. (and hopefully this coincides with #2, meaning those subsystems have relatively simple integration points - otherwise it's likely to go horribly wrong.)

That said, almost no projects require those levels of scale.

Shai Almog

Facebook and Twitter used to be monoliths, but as I said, when you reach that size things change. They spend a lot on these microservices and have huge dedicated OPS/SRE teams to run them.

I think if modular monolith was an option when all of these companies picked microservices, they might have picked that option.

The module approach supports scaling the teams easily since these become separate small projects. The compute scaling is the main thing I had a problem with. I think that dollar for dollar, scaling a modular monolith will be much cheaper if only due to reduction in observability costs.

Rasmus Schultz

Yeah, it's expensive and complex, I'm not denying that - I think there are very few cases where it's justified, but I don't like to exclude any options. Your compute bill isn't the only factor.

Peter Harrison

The main problem with a traditional monolith is that it builds in the data model and business rules and tightly couples them to capabilities such as storage, integrations and messaging. Micro-services have had a similar issue, in that while they separate concerns they do so by segregation of the domain, rather than based on the capabilities. There is another approach, which is to expel the domain from code and store it in the DB or configuration. Build or buy general capabilities, and have each one handle only those specific responsibilities.

dev.to/cheetah100/capability-drive...

Shai Almog

Interesting.

Isn't that back to the n-tier layered approach?

You gave a bit of a simplistic example with 2 nodes. Can you give a more realistic example here?

How does that work with authorization and authentication?

Peter Harrison

I'll give an example of a live environment I'm running now. There is a Spring Boot system which exposes a data access API. The domain is a runtime artifact, much like a DB schema. This means you can dynamically add additional tables, fields and constraints at runtime. Just like a DB, once you add it the API is ready to roll, along with UI elements that allow you to perform CRUD operations.

Security is built in by default. When you create a table you are assigned as the owner, but you can then assign access rights to others, either directly to users or to entire teams. This mechanism uses Spring Security with custom authorizers. Simple security is READ/WRITE/ADMIN, but there are also more advanced options to control what fields a team can see, or limit the records they can see based on some constraint.

You can specify relational constraints, reflecting DB foreign key constraints, and the API will check the constraints before saving. Only thing is that this system can connect to multiple databases at the same time, so tables may not exist in the same database. It is possible to set up constraints across them.

The APIs validate the data, ensuring it is the right type, that key constraints are checked, and that custom business rules are checked.

Business rules run on every modification of data, on an event basis. Business rules can trigger a wide range of plugins to perform specific actions. This includes running Python or JavaScript scripts which can be uploaded at runtime, communicating with other systems via a range of plugins, transforming data using transformer plugins and so on. At one point we also had the web user interface designed using runtime tools.

It was designed from the outset to be scalable, and uses messaging to distribute rule execution throughout the cluster.

Now, there are three services participating: the Spring App, the MongoDB Server, and the JMS Queue Server. Each is set up as a cluster for high availability. Each can be scaled up independently. The Spring App is kind of like a roll-your-own Lambda solution, in that the dynamically specified Python or JavaScript runs on any node of the Spring App cluster. All Spring App nodes service API calls as well as process events.

It is monolithic in the sense that it looks similar in structure to the bad old days, rather than a complex network of micro-services, but it is different because it expels the domain, and in the process the complexity, into runtime configuration stored in the DB.
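A toy sketch of the "domain as runtime artifact" idea (my own illustrative Java, not the actual system described above): the schema is plain data, so tables and fields can be added at runtime, and validation consults that data rather than compiled classes.

```java
import java.util.HashMap;
import java.util.Map;

public class RuntimeSchemaDemo {
    // Schema is data, not code: table name -> (field name -> expected type).
    static final Map<String, Map<String, Class<?>>> schema = new HashMap<>();

    // Add a "table" at runtime -- no recompile, no redeploy.
    static void defineTable(String table, Map<String, Class<?>> fields) {
        schema.put(table, fields);
    }

    // Validate a record against the runtime schema before "saving" it.
    static boolean validate(String table, Map<String, Object> record) {
        Map<String, Class<?>> fields = schema.get(table);
        if (fields == null) return false; // unknown table
        for (Map.Entry<String, Class<?>> field : fields.entrySet()) {
            Object value = record.get(field.getKey());
            // required field must be present and of the declared type
            if (value == null || !field.getValue().isInstance(value)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        defineTable("customer", Map.of("name", String.class, "age", Integer.class));
        System.out.println(validate("customer", Map.of("name", "Ada", "age", 36)));   // true
        System.out.println(validate("customer", Map.of("name", "Ada", "age", "36"))); // false
    }
}
```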

dev.to/cheetah100/micro-nightmares...

Shai Almog

Thanks for the detailed explanation. If I understand correctly this database layer sits on top of something like MongoDB. Right?

Isn't that a replication of what a database engine offers already when combined with a good caching layer?

How does the team division work?

Do you have teams for every "tier" or vertical teams that edit the entire thing?

Peter Harrison

It sits on MongoDB, but there is an abstraction layer which allows us to use JDBC as well. It can even download an existing schema and generate the model config. We had separate subteams for UI and back end. Databases don't expose data via REST, nor do they implement the flexible granular permissions we need. It also simplifies the process of querying data. We have BAs configuring the actual customer solution. Similar to a low-code solution.

Shai Almog

Thanks for taking the time to answer. Interesting read.

Brian McBride • Edited

In recent years SQL has made a tremendous comeback from the dead

Why do you think this is? I'll hazard a guess - ORMs. Many people are looking to go fast, and Spring Boot, Prisma, .NET, and so many other frameworks have nice and easy-to-use legacy ORMs that only work with SQL and maybe MongoDB.

jQuery and PHP are still used in massive amounts. Are they the right tool for the job? Rarely. Are they the easiest? Probably. WordPress is super easy to start.

This is what I run across a lot with Spring Boot and .NET developers. The frameworks encourage a monolithic approach. Why learn a new DB query interface when you can keep using the ORM? That's why SQL is so strong.

To be fair, SQL is fine. As a DB language, great. The idea of picking the correct database for the job is the key here. With the cloud, we have so many great choices that are fully managed. Pick the correct DB, don't shove everything into one. Once you are performing complex joins and running stored procedures, your future will have scaling issues (assuming you scale).

As for the rest of the code: follow domain-driven design. Build clustered services in your domains. If you HAVE to, you can gang them all into one service. But why? It is so easy to deploy to fully managed serverless platforms. Pay for what you use. I think I have a few thousand experimental services on GCP Cloud Run (fully managed Kubernetes), and I have a near-zero cloud bill. My production services on any of the cloud providers running serverless have a fraction of the cost of ownership of keeping an HA server up and running.

It is so easy to deploy to these services, so easy to use an API gateway and leverage the tools the cloud providers offer out of the box.

Also, when you deploy a service that has only a few endpoints and each one of those is highly functional in nature, testing is way easier. You can manage SLAs for each service based on its needs.

On the other hand, the internet was built on monoliths.

Our society was built on bloody crusades and slavery, and to this day, it is fed by unsustainable resources like coal. Building a monolith is fine, but there is a reason we have modern architecture. I know this analogy is extreme and over the top. The point, though, is that just because we did it one way and it worked doesn't mean that it continues to be the way to do things moving forward.

Python and JavaScript. These two languages are great for small applications. Not so great for larger ones.

This is a fallacy. You would be correct in stating that Python can be slower for O(n) calculations. I wouldn't build a game engine in it. JS running on Node is quite a bit faster. Of course, if you are using VSCode then you are using "JavaScript" to write your other code. Or is VSCode a small application?

Then we have that issue. "Small"? Well, part of the whole idea behind a microservice is that you have an atomic piece of code that does one thing with minimal side effects (functional programming). I'd maintain that huge Java classes that try to do too much are far more of a problem than Python or Javascript. In microservices, Node has outperformed Java in CRUD applications too.

In recent surveys, Typescript remains one of the fastest-adopted languages and continues to have a high satisfaction score. That is impressive as typically, the more adopted a technology becomes, the lower the satisfaction.

No matter what language you use, you can build enterprise systems of any scale. I think there are various returns on time investments. All things considered, I've seen Node/Typescript developers create services that are fully tested and deployed through CI/CD and meet the required SLAs in less development time than other tech stack choices. This is mainly attributed to the open-source community's sheer number of available libraries to bootstrap an application. Obviously, a skilled GoLang developer will code faster in Go than in Typescript. I happen to be in a unique position to see many companies execute many projects and have seen the final budgets on the completed work.

Shai Almog

Interesting.

This is what I run across a lot with Spring Boot and .NET developers. The frameworks encourage a monolithic approach. Why learn a new DB query interface when you can keep using the ORM? That's why SQL is so strong.

I have a few thoughts on this. First off, most NoSQL is MUCH easier to use than any SQL database, even with JPA. Mongo is trivial. SQL DBs hide a lot of complexity under the surface, so simplicity is often misleading.

I've seen a lot of NoSQL projects go sideways. People enjoy setting up a free-form schema and find out they can't query it afterwards. But this is just half the story. DBs like QuestDB solved the SQL performance problem. You can get a high-performance timeseries DB and still have proper SQL.

Not every technology is worth saving. Wordpress should die. I doubt it will. But I can't stand it myself.

It is so easy to deploy to fully managed serverless platforms. Pay for what you use. I think I have a few thousand experimental services on GCP Cloud Run (fully managed Kubernetes), and I have a near-zero cloud bill.

This is surprising to me. Do they get no traffic?

I had a similarly low bill on Google's cloud, then it jumped to a huge number overnight. It took me a couple of days to notice as I wasn't glued to the dashboard. They had no decent tools to analyze the costs and our startup nearly went bankrupt with zero support from Google.

Also, how do you keep track of so many experiments? Do you think that's practical?

How do you debug inter-dependencies?

If something organizational changes (regulatory requirements etc.) how do you make a universal change?

Also, when you deploy a service that has only a few endpoints and each one of those is highly functional in nature, testing is way easier. You can manage SLAs for each service based on its needs.

This I strongly disagree with. Yes. You have perfect tests... Great. But the interconnect between the services becomes impossible to test outside of production.

Integration tests are the most important tests there are; since you don't have a reasonable way to simulate proper production, you can't do integration tests and things fail.

Yes if your functions 100% follow functional programming paradigms everything should work in theory. But your entire stack isn't a theoretical lab. Failure is unexpected by definition and cascading bugs are the worst bugs.

Python and JavaScript. These two languages are great for small applications. Not so great for larger ones.

This is a fallacy. You would be correct in stating that Python can be slower for O(n) calculations.

This wasn't about performance. This is about managing types and tools within the language/platform that make it work better for larger projects.

This isn't an attack against any of those languages. Each language has a domain for which it was optimized. These languages are better optimized for writing code faster and keeping code smaller. That's fine and it's an advantage. But everything has a tradeoff.

Indeed TypeScript helped JavaScript a lot and made larger projects more practical. Notice I specifically said JavaScript and NOT TypeScript. But it's still not at the scale of Java in terms of ease of building a monolith, and there are a few reasons:

  • TS Developers don't want to - it's a cultural thing
  • Verbosity - when you have a lot of code, boilerplate makes navigating the code easier. You have hooks that make it easier to find elements in the code
  • Strictness - Java is way more strict than TS
  • Underlying system - TS eventually sits on top of JS which is a leaky abstraction
  • Dependencies - JS/TS dependency system is a mess. As you scale it becomes a bigger problem
  • Asynchronous programming makes it harder and makes large projects less beneficial
  • Lack of scale features - proper modularity, scoping...
Brian McBride

People enjoy setting up a free-form schema and find out they can't query it afterwards.

I don't understand this. If you don't know your RDBMS schema, you can't query it. NoSQL doesn't mean you don't define a schema. Just like if you create an API, it doesn't mean you don't define an API spec. It doesn't matter what database you use; at some point, you have to do data architecture.

My job is to "modernize" platforms. So I get to see a lot. I will say that the "schemaless" aspect of NoSQL has never been an issue with my clients.

SQL itself is just a query language. You can find variations of it in NoSQL databases too. Take Azure Cosmos. That is a super cool database that is globally consistent and hyper scales super well. It is NoSQL with a SQL interface. ArangoDB is a graph database that has a version of SQL (with graph traversals and a few other things added). SQL itself as a query language is fine.

DBs like QuestDB solved the SQL performance problem

RDBMS performance isn't the issue. It is "right database for the job." There are plenty of high-performance RDBMSs. You will pay more to have one than you will for something like DynamoDB or DocumentDB.

I had a similarly low bill on Google's cloud, then it jumped to a huge number overnight

Not unique to GCP, as this can happen in ANY cloud provider. You need to set billing alerts if you are on a budget. A recursive service that you deployed without sufficient testing could cost a lot. If you end up with a suddenly high bill due to abuse, contact the provider. They will almost always credit you back. I've seen Google credit bad developer mistakes too (as long as it is not a repeat offender).

In general, though, serverless is super cost-effective to own. And to answer your question, my experiments don't get a lot of traffic. When a project moves into having actual traffic, it goes into a proper project. I will say that I've had GCP Cloud Run + Firestore services running pretty simple CRUD methods handling around 5mil requests a day, keeping an average 53ms response time with a $255/month bill. That was a NodeJS project. Node handles concurrency surprisingly well, and if you understand how to stream your data through Node, you can gain even more performance.

This I strongly disagree with. Yes. You have perfect tests... Great. But the interconnect between the services becomes impossible to test outside of production.

I 100% agree that loosely coupled services require more discipline. I also disagree with your assessment, though. If you use Stripe's API for payment processing, do you need access to their codebase? No, you just need to know the API contract. It is kind of them if they offer a dev or mock endpoint, but even then, with the API contract you can mock your own source.

When I develop a microservice, if I know those endpoints are fully tested and they meet the SLAs needed by their consumers - then I can feel assured that that atomic part is operational. Say it is a service to get product pricing. It returns pricing, and if it works, it works. I can offer my team a mock endpoint if that helps them develop. Ideally, I'd offer that as something they can run locally on their workstation too. But no matter what, I follow semantic versioning and everyone can feel assured that the pricing API works. That is far better than having them read pricing from a shared database. What happens if I change the pricing schema? EVERYONE has to go and change their code.
Conversely, if I change the pricing API endpoint, I release a v2 and socialize the deprecation of v1. I can use monitoring to see who is still using v1 and assist those teams with their migration if needed. Once v1 is no longer used, I can kill the service.
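A minimal sketch of that v1/v2 flow in plain Java (PricingVersions, PriceV1/PriceV2 and the route paths are hypothetical names, not a real API): both contracts stay live side by side until monitoring shows v1 usage has hit zero.

```java
import java.util.Map;
import java.util.function.Function;

// Illustrative only: two coexisting versions of a pricing contract.
public class PricingVersions {
    record PriceV1(double amount) {}                   // original contract
    record PriceV2(double amount, String currency) {}  // extended contract

    static final Map<String, Function<String, Object>> ROUTES = Map.of(
        "/v1/pricing", sku -> new PriceV1(9.99),          // kept until usage drops to zero
        "/v2/pricing", sku -> new PriceV2(9.99, "USD")    // new consumers onboard here
    );

    static Object handle(String path, String sku) {
        return ROUTES.get(path).apply(sku);
    }

    public static void main(String[] args) {
        System.out.println(handle("/v2/pricing", "sku-1"));
    }
}
```

Consumers only ever see the contract; the storage and logic behind `handle` can be overhauled freely, which is the abstraction point made above.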

Now, here is the thing. In my pricing "domain" I might decide to not just have a database but to put a huge machine learning platform in place that calculates the best price point and discounts to optimize revenue. Maybe that requires that I completely overhaul the way I store and access pricing data. All that is abstracted away behind the "public" pricing APIs. I can iterate and develop without breaking "the monolith" in this service approach.

Failure is unexpected by definition and cascading bugs are the worst bugs.

There will always be bugs. If you write 100% unit test coverage on your code, you will find most of them. And if your teams screw up and break an API, event or other data contract then yeah - life sucks. The tooling is strong now for tracing problems in a distributed system. That said, I can honestly say that it just isn't an issue. I work with so many large enterprises, and cascading failures in a modern microservices pattern are not an issue. It is the cascading failures in the monolith that are common.

This is about managing types and tools within the language/platform that make it work better for larger projects.

I can agree here. I use tools that make distributed architecture so much easier. Deploying multiple containers is just as easy as deploying a single monolith. If anything, it is easier because, by proper practice, each part is atomic and can be adjusted, knowing I won't break the whole.

I'm not trying to be a jerk here by arguing your points. Your last bullets though...

TS Developers don't want to - it's a cultural thing

OMG. I wish. TypeScript developers will often grab things like NextJS, GraphQL, maybe tRPC, etc., and these are cool technologies. They all encourage monolithic patterns, sometimes by accident. At least there is some awareness, like GraphQL Federation.

Verbosity - when you have a lot of code, boilerplate makes navigating the code easier. You have hooks that make it easier to find elements in the code

I don't understand this. Java has wayyyy too much boilerplate, which is partly why we have Spring Boot to abstract that away. I'm not sure code navigation is much of a problem in any language though. Well, lol, PHP can die a death.

Java is way more strict than TS
TS eventually sits on top of JS which is a leaky abstraction

Sure, I've never had a NullPointerException in Java, lol. TypeScript can be set to be pretty strict. And both do their type checking at compile time, so if something does get missed, it leaks through at runtime. But yes, Java is more strict. And many TS developers will turn off the type guards and cheat.

JS/TS dependency system is a mess. As you scale it becomes a bigger problem

Not sure I agree with this either. If you are talking about in-code dependency injection. Well, it just works differently. Modules vs DI. They both have pros and cons.

If you are talking NPM vs Maven/Gradle. Well, I have had some seriously painful JAR conflicts in my past Android development. So painful. In the node world, there are billions of NPM packages, and some are stupid crap. There is more drama in the NPM community because there are so many packages downloaded millions of times a month. But hey, Java had a nice time in the sun again with Log4j. Honestly, I have yet to meet a dependency package manager that I love. If we go talk to a GoLang or Ruby developer, they will tell us why theirs is so much better too.

Asynchronous programming makes it harder and makes large projects less beneficial

I don't understand this comment at all. If you are talking about async in NodeJS? It is straightforward these days. The years of callback hell are long gone. Promises and async/await are super easy. I don't understand the large projects though.

Lack of scale features - proper modularity, scoping...

This is just lack of awareness. Node services can scale, modules and modularity are super easy, and there are tons of scoping and monitoring tools. The reality is, the combined JavaScript and TypeScript languages are the most used in the world by a large margin. Obviously, that is because of the browser. But we are not getting rid of the browser, and Google demonstrated with Dart that developers don't want a better language than JS in the browser.

Final thoughts

I want to be clear here. Java is totally fine. There is a lot of history and a lot of good code in Java. Typescript and Javascript aren't going away anytime soon. They are both changing and growing fast, though, shockingly so. Just look at both languages from five years ago through today.

I've seen enterprises running on C#/.NET, Node, Java, GoLang, Python, Ruby, and many more. All running great. I will say that TS/Node developers, Go and Rust developers, and Kotlin developers are way more likely to adopt new patterns. That is anecdotal.

And while this opinion might change tomorrow as tech changes, right now, the developers who produce the highest quality of code in the shortest period of time are the full-stack Typescript/Node developers that I've worked with. If I need a high ROI on my development investment, then NodeJS is my pick. If I have a legacy Java team, I'm not going to ask them to learn Node, though. However, I will challenge them if they are "too legacy" to learn more modern practices. For example, I'll have Java devs learn RxJava if they have no exposure. It teaches a lot of great concepts that a lot of Java devs don't typically see (streams, functional programming, etc.)

Finally, I appreciate your detailed and thoughtful response. Even when we don't agree, I love when someone can articulate what they experience well. Nothing is better than learning something new or being "convinced otherwise." ;)

Shai Almog

I don't understand this. If you don't know your RDBMS schema, you can't query it. NoSQL doesn't mean you don't define a schema. Just like if you create an API, it doesn't mean you don't define an API spec. It doesn't matter what database you use; at some point, you have to do data architecture.

Right, sorry I wasn't clear. A lot of the schemas are more flexible, especially in document-based systems but also in other NoSQL approaches like Bigtable, etc. You can create queries, but the more complex queries require far more setup than in other approaches.

SQL itself is just a query language. You can find variations of it in NoSQL databases too.

Right, I probably should have used the term relational DBs. This is a newer perception in the non-relational camp. They literally called themselves NoSQL to begin with. But the type of SQL they support is often very limited. The main benefit of relational DBs comes from the BI-level insights you can get, almost for free. When I dealt with some non-relational data we had to jump through hoops to get the information we wanted.

There are tools (BigQuery etc.) but those have their own costs and complexities.

Obviously, I'm not saying that relational DBs are a panacea. Just saying that they're mature and have many benefits we tend to forget.

RDBMS performance isn't the issue. It is "right database for the job." There are plenty of high-performance RDBMSs. You will pay more to have one than you will for something like DynamoDB or DocumentDB.

In many cases the write performance and scale were big selling factors for these types of databases. I get that some jobs make more sense for a document based system. Not to mention spatial data, etc.

Not unique to GCP, as this can happen in ANY cloud provider. You need to set billing alerts if you are on a budget.

Sure, but I really hold a grudge against Google since they pulled that stunt. They didn't have decent billing alerts back then and only had the option to cut off billing completely. Taking the site down was the only option. They gave no support.

Some things improved but to this day searching for AWS billing nightmares comes up with a lot of horror stories. I hope this doesn't happen to you. But I feel that this is giving a blank check to Amazon. I want to know my monthly bill. I'd rather have scale alerts and rush to add VM instances than have billing alerts and lose money. But that's a personal preference biased by trauma.

If you use Stripe's API for payment processing, do you need access to their codebase? No, you just need to know the API contract. It is kind of them if they offer a dev or mock endpoint, but even then, with the API contract you can mock your own source.

Sure. But Stripe worked a lot to give you that level of availability and scale. To do that for every service is a lot of work. I need to wrap everything with API gateways and circuit breakers. Use a discovery service and routing for availability. Store everything separately so the data related to that isn't easily queryable from a single location...

Let's say your billing is a microservice and your accounting is a separate microservice. Their data is in separate locations, so to track which invoices were paid by which form of payment you'd have to write code. Alternatively, just use a SQL join when it's a single DB.
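For illustration, here is roughly what that extra code looks like: an application-side join in plain Java versus the one-line SQL join a shared database would allow. All the names (CrossServiceJoin, Invoice, Payment) are made up for the sketch.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative only: correlating invoices with payment methods when the
// data lives in two separate services instead of one relational DB.
public class CrossServiceJoin {
    record Invoice(int id, int paymentId) {}   // fetched from accounting
    record Payment(int id, String method) {}   // fetched from billing

    // In a single shared DB this whole method collapses to:
    // SELECT i.id, p.method FROM invoices i JOIN payments p ON i.payment_id = p.id;
    static Map<Integer, String> joinInApplication(List<Invoice> invoices,
                                                  List<Payment> payments) {
        Map<Integer, String> methodById = payments.stream()
            .collect(Collectors.toMap(Payment::id, Payment::method));
        return invoices.stream()
            .collect(Collectors.toMap(Invoice::id,
                                      i -> methodById.get(i.paymentId())));
    }

    public static void main(String[] args) {
        var joined = joinInApplication(
            List.of(new Invoice(1, 10), new Invoice(2, 11)),
            List.of(new Payment(10, "card"), new Payment(11, "wire")));
        System.out.println(joined);
    }
}
```

And this sketch ignores the hard parts the comment alludes to: paging both data sets, handling partial failures, and keeping the two fetches consistent.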

I get what you're saying about abstracting individual parts and not disrupting the monolith. 100%. But you'd eventually need to deploy it and then there will be some disruption. You can use a module approach with a monolith to get complete separation where a new version of the module would just "plug into place". You can create staged roll-outs. You can do feature flags in a monolith. All these tools work just the same.

There will always be bugs. If you write 100% unit test coverage on your code, you will find most of them.

I don't find that to be the case. The more coverage I see, the harder it is to write these tests, and the results end up as tests that verify the bugs.

And if your teams screw up and break an API, event or other data contract then yeah - life sucks. The tooling is strong now for tracing problems in a distributed system.

I worked for a distributed debugging vendor. Yes, the tooling is better. But I still wouldn't want bugs to reach production. These tools are also EXPENSIVE. Really expensive.

Observability is 30%+ of the cloud costs. If we can reduce the volume of observed data we can save so much. Not to mention saving the environment...

That said, I can honestly say that it just isn't an issue. I work with so many large enterprises, and cascading failures in a modern microservices pattern are not an issue. It is the cascading failures in the monolith that are common.

That hasn't been my experience but I'll chalk that up to personal biases on both ends. I think a lot of monoliths are old. When they fail people complain they have to fix a bug in the old system. I think we tend to notice it more since it's also a pain. Newer systems still have the developers around. They just fix the respective bugs. I think we'll get a clearer picture with time.

Verbosity - when you have a lot of code, boilerplate makes navigating the code easier. You have hooks that make it easier to find elements in the code

I don't understand this. Java has wayyyy too much boilerplate, which is partly why we have Spring Boot to abstract that away. I'm not sure code navigation is much of a problem in any language though. Well, lol, PHP can die a death.

The verbosity in modern Java is greatly reduced and closer to TypeScript levels. But yes, verbosity has disadvantages and also advantages when it comes to finding something. When I come to a 1M+ LoC project that I'm unfamiliar with, I can find my bearings in it relatively easily thanks to verbosity. I see familiar structures, patterns and keywords. This makes the code easier to digest. I can also grep the code faster because I know the way it will work, so I know how the code will look.

E.g. if a field is private and I want to find code that changes it, I can find the setField method via grep. I don't even need to wait for the IDE "Usage" indexing. I can find references to a specific package because everything is deeply hierarchical in Java. I can get scoping guarantees and enforce limits.
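A trivial sketch of that guarantee (the Account class is hypothetical): because the field is private, the setter is the only write path, so a plain grep for the setter name finds every mutation site in the codebase.

```java
// Encapsulation makes mutation greppable: the field can only change
// through code inside this class, i.e. through setBalance.
public class Account {
    private long balance;   // cents; not reachable from outside the class

    public long getBalance() {
        return balance;
    }

    public void setBalance(long balance) {  // grep "setBalance" finds all writes
        this.balance = balance;
    }
}
```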

If you are talking NPM vs Maven/Gradle. Well, I have had some seriously painful JAR conflicts in my past Android development.

Yes, I was talking about these. Yes, conflicts are terrible in Java too. This is no panacea anywhere. Although I don't care about Go/Ruby, as the scales aren't close to JS/Java scales. When they have anything close to the number of libraries we have, then they can proclaim superiority.

The big advantage Maven has is in coarseness and strictness. JS packages are tiny. Everything gets packaged and you end up with a dependency graph that's immense. There are advantages to code reuse taken to that extreme. But it makes it very hard to understand what the hell is going on and why I have 1,000 dependencies.

Asynchronous programming makes it harder and makes large projects less beneficial

I don't understand this comment at all. If you are talking about async in NodeJS? It is straightforward these days. The years of callback hell are long gone. Promises and async/await are super easy. I don't understand the large projects though.

Async/await improved the way the code looks, but the code is still running asynchronously. This creates subtle issues that get worse with scale.

With Project Loom, Java gets the scale of asynchronous code while using 100% synchronous code. That means I can step over in the debugger and everything happens sequentially, one after the other. I can get a stack trace from production that includes everything I need.
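A small sketch of that style, assuming Java 21+ for virtual threads (the class and method names are illustrative): the task body is ordinary blocking, steppable code, and the executor choice is the only "async" decision.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Loom in a nutshell: each task is plain sequential code that may block,
// yet virtual threads make thousands of such tasks cheap. No callbacks,
// no async/await transformation, and stack traces stay intact.
public class LoomSketch {
    static int fetchPrice(int sku) {
        // imagine a blocking DB or HTTP call here
        return sku * 10;
    }

    public static void main(String[] args) throws Exception {
        // one cheap virtual thread per task (Java 21+)
        try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<Integer> price = pool.submit(() -> fetchPrice(7));
            System.out.println(price.get()); // prints 70
        }
    }
}
```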

This is just lack of awareness. Node services can scale, modules and modularity are super easy, and there are tons of scoping and monitoring tools.

Can a module be in a separate JAR?

Can it be segregated based on class/package?

TS modules are a language feature that only applies to the project. JPMS is a system that applies to the VM and runtime too.
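For concreteness, a hypothetical JPMS descriptor (module and package names are made up); unlike a TS module, this boundary is enforced by both javac and the running VM, and the module can ship as its own JAR:

```java
// module-info.java — checked at compile time AND at runtime,
// not just by the build tool or bundler.
module com.example.pricing {
    exports com.example.pricing.api;  // only the API package is visible
    // com.example.pricing.internal stays inaccessible to other modules,
    // even via reflection unless explicitly opened
    requires java.sql;
}
```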

I want to be clear here. Java is totally fine. There is a lot of history and a lot of good code in Java. Typescript and Javascript aren't going away anytime soon. They are both changing and growing fast, though, shockingly so. Just look at both languages from five years ago through today.

100% and I like Typescript as I mentioned before.

Finally, I appreciate your detailed and thoughtful response. Even when we don't agree, I love when someone can articulate what they experience well. Nothing is better than learning something new or being "convinced otherwise." ;)

Feeling's mutual, I very much enjoyed reading your thoughts on this. Thanks for taking the time!

Ravavyr

honestly, your last sentence hit on it.

The right choice is to build something "properly".
A monolith CAN work, and microservices CAN work too.
How well they work simply depends on what the developer(s) working on it know how to do. If their skills and experience are sufficient and the budget [time AND money] allows for it, they can build something good and it will last.

Nowadays though, few companies want to invest all of those resources properly, so the vast majority of applications built are rushed and full of holes until "legal" comes around, and then everyone panics and applies "accessibility" and "security" as an "OK, we need to do this now".

And this is why EVERYTHING has already been hacked.

Tobias Nickel

I tell people it is OK to build a monolith; you just make it more configurable so you can deploy it like microservices.
Let's say there is a service with a REST API, there are queues/pubsub/streams being processed, and there is a realtime component. In dev you just start with everything enabled.
But in prod you can have some machines/containers only serve the API, others only process messages, and others only handle realtime connections.

This can also be split at the domain level, with a users module, an articles module, etc.

Routing gets set up via nginx/Caddy or another API gateway.

This leads to less complexity than pure microservices but gives lots of visibility into the different modules, and allows scaling and even deploying them individually.

So maybe there are not even dedicated frameworks needed.
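A rough sketch of that idea in Java (the COMPONENTS variable and component names are illustrative, not from any framework): one artifact, with each deployment enabling only the components that box should serve.

```java
import java.util.Set;

// One build, many deployment shapes: dev runs everything, while a prod
// worker box might set COMPONENTS=queues and start only that component.
public class ConfigurableMonolith {
    static Set<String> enabledComponents(String config) {
        return config.isBlank()
            ? Set.of("api", "queues", "realtime")  // dev default: all enabled
            : Set.of(config.split(","));           // prod: per-box selection
    }

    public static void main(String[] args) {
        String config = System.getenv().getOrDefault("COMPONENTS", "");
        for (String component : enabledComponents(config)) {
            System.out.println("starting " + component);
        }
    }
}
```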

Alexander Radzin

I was glad to find your article. Interestingly, on February 14 I gave a lecture, "Monolith vs. Microservices," where I tried to explain similar things. Some people think that monolith == spaghetti code while microservices == good, well-designed code. I spent some time explaining why this is not always true.

Luca Botti

Read your post. I agree that - mostly - transaction-heavy workloads implemented with microservices add some complexity. But in any case, the benefits of splitting complex monoliths and layered applications into smaller units deployed as microservices outweigh any issues.

Recently I have been involved in a strictly layered application - it felt like going back to the beginning of the 2000s (EJB 1.1, so to say).

Also, it looks like you have fallen into some of the traps of microservices development - e.g., contract-first development should be enforced if REST interfaces are used, no upfront decision on orchestration vs. choreography, monitoring, etc.

Microservices are complex and rather demanding to apply correctly, and a lot of things have to be planned beforehand.

Charlie Schliesser

Great read.

"When we try to debug and replicate a distributed system, we might have an interim state that’s very hard to replicate locally or even understand fully from reviewing observability data."

Such pain, I know it.

"...the complexity of the application doesn’t go away in a microservice architecture. We just moved it to a different place. In my experience so far, this isn’t an improvement. We added many moving pieces into the mix and increased overall complexity. Returning to a smarter and simpler unified architecture makes more sense."

I refactored a decent-sized system over the past year to a more unified architecture. We've been able to introduce new features faster and safer, and write better documentation as a result. I think it's a happy medium between the single monolithic architectures of yesteryear and the whack-a-mole environment that microservices can be.

Randall

I wholeheartedly agree with this and have come to similar conclusions after working in complicated microservice architectures.

In my view, the main benefits of microservice architecture have little to do with how they're deployed, and more to do with how they facilitate creating clear and easily enforceable boundaries between modules, and being able to clearly assign ownership of microservices to teams or individual contributors.

But we can do the same thing in a monolith. It just takes more discipline and it has to be built into the architecture.

I have seen a monolith outgrow what one big PostgreSQL replica set was able to handle, and we ended up sharding, which is a fine solution. But it's also possible, like you say, to have each module use a different logical database, and do any necessary cross-module data joining in application code. These logical databases could all live in the same replica set, until they outgrow it, at which point the heaviest ones could be moved to their own cluster, with few, if any changes to application code. I've never tried this, but I'm interested in doing so, as doing it right would reinforce the boundaries between modules, though I'm aware of what's lost in terms of query-ability, enforce-ability of constraints, etc.

I have also worked with the opposite: a "microservice" architecture where all the microservices talked to the same database and queried and inserted into each other's tables. The only thing that wasn't shared was the code itself, which lived in a bunch of different lambda functions. It was... dumb.

I would also add that the "start with a modular monolith - break it up into separate deployment units when you need to - which will probably be never" idea is a key point of "Clean Architecture" by Uncle Bob, great book.

JohnN6TSM

Interesting idea.

For my recent project (in C#) I wrote an analyzer that plugs into the compiler and enforces architectural rules that I can specify using a DSL. The enforcement makes it easy to maintain the discipline of a modular monolith because it won't compile unless the architecture is respected.

It seems to me that good dependency management is needed for any project of even minimal size, and sprouting modules (or worse, web services) is a heavyweight method to enforce dependency limitations. This lets me define and enforce an architecture explicitly and divide my code into modules using other criteria -- like which classes are likely to be used together.

I wrote a little more at: github.com/DrJohnMelville/Pdf/blob...

Shai Almog

Interesting. Modulith enforcement rules might make sense?

Andrew

Microservices is an architecture that is oversold, in my opinion. Consultants love to sell it as a silver bullet for all woes. Some of the smarter shops never left the monolith and have not looked back, like Stack Overflow: blog.bytebytego.com/p/ep27-stack-o...
The lesson is: always start with a monolith, and only look at microservices if there is a compelling need. Plus, the operational overhead of running K8s is a great reason not to go there.

Farrukh Yakubov

Great post. I've seen people go for microservices for the sake of microservices. I've seen people stay stubbornly on a monolith even when their architecture is failing. The sweet spot is usually in between, based on the given problem, if breaking things up is required at all.

TheNickest

I am happy to have read this. And I couldn't agree more on the dogmatism part. It's the pitfall of decision making.

sproket

With Project Loom looming, Java is about to get green/virtual threads. The scaling is going to go through the roof.

Charles F. Munat

Short answer: no.

It is not time to go back -- or forward -- to anything.

It is long past time to start using the right tool for the job, rather than searching endlessly for the tech panacea. Let's take an "Ecclesiastical" approach instead. (See Ecclesiastes 3:1-8.)

Shai Almog

Sure. It's always the "right tool for the right job". I get why Microservices worked great for Amazon... But saying that is a bit of a cop out...

There's a trend that's being pushed heavily and branded as "the right thing" by many vendors with vested interests. Yet it mostly enriches them while increasing costs significantly and a vast majority of the population will be better off with "just" improving their monolith. In that situation we need to make a stand and get the facts right.

E.g. the claim that monoliths don't scale doesn't pass the smell test. Yet everyone repeats it over and over and over...

Charles F. Munat

And in six months, they'll all be repeating that monoliths are the only way, and that microservices suck. That's called backlash. And it all goes in cycles. Are you just noticing this? It has been the case in tech for half a century.

People in tech always say that they get it: right tool for the right job. But no they don't. They just say it. They say it so that they can dismiss it and then get on with whatever atrocity they are committing.

Big vendors are big vendors because they put becoming big vendors above everything else, including even the survival of our species. Who cares what big vendors say? Maybe if you really want to make a difference, you should be telling people to stop listening to big vendors who absolutely do not have anyone else's best interests at heart. No matter what they say.

James Luterek

One of the key aspects of microservices is having many small teams that can each own their services. Then solo developers or small developer teams attempted microservices and saw issues... leading to articles like this.

We are so quick to join tribes and push for our side at all costs, SQL VS DocumentDB, Monolith vs Microservices, etc.

These are tools! Evaluate the conditions and choose the best tool for the job at hand. Microservices are still the best choice when building a massive application that requires massive scale and performance. Microservices are still the wrong choice for solo developers or smaller projects.

Eckehard

What about the "Object Oriented Approach"? That was - with compiled languages - a valuable tool to build modular software. Today the principles are often misunderstood or forgotten, but in practice this was an approach to bring different worlds together.

If History repeats itself, maybe this is the next hot topic?

Shai Almog

Sorry, I don't follow the comment?

Object oriented is doing great...

Ivan Savu

Adoption of microservices architecture is frequently a case of premature optimization.

ChamaCR

Good article, I do believe microservices have been pushed by cloud providers. Having said that, you didn't mention anything about testing and deployment with monoliths, which is a real pain in the a…

Scot McSweeney-Roberts

Is Modulith just a trendy new name for SOA?

Oddly enough, I've got an old monolith that I want to break up into separate services, but fine-grained microservices feel like overkill.

Shai Almog

I hope not. I've had enough of ESB, SOA and that whole fruit salad.