Momcilo Davidovic
Is microservice architecture the best choice?

Introduction

Recently I have come across many articles criticizing microservice architecture. It is important to understand when and why to use it. This approach increases complexity inside the system, but gives a lot of flexibility in the future. That flexibility, however, will not materialize if the approach to building microservices is wrong. With the wrong architecture, only the complexity will be visible.

Where not to use microservice architecture

For some use cases it is easier to specify where you should not use microservice architecture:

  1. Creating a POC (proof of concept) - even if you know your app will eventually be microservice-oriented, don’t waste time building the POC as microservices and fighting the complexity that comes with the approach.
  2. You know that the project will be small - if you know your project is a side project, or just that it will be small, don’t use the microservice approach just so that you can say you used it.
  3. You or your team don’t have enough knowledge about microservice architecture.
  4. You have a monolith that works well, customers are satisfied, stakeholders are satisfied, but you want to try new technologies because they are fancy.
  5. A new, big system whose final shape you are not yet sure about - a wrong split could cost you a lot, so it is worth considering a Modular Monolith here, which gives you the possibility to switch to microservice architecture later.

The Twelve Factor App

Before you start with the microservice approach you must be prepared, and have some level of knowledge about what microservices are, and where, why and how to use them. A good starting point is The Twelve-Factor App. You should read this documentation and try to understand the concepts behind it. If you see that your application will not fit one or more of the factors, you should reconsider the decision to use microservice architecture. The Twelve-Factor App describes how your microservices should look from the DevOps side, in order to make maintenance, deployment and scalability easy. Bear in mind that this is a very important aspect, because if your microservices can’t be easily deployed and scaled, they lose their main purpose.
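One factor is easy to illustrate concretely. Factor III says configuration belongs in the environment, not in the code, so the same build can run unchanged in development, staging, and production. A minimal Python sketch (the variable names and defaults are illustrative, not from any specific app):

```python
import os

# Twelve-Factor, Factor III ("Config"): read deploy-specific settings
# from environment variables instead of hard-coding them.
# The names and defaults below are illustrative assumptions.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/eshop")
CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379")
PORT = int(os.environ.get("PORT", "8080"))

print(f"starting on port {PORT}, database at {DATABASE_URL}")
```

Deploying to a new environment then means changing environment variables, not rebuilding the application.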

Showcase - monolith e-shop database problem

Let’s set up an example to examine the pros and cons of microservice architecture. Imagine a big e-shop that has different products split into 3 sections: Books, Lego, and Food. Your stakeholders want all 3 sections to have different attributes, different presentations, and ways of searching. But in the end products from different sections can finish in the same shopping cart, and be shipped together.

[Diagram: requests from the Books (B), Lego (L), and Food (F) sections all flowing into the same shared database]

Let’s imagine that every arrow represents 100 simultaneous requests for a specific category: B for Books, L for Lego, F for Food. Those requests cover everything from simple browsing and searching to checkout, and every request generates one or more queries against the database. Now imagine that your e-shop becomes popular after a successful advertising campaign, and load increases. All requests go to the same database, and, because of a problem in our code, a search in the Books section takes more time than searches in the other categories. The application starts to hang: the connection pool is full, and every new request waits until some connection is released. If you have ever experienced this problem, you know how dangerous the situation is. It can be mitigated by increasing the connection pool, but that is only a temporary fix that first has to be discovered and applied, and it sometimes leads to other problems.

Problems with a monolith in eshop example:

  1. Search in the Books section generates a slow JOIN query, and that is the main cause of the connection problems with the database.
  2. The whole application hangs when the connection pool runs out of free connections. The Food and Lego sections don't have problematic queries, but because of the Books section, users searching in Food or Lego experience problems too.
  3. Fixing the Books database query requires code changes, and the whole application must be built, tested and deployed. The changes touch a shared module used by the Food and Lego sections as well, which means a full test cycle over the entire application.
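The failure mode above can be sketched with a tiny simulation: a semaphore stands in for the database connection pool, sleeping stands in for query time, and all the numbers (pool size, durations, timeouts) are illustrative assumptions. The point is only to show why a shared pool couples unrelated sections, while separate pools isolate the damage.

```python
import threading
import time

class ConnectionPool:
    """Tiny stand-in for a fixed-size database connection pool."""
    def __init__(self, size):
        self._slots = threading.Semaphore(size)

    def query(self, duration, timeout=0.1):
        # Try to borrow a connection; fail if none frees up in time.
        if not self._slots.acquire(timeout=timeout):
            return False  # pool exhausted -> the request "hangs"
        try:
            time.sleep(duration)  # simulate time spent in the database
            return True
        finally:
            self._slots.release()

def simulate(shared):
    # shared=True models the monolith (one pool for every section);
    # shared=False models one pool per microservice.
    pools = {"books": ConnectionPool(3)}
    pools["food"] = pools["books"] if shared else ConnectionPool(3)

    # Five slow Books searches (the problematic JOIN) arrive first...
    workers = [threading.Thread(target=pools["books"].query, args=(0.5,))
               for _ in range(5)]
    for w in workers:
        w.start()
    time.sleep(0.05)  # let them occupy the available connections

    # ...then a fast Food request comes in.
    food_ok = pools["food"].query(0.01)
    for w in workers:
        w.join()
    return food_ok

print("Food works in the monolith:     ", simulate(shared=True))   # False
print("Food works with microservices:  ", simulate(shared=False))  # True
```

With the shared pool, the fast Food query fails even though Food's own queries are healthy; with a pool per service, only Books suffers.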

From this simple showcase it is easy to see that things that were advantages in the beginning become problems in later phases of the application lifecycle. With the monolith we reduced complexity and finished the application faster, but later adjustments, optimizations, and problems can be much more expensive and harder to solve.

Showcase - microservice eshop database problem

We now have the same problem as in the first showcase: the Books section search produces a JOIN query that takes too long, the database connection pool fills up, and the application hangs.

The benefits of microservice architecture in the eshop example:

  1. Only users browsing the Books section experience problems. All the other sections, as well as checkout, keep working normally.
  2. Fixing the problem requires changing only the Books microservice, so building and deployment are much faster.
  3. Resource sharing is minimized: changes made in the Books microservice do not affect any other section, and testing can be limited to the Books section, which makes the release cycle faster.

This database showcase applies to many other crucial system resources as well.

Memory problem

The Books section loads a lot of PDF previews into memory before showing them to users. When the number of users and the number of PDFs reach a limit, the application crashes.

Similar conclusions for the monolith are valid here:

  1. The whole application stops working when the memory limit on the server is reached.
  2. The whole application must be rebuilt once we find a solution (e.g. adding some kind of external caching).
  3. We have to add new configuration, adjust the pipeline for the whole application, and open ports on the server so that the cache can communicate with our application.

On the other hand in microservice architecture:

  1. Only the Books section stops working when PDF previews reach the memory limit.
  2. We only have to rebuild the Books source code.
  3. We only have to adjust the Books pipeline and open cache-communication ports on the server where the Books application runs.
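The caching fix from point 2 can be sketched in miniature. As a stand-in for a real external cache, a bounded in-process LRU cache illustrates the core idea: cap how many rendered previews are kept in memory instead of letting them grow without limit. The loader function below is a hypothetical placeholder.

```python
from functools import lru_cache

def _render_preview(book_id: str) -> bytes:
    # Hypothetical stand-in: the real app would read and render a PDF here.
    return f"preview-of-{book_id}".encode()

@lru_cache(maxsize=256)  # keep at most 256 previews in memory
def get_preview(book_id: str) -> bytes:
    return _render_preview(book_id)

get_preview("clean-code")        # rendered on the first request (miss)
get_preview("clean-code")        # served from the cache (hit)
print(get_preview.cache_info())  # hits=1, misses=1, maxsize=256
```

A real deployment would more likely put the previews in an external cache shared between instances; the bounded-size principle is the same.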

A hard drive problem

The Lego section needs a lot of pictures for every set, and administrators upload them frequently. The pictures are saved on the hard drive, which after some time reaches its limit. To address this problem, we decide to move the images to S3 so that storage scales automatically.
In the monolith we now need new configuration for S3, another adjustment of the pipeline, and another intervention on the server to enable communication with S3.
In microservice architecture, all of this is confined to the Lego application, its pipeline, and its server.
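The move to S3 might look roughly like this in Python with boto3. The bucket name and key layout are assumptions for illustration, and the actual upload call requires AWS credentials to be configured; only the key-building helper runs without them.

```python
def image_key(set_number: str, filename: str) -> str:
    # Deterministic S3 key layout for Lego set images (illustrative).
    return f"lego/{set_number}/{filename}"

def upload_set_image(local_path: str, set_number: str, filename: str,
                     bucket: str = "eshop-lego-images") -> str:
    import boto3  # AWS SDK for Python; needs configured credentials
    s3 = boto3.client("s3")
    key = image_key(set_number, filename)
    s3.upload_file(local_path, bucket, key)
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(image_key("75192", "box-front.jpg"))  # lego/75192/box-front.jpg
```

In the microservice variant, only the Lego service carries this dependency, its pipeline, and its IAM permissions.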

General problems

Almost every other problem fits the same pattern we discussed in the previous cases. And rest assured, in big systems there will be other problems you will need to deal with. In a single app they produce:

  1. A big fat app
  2. A lot of configuration (cache, S3, MongoDB and who knows what else)
  3. A lot of set up procedures
  4. A lot of server adjustments
  5. A long building time
  6. A long testing time (once you build the whole app, it is really risky to release it without testing the whole app, even if a story or bug is related to only one section, e.g. Books)
  7. Complex CI/CD pipelines
  8. Long release processes that only get longer with time
  9. Harder integration and e2e testing
  10. One big team, or different teams working on separate modules - but all developers must be aware of the complete application's problems and bottlenecks, and address them in future code, even if their own module is perfectly designed.

Problems in microservice architecture

“New concepts just replace existing problems with new ones”, as skeptics like to say. That really applies to microservice architecture as well. But when done properly, with clear separation and clean communication between services, things get better over time. All 10 points from the previous section can be turned around; the opposite holds in microservice architecture:

  1. Small apps concentrated on a single domain (if you use Domain-Driven Design as the separation principle for microservices - more about that in upcoming articles on the same topic)
  2. Minimal or no configuration
  3. Minimal or no set up procedures
  4. Minimal or no server adjustments
  5. A short building time
  6. A short testing time for a single app
  7. A simple CI/CD pipeline
  8. A short release time
  9. Simple and easy integration testing and simpler e2e testing
  10. Teams dedicated to a single app, with deep knowledge of the application domain and a lot of autonomy in making decisions, which creates better apps.
  11. Easier horizontal scaling of the single part of the application identified as crucial - a side effect of all the conditions listed above

Well, this looks great, so what is the problem with microservices? Why are we even thinking about a choice? Of course microservices, like everything else, have their problems; let’s address just some of them:

  1. Data normalization - think about this the same way you think about database tables. For performance, data is normalized (split) into different tables; here, data is split across different microservices and their databases. Sooner or later you will have to denormalize it, and how you deal with that depends on the specific case and the different aspects of your specific problem.
  2. Increased complexity - a new story arrives from your stakeholders, and you remember your old monolith, where changing a single service method would have fulfilled the request. Because the story now touches data from 3 different domains, you have to create at least 3 different service methods, 3 events for communication between them, and who knows what else. Then you need to care about global transactions, and about cases where the flow breaks somewhere in the middle - for example, the first microservice finishes its job successfully but the second fails. What should we do then? Revert the first one, or recover the second and continue? These decisions need to be discussed between teams, and it can be very time- and energy-consuming to find a solution that suits everyone.
  3. Eventual consistency - some data will diverge from the expected values, as a result of various factors (a global transaction error, broken communication between microservices, etc.). Such cases must be identified as soon as possible and, if not acceptable, addressed and solved properly. In a monolith, on the other hand, if a service method breaks in the middle of execution, the whole transaction is rolled back, and there is no inter-service communication to fail.
  4. Increased costs - the increased complexity alone signals that costs will be higher. You will need more mature developers to keep the system running, and more developers overall, across all levels, to maintain the existing microservices and build new ones. The architecture also increases DevOps costs: you will need DevOps engineers who can set up and maintain microservice architecture, which also means costs for cloud instances or in-house setups. If users get a better experience and stakeholders can bring new ideas to market faster, those costs are acceptable, because it is not reasonable to expect better results for the same amount of money. If that is not the case, the pressure falls on the development department, and it can be a sign that some decisions were wrong.
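The broken-flow question from point 2 (revert the first service, or recover the second?) is the classic saga problem: each step gets a compensating action, and when a step fails, the compensations of the already-completed steps run in reverse order. A minimal sketch, with hypothetical stand-ins for the actual service calls:

```python
class SagaError(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs in order; if an action fails,
    run the compensations of the completed steps in reverse."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception as exc:
        for compensation in reversed(done):
            compensation()  # best-effort rollback across services
        raise SagaError("flow failed; completed steps compensated") from exc

log = []

# Hypothetical service calls: one inventory step that succeeds,
# one payment step that fails mid-flow.
def reserve_stock():  log.append("reserve stock")
def release_stock():  log.append("release stock")
def charge_payment(): raise RuntimeError("payment service is down")
def refund_payment(): log.append("refund payment")

try:
    run_saga([(reserve_stock, release_stock),
              (charge_payment, refund_payment)])
except SagaError:
    pass

print(log)  # ['reserve stock', 'release stock']
```

Note that the failed step's own compensation never runs (nothing to undo), while the earlier reservation is released; deciding which steps get compensations, and what they do, is exactly the cross-team discussion the point above describes.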

Before the conclusion

You do not always have to choose between these two approaches; it is not a “take it or leave it” choice. Many approaches combine concepts from both and give you the opportunity to pick the solution that fits your use case best. For now I will name two concepts that sit somewhere in between and that you should consider when making your choice: Modular Monolith and Distributed Monolith. You can guess a lot from the names, but I suggest you delve deeper into these concepts and understand as much as possible before choosing one as a template for your solution.

Conclusion

As usual, there are no easy answers when things are complicated. You must weigh different aspects and make the right decision. It is not an easy task, and it requires a lot of knowledge and experience - with high importance on knowledge, because if you enter this field without solid principles to use and follow, it is easy to end up on a wrong track with big consequences in the future.
