I used to work at a company where I ended up building the API that powered its main product. End users accessed the platform through a Single Page Application (SPA) built in Silverlight, and could use any of the product’s four main features from the UI’s home page. While related from a product point of view, each feature (and its sub-features) made calls to logically separate areas within the API.
For example, there was a Settings area where users could configure various options that affected how other areas of the platform behaved. The section of the API that supported these pages had its own controllers and services, which were only called by the Settings pages.
This was also true of the product’s three other main features, all of which were accessible from the home page. Each had its own set of controllers and services that were logically separate in the codebase. However, all of the code lived in a single solution. In other words, it was a monolith.
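To make that separation a little more concrete, here’s a purely illustrative sketch of the shape of the codebase. None of the class or namespace names come from the real project, and ‘Reporting’ simply stands in for one of the other (unnamed) feature areas; the point is that each area had its own controller/service pair, while everything still compiled into one solution.

```csharp
// Illustrative only – names are invented, and "Reporting" is a hypothetical
// stand-in for one of the other feature areas. Each area had its own
// controller/service pair, yet everything compiled into the same solution
// and shipped as a single deployment.
using System.Web.Http; // classic ASP.NET Web API, roughly the era of the monolith

namespace Product.Api.Settings
{
    public class SettingsService
    {
        // Settings-specific business logic and data access lived here.
    }

    public class SettingsController : ApiController
    {
        private readonly SettingsService _service = new SettingsService();
        // Actions on this controller were only ever called by the Settings pages.
    }
}

namespace Product.Api.Reporting
{
    public class ReportingService
    {
        // Logic for another feature area – never referenced by the Settings code.
    }

    public class ReportingController : ApiController
    {
        private readonly ReportingService _service = new ReportingService();
        // Actions on this controller were only ever called by that feature’s pages.
    }
}
```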
Monolithic Architecture
The Advantages of Monoliths
This choice of architecture meant that it was (relatively) easy to get set up and running. It’s true that there were many things to install and tweak while building a new development machine. But once configured, running the solution was as simple as pressing F5 in Visual Studio to spin everything up.
As everything was in the same solution, debugging was simpler. The code for the entire product was already loaded, so it was ‘just’ a case of finding the right places to set breakpoints.
This also meant that deployments were easier to handle as only one project needed to be deployed, albeit to multiple servers for scaling purposes.
The Disadvantages of Monoliths
As more features were added, the codebase progressively grew larger. At some point we started to understand the disadvantages of monoliths.
Visual Studio’s built-in refactoring tools were less advanced at the time than they are today, so installing ReSharper was virtually a necessity. While it enhanced the development experience, ReSharper needed to fully analyse the project to work at its best, and its CPU, memory, and storage costs grew with the size of the codebase. We’d have to wait for a re-analysis every time we switched branches, and the relatively low storage capacity of our machines meant that the analysis cache files took up precious free space.
It also meant that the entire system was unavailable during deployments, even if the changes only affected specific areas of the platform. Despite the system sitting behind a load balancer, deployments had to be done during low-usage periods, typically at unsociable hours.
Microservice Architecture
The Plan: Splitting the Monolith
Silverlight was being discontinued. We saw an opportunity. The UI would need to be rewritten using modern technologies, and we decided to revisit the system’s architecture.
We planned to create new UIs for each of the four distinct areas in the product. Each set of UIs would be powered by a separate microservice, complete with redesigned data models. We wanted the project to move as quickly as possible, so we decided to leverage the services that already existed.
The new set of microservices would initially act as facades. We designed their APIs and data models but didn’t (re)write any business logic or database code. Instead, the microservices mapped incoming and outgoing data, passing it to and from the monolith that already did that work. In other words, they acted as intermediaries that translated data between the new and old systems.
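As a rough illustration of the facade idea, here’s what a single endpoint might have looked like, written in modern ASP.NET Core for brevity rather than the stack we actually used. The route, the model shapes, and the ‘monolith’ client name are all invented for the example.

```csharp
// Hypothetical facade endpoint – route, models, and client name are illustrative.
// The new service owns a redesigned contract but forwards the real work to the
// existing monolith, which still holds the business logic and database access.
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Redesigned model exposed to the rewritten UI.
public record NotificationSettingsDto(bool EmailEnabled, bool SmsEnabled);

// The shape the legacy monolith still expects.
public record LegacyNotificationSettings(string EmailFlag, string SmsFlag);

[ApiController]
[Route("api/settings/notifications")]
public class NotificationSettingsController : ControllerBase
{
    private readonly HttpClient _monolith;

    // Assumes an HttpClient named "monolith" is registered at startup, e.g.
    // builder.Services.AddHttpClient("monolith", c => c.BaseAddress = new Uri("https://legacy.internal/"));
    public NotificationSettingsController(IHttpClientFactory httpClientFactory) =>
        _monolith = httpClientFactory.CreateClient("monolith");

    [HttpPut]
    public async Task<IActionResult> Update(NotificationSettingsDto dto)
    {
        // Translate the new contract into the legacy one...
        var legacy = new LegacyNotificationSettings(
            dto.EmailEnabled ? "Y" : "N",
            dto.SmsEnabled ? "Y" : "N");

        // ...then let the monolith keep doing the real work against the shared data set.
        var response = await _monolith.PutAsJsonAsync("api/legacy/settings/notifications", legacy);
        return StatusCode((int)response.StatusCode);
    }
}
```

The read side was the mirror image: call the monolith, then map its response onto the new model before returning it to the UI.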
This arrangement meant that the new UIs would initially be less performant due to the additional network traffic. We accepted the compromise because it meant that the new and old systems could be used in parallel, and that both would work on the same data set. This was important to avoid duplicating customer data, and to avoid situations where we couldn’t tell which data were more recent. The plan was to port the logic over once everything had been redesigned and put in place.
The Advantages of Microservices
Having separate and dedicated services for each set of UIs made the new APIs smaller and more tightly focused. They loaded much more quickly because there were fewer dependencies and less code. The projects were also much easier to navigate, as each only contained code that was relevant to that API.
This architecture allowed each section of the product to be deployed independently. For example, we would no longer have to take down the whole system just to update the Settings pages. This was a massive advantage, especially during the earlier stages of development when the APIs weren’t yet stable and required regular updates.
The Disadvantages of Microservices
However, this separation was a double-edged sword. These advantages also came with a cost.
The product became more difficult to run locally. This wasn’t so bad when we were only actively working on one service at a time, as we could run Release builds of the other APIs in the background. However, we’d need to run multiple instances of Visual Studio whenever we debugged an overall workflow.
We also needed to wrap up any common models into NuGet packages for sharing between projects. This made debugging shared code difficult as we had to switch NuGet package references to their respective local project references if we wanted to step through code using the debugger.
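One way to take some of the pain out of that switch, sketched here as a hypothetical project-file fragment rather than the setup we actually had, is to let a single MSBuild property flip between the published package and the local source:

```xml
<!-- Hypothetical fragment of a consuming service's .csproj. The package name,
     version, relative path, and the UseLocalShared property are all invented. -->
<ItemGroup Condition="'$(UseLocalShared)' != 'true'">
  <!-- Normal day-to-day reference: consume the shared models as a NuGet package. -->
  <PackageReference Include="Company.Shared.Models" Version="1.2.3" />
</ItemGroup>

<ItemGroup Condition="'$(UseLocalShared)' == 'true'">
  <!-- Debugging mode: reference the shared project directly so the debugger can step into it. -->
  <ProjectReference Include="..\..\shared\Company.Shared.Models\Company.Shared.Models.csproj" />
</ItemGroup>
```

Building with `dotnet build -p:UseLocalShared=true` (or setting the property in Visual Studio) then pulls in the local source instead of the package.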
A final consideration was deployment. While having a set of services meant that they could be deployed independently, it also meant that we had to keep track of where each service was deployed to. We had good deployment tools in place, so this wasn’t too big of an issue. But it was more complex than deploying a single service.
Summary
This was one of my favourite projects to work on. Unfortunately, internal priorities changed and my time on it ended. Among the many things I learned was a new appreciation for monolith vs. microservice architecture.
Monoliths are a good choice for simplicity. Everything is in one place, meaning you can get up and running easily and debug anything in the project. Deployments are simpler too, as there are fewer moving parts.
However, you may find the codebase becomes unwieldy once it grows beyond a certain point. Projects take longer to load, and you may find them more difficult to navigate because they contain more code. In addition, service availability becomes all-or-nothing: as everything is in a single deployment, it’s not possible to keep parts of the system running while deploying others.
Microservices divide a system into multiple subsystems. You can deploy them independently, meaning you can keep some subsystems running while deploying others. Their codebases will be more focused and organised, meaning you’ll generally find them easier to work with.
However, you might find the overall system becomes trickier to run locally and deploy because more services are involved.
Ultimately, the most appropriate choice for a new project will depend on the type of system you’re building. As with many architectural decisions, it’s something that only you have the power to decide.
Thanks for reading!
Level up your developer skills! Sign up for free for more articles like this, delivered straight to your inbox (link goes to Substack).