Kyle Galbraith

Originally published at blog.kylegalbraith.com

How a Monolith Architecture Can Be Transformed into Serverless

There is a growing audience surrounding serverless, and many are keen to take advantage of the benefits it provides. There are plenty of resources out there, but they tend to focus on getting started: how you can build something brand new using a serverless architecture.

But this misses a larger audience: those with existing codebases that are difficult to move to serverless. These codebases often carry a large sum of technical debt, which makes serverless appear like nothing more than a fantasy.

So in this post, I want to focus on how a monolith application can transform into serverless over time.

We already know that anything brand new could be built using serverless. But how do we take the application or service we have today and make it serverless? The simplest answer is that we rewrite it. But this is expensive from a development perspective and depends on what needs to be rewritten.

I propose that we pause a second before we jump in and begin rewriting our codebases. Let's look at how we would move the components of existing applications to serverless. By going through this process we can likely surface the things that will go well and the potential pain points we could encounter. Once we have those two things in hand we can develop a plan on how we would move our legacy application, or at least pieces of it, to be serverless.

Thinking about serverless

There is a healthy debate surrounding serverless. Its pros, its cons, and even its definition are up for debate. This isn't a post where I intend to debate those things once more. But I do think it's valuable to share the definition that resonates with me.

Serverless is a cloud-first architecture that allows me to focus on delivering code to my end users. There is no provisioning, maintaining, patching, or capacity planning of the underlying servers. Scale and availability to support workloads large and small are handled by the cloud provider.

In the ideal world, serverless frees me from having to deal with the servers my code runs on. With that freedom, I can focus on the code that delivers value to my end users. That is the core of serverless in my mind. Is it idealistic at times? Yes, as developers, maintaining servers isn't the only non-code related thing we have to do. But it's still worth striving toward this ideal.

So, when I am considering moving an existing service to serverless, I force myself to think about the cost and reward, like any other architectural tech debt decision. The reality is that making technical debt decisions is challenging. Gauging the cost and reward of each one is complex, non-trivial, and hardly ever perfect.

So as we are exploring the idea of moving a legacy piece of code to serverless, we should acknowledge that this is a very hard problem to solve. It has many solutions, some better than others, and some not even worth doing. But let's at least explore how we can move from a big monolithic application to a serverless architecture.

Step 1: Grounding ourselves in reality

To get started, let's put this out there right now:

```
$ echo 'Not everything fits into the serverless model.'
```

This is an important thing to get out there from the get-go when you are considering serverless. It doesn't work for everything, at least not today.

That said, it does work for a lot of things and some of those things may not be intuitive or natural to you. Remember, this is a different paradigm than what many of us use today. But it's not much different than how we already compose our applications.

When thinking about investing the effort to transform our current architecture into serverless, it's important to ground ourselves in reality. Establish a clear reason why you believe moving to serverless is best for your application or service. If you can't get past this stage, you should reevaluate the path you are about to embark on.

These reasons are going to be unique to you and your workload, so think about what you want to gain by moving to serverless. Your driving reason could be cost: you don't want to pay for servers running around the clock when you only have traffic for 1-2 hours a day. It could be scale: you want scalability without being responsible for the underlying infrastructure.

Whatever your reason for moving to serverless, confirm that it is as important as you think it is. Each step that follows is going to be a challenge and will test your reasoning for this journey.

Once we have grounded ourselves in the reality of what we want to accomplish it's time to start evaluating our application.

Step 2: Dividing the movable from the unmovable

This is where the fun begins. It's time to start looking at what things in your application can move to serverless right now and what things you think can't.

It's not necessary to overthink here. Things that are very easy to move in your monolith tend to jump out at you, while things that seem difficult from the outset are likely better to shelve for the moment.

At this stage, it is often helpful to think about the constraints that exist inside of a serverless architecture.

  • Depending on your serverless cloud provider, a single function has 1-15 minutes of execution time. It needs to launch, complete the necessary work, and then exit. This time limit is configurable up to the provider's maximum, so estimate how long the work will take and set your limit accordingly (see the sketch after this list).
  • The cold start latency is real. Depending on your use case, you can notice considerable latency the first time your serverless function is invoked. There are a lot of factors that contribute to this, and a lot of things you can do to cut it down.
  • Disk size constraints are another thing to keep in mind. Inside of a serverless function, you often don't have gigabytes worth of storage at your disposal. But, you usually have some kind of tmp or scratch storage if you need it.
  • Memory is constrained as well and depending on your cloud provider you have up to 3GB of it. This tends to be less of a constraint than the others listed in here, but it's still important to keep in mind.
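To make the execution-time constraint concrete, here is a minimal sketch of a handler that watches its own clock via the context object the AWS Lambda Python runtime provides. The event shape and do_work are hypothetical stand-ins:

```python
def do_work(item):
    # Placeholder for the real per-item work.
    return item

def handler(event, context):
    items = event.get("items", [])  # hypothetical input shape
    processed = 0
    for item in items:
        # Leave a safety buffer so we exit cleanly instead of being
        # killed mid-task when the configured time limit is reached.
        if context.get_remaining_time_in_millis() < 10_000:
            break
        do_work(item)
        processed += 1
    return {"processed": processed, "remaining": len(items) - processed}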

These are the core constraints that are important to keep in mind as we are thinking about what pieces of our monolith can be moved and what pieces can't. There are other constraints around deployment size, payload size, environment variables, and file descriptors. But, these constraints are applicable to only a small number of workloads and likely not as important to you.

An important thing to note is that these constraints can be worked around. However, when you are embarking on this journey, you should avoid those workloads for the time being; they tend to spiral out of control when you are new. I can assure you that there are other workloads that are easier to transition initially.

Here are some high-level things that I would consider movable out of the gate. Note: The services mentioned here are specific to Amazon Web Services, but applicable services exist outside of AWS.

CRON Jobs

CRON jobs are a great place to start, as long as they fit into the constraints mentioned above. These tend to be automated processes that we all have running that are usually doing some mundane tasks for us. If you're lucky these are running on their own instance, which means when you move them to serverless you get to kill an instance 💀

These jobs also tend to be outside of the main development flow of your application or service, which means mistakes may have a lower blast radius. Thus CRON jobs are a great place to start your serverless journey. You get to familiarize yourself with the paradigm, learn some lessons, gain a bit of value, and hopefully not disrupt your users.
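As a sketch of what a migrated CRON job might look like, here is a hypothetical nightly cleanup function, assuming AWS Lambda triggered by a CloudWatch Events schedule and boto3. The bucket, prefix, and retention window are made up:

```python
import datetime

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Nightly cleanup job, triggered by a schedule such as
    cron(0 3 * * ? *). Deletes temp exports older than 30 days."""
    cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=30)
    deleted = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="my-app-temp-files", Prefix="exports/"):
        for obj in page.get("Contents", []):
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket="my-app-temp-files", Key=obj["Key"])
                deleted += 1
    return {"deleted": deleted}
```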

'Glue' Services

I'm sure there is a more technical term than 'glue', but let me explain what these services are in my mind. First, service is a loose definition here. It could be a separate service running on its own instance, or it could be a service layer inside of your monolith.

A glue service is a service that acts as the mediator between two or more other services. In other words, it glues services together. Oftentimes these services are doing transformations or relaying messages between services. So this workload can fit well into serverless as long as it is stateless.

Moving these types of services to serverless means you need to think about their contracts. How does this workload receive inputs and how does it pass along outputs? The inputs could be received via an API Gateway if you already use HTTP. Or if your client services just want to send the message along, your inputs could come from SNS topics.
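Here is a minimal sketch of a glue function subscribed to an SNS topic, assuming AWS Lambda and boto3; the message fields and topic ARN are hypothetical:

```python
import json

import boto3

sns = boto3.client("sns")

# Hypothetical downstream topic the transformed message is relayed to.
OUTPUT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders-normalized"

def handler(event, context):
    # Stateless glue: receive from one SNS topic, reshape, relay onward.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        # Translate the upstream shape into the contract the downstream
        # service expects (the fields here are made up).
        normalized = {
            "orderId": message["id"],
            "total": message["amount_cents"] / 100,
        }
        sns.publish(TopicArn=OUTPUT_TOPIC_ARN, Message=json.dumps(normalized))
```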

These 'glue' services can often fit into serverless right out of the gate, with the caveat that they don't hold state. State can be tricky in a serverless world because the compute is ephemeral, even more so than a VM, so you have to keep that in mind. It's not an impossible problem to solve, but it can move something from 'movable' to 'hold-off' for now.

Email Services

A lot of applications have code or services that send emails to their users. Depending on how entangled this is in your codebase, it could be a good candidate to pull out and make serverless. Like CRON jobs, these services can likely run in the background and thus are not user-facing. A failure to send an email impacts the user, which is bad, but the user experience shouldn't be directly tied to the actual sending of the email.

Sending email is implemented very differently across use cases, but in general these services tend to follow a flow like this:

  • A user completes some action, or something happens that they want to be notified about.
  • There is often an email template for the event that the user needs to know about.
  • The email is generated via the template and the event details.
  • The email is then sent to the user.

There could be more or fewer steps in here depending on your use-case, but this is a general flow of logic.
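As a rough sketch, that flow might look like this as a Lambda function sending through SES. The event shape, template, and addresses are all hypothetical:

```python
import boto3

ses = boto3.client("ses")

# A trivial stand-in for a real template engine.
TEMPLATE = "Hi {name}, your order {order_id} has shipped!"

def handler(event, context):
    # Consume an 'order shipped' event, render the template, send via SES.
    body = TEMPLATE.format(name=event["name"], order_id=event["order_id"])
    ses.send_email(
        Source="orders@example.com",
        Destination={"ToAddresses": [event["email"]]},
        Message={
            "Subject": {"Data": "Your order has shipped"},
            "Body": {"Text": {"Data": body}},
        },
    )
```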

This flow fits into serverless. If you're lucky, your email delivery may already be separated out from your monolith. In that scenario, you can move the logic into serverless, change how services pass messages to it, and you're probably most of the way there. If your email delivery is stitched into your application, it needs to be decoupled first, and then you can look into making it serverless.

These are some high-level ideas that fit into a serverless architecture out of the gate. This is not an exhaustive list, as that depends on your own workload and requirements. But we can summarize the movable things as generally meeting these three requirements.

  1. Generally not customer-facing. While not a hard-and-fast rule, the first things you move to serverless should generally not be customer-facing. The reasoning behind this varies, but for me, it comes down to learning this new paradigm without disrupting user workflows.
  2. Not coupled to a synchronous API. The more things are decoupled and asynchronous when you're first moving to serverless, the better. Why? Because this is where the architecture thrives. By keeping things async and distributed, we allow our workloads to scale independently.
  3. Aim for workloads or services that have clear boundaries. I realize this can be a real challenge with large applications. However, the more clearly you can define a boundary for a given service, the easier it will be to move to serverless. The reasoning is simple: clear boundaries define clear contracts. If we move a given service and the contract isn't clear or gets muddy, we can end up with distributed tech debt rather than monolithic tech debt.

Now that we have principles that define our movable services, let's start thinking about how we would actually move them and the tools available to us.

Step 3: Moving the movable

Now that we have separated our services into two groups, the movable and the unmovable (at least for now), we can start thinking about how that move actually happens.

This is a variable step that is going to depend on your own codebase. Some things you might decide to rewrite to leverage the serverless paradigm to its fullest. Other things you might be able to "port" over to a serverless world because they are well suited for it. There might even be things that you start to move and realize aren't movable after all.

All three of these scenarios are valid. But it's valuable to focus more on the first two because those are the "moving forward" stages.

With that in mind, let's think about when a rewrite might happen.

A rewrite of an existing service into serverless may be necessary for any of these high-level scenarios.

  • The language/framework won't work in serverless. Perhaps the service is written in Cobol, uses Spring Boot, or makes heavy use of native binaries. This used to be a much more common problem, but with the introduction of AWS Lambda Layers it's less of one.
  • The service is tightly coupled into the monolith. This is a very common scenario that I tend to see in older code bases. We want to pull that service out but we likely need to strangle the old one and build up a new one. Check out the Strangler Pattern for that, even if serverless isn't in your future.
  • The existing code isn't performant enough to run in a serverless environment. Another common scenario. Maybe the execution time won't work for this workload or memory is too constrained.

These are the three scenarios I tend to see most often when making the case for a rewrite into serverless. We could likely come up with more, so use your best judgment when deciding to pursue the rewrite path.

The second path is preferable because it avoids the extra overhead of recreating an existing service.

Like the previous path, we can envision some high-level scenarios where this path is possible.

  • The language/framework is supported in serverless. It seems simple to say, but this is actually a huge win. If your service is already written in a language or framework that serverless providers support out of the box, we can port it over to run in a serverless environment. This often means adding the necessary code to run in a function handler (see the sketch after this list), tweaking monitoring, updating logging, and making any additional configuration changes.
  • The service can run inside of a Docker container or is already containerized. Remember in the earlier path when we said languages or frameworks that are not supported out of the box in a serverless environment need to be rewritten? Well, if you're using AWS, that might not be your only option. img2lambda lowers that barrier and makes it possible to bring those workloads directly over using Lambda Layers.
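To illustrate the porting path, here is a minimal sketch of wrapping existing, framework-free business logic in a Lambda handler. The function and event fields are hypothetical stand-ins:

```python
# Imagine this function already lives in the monolith; it has no
# web-framework or server dependencies, so it ports cleanly.
def generate_report(customer_id: str) -> dict:
    return {"customerId": customer_id, "status": "generated"}

def handler(event, context):
    # Thin adapter: translate the event into the arguments the existing
    # code already understands, call it, and return the result.
    return generate_report(event["customerId"])
```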

Those are two paths we can take when beginning to move the movable workloads into serverless. There are other approaches; containerization is one that comes up quite frequently. Moving a workload to a container can be a nice middle ground when thinking about transitioning to serverless, but closer analysis might reveal it's an unnecessary step and that you should consider one of the earlier two paths.

Step 4: Moving the unmovable

But what about the things that we deemed unmovable?

The first question to answer in this scenario is: why was it unmovable when you first decided that it was? When we're new to serverless, we usually deem something unmovable because of the constraints around serverless.

  • Limited execution time, 1-15 minutes, depending on your cloud provider.
  • Cold start latency associated with serverless workloads.
  • Disk size constraints, we only have a small amount of scratch space.
  • Memory is constrained, up to 3GB, depending on your cloud provider.

Can things that run into these limitations be moved as well? Of course, they can, but you are likely going to have to do some refactoring. Let's look at each of these and sketch out some high-level ideas you could try to remove the limitation and turn this into a 'movable' service.

Limited execution time

When serverless architectures were first introduced, this was a controversial limitation. We tend to think of programs and applications as running indefinitely, but that's not necessarily the case with modern cloud computing.

If we think of an auto-scaling group in the cloud, it scales out and scales in, starting and killing workloads as it does so. Serverless is not drastically different in this regard, except that we have a shorter amount of time to finish our work before our 'instance' is gone out from under us.

If you are stuck at this limitation you may need to reimagine how this particular service works.

Is this service working on things in bulk? If so, create a function that fans out this work and another serverless workload that processes a single thing. By fanning out the work we can take advantage of parallelization to lower our execution time.
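A minimal sketch of that fan-out idea, assuming AWS Lambda and boto3; the worker function name and event shape are hypothetical:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

def fan_out_handler(event, context):
    # Split a bulk job into one async invocation per item.
    # InvocationType="Event" returns immediately, so the items are
    # processed in parallel by the worker function.
    for item in event["items"]:
        lambda_client.invoke(
            FunctionName="process-one-item",  # hypothetical worker
            InvocationType="Event",
            Payload=json.dumps({"item": item}),
        )
    return {"dispatched": len(event["items"])}
```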

Can the service be moved from synchronous to asynchronous? We often start with synchronous because it is simpler, but asynchronous allows us to process work in the background and be even more strategic with our execution time.

There are many other scenarios where this limitation can arise. My best advice is to think creatively about how you could refactor things to work in a serverless world. You may still decide not to go that path, and that is normal. But the exercise should at least get you thinking more about why it's not possible.

Cold start latency

This is still a blocker for many folks looking to move to a serverless architecture. The time between your function being invoked and it actually beginning its work is what we refer to as the cold start.

It only happens when there is no previous container/image/environment lying around for your serverless workload. In that case, a new one must be launched and the code initialized before it can begin its work.

This problem is quite obvious in AWS Lambda when you are running your workload inside of a VPC. You often do this so you can connect to database services like RDS. Because of this problem, Lambda will actually keep these environments hot for an extended period of time. That said, your synchronous APIs will likely still notice the cold start on that initial hit.

So what are some solutions here?

The well-documented "best practice" is to ping your workloads to keep containers warm. This is a hack and smells funny, but it's the best we've got at the moment. The Serverless Framework actually has a plugin that does exactly this. In the case of AWS, you create a CRON job that invokes your Lambda function every 5 minutes. This keeps a warm environment for that function so you can minimize the cold start.
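On the function side, the warm-up invocation just needs to be recognized and short-circuited. A minimal sketch, assuming the scheduled rule sends a marker field of our own choosing:

```python
def handler(event, context):
    # The scheduled warmer rule sends a payload like {"warmup": true};
    # the field name is our own convention, not an AWS one.
    if event.get("warmup"):
        # Short-circuit: the environment is now warm, do no real work.
        return {"warmed": True}
    # ...normal request handling continues here...
    return {"ok": True}
```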

But cold start latency isn't only on the cloud provider; it's on you as a developer as well. The more things declared inside of your function handler, the longer it takes to get to work.

Things that are global or can be reused across function invocations should be declared outside of your function handler. This leverages the cold vs. warm start distinction again: things kept outside of your handler will not need to be reinitialized in a warm environment.
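A minimal sketch of that pattern in Python, assuming boto3 and a hypothetical DynamoDB table:

```python
import boto3

# Initialized once per container during the cold start, then reused
# by every warm invocation that follows.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("users")  # hypothetical table name

def handler(event, context):
    # The handler stays lean: no client construction, no config
    # parsing, just the per-request work.
    return table.get_item(Key={"userId": event["userId"]}).get("Item")
```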

Cold starts are a constraint everyone moving to serverless should think about. Are they a blocker? In most cases no, because in reasonably active applications the environments will likely be kept warm. But if your workload is very spiky, you could see frequent cold starts, and then you will want to think about the strategies above.

Disk size and memory constraints

To be honest, I have never encountered these limitations running serverless workloads.

That said, there are some high-level things you can think about changing if you encounter either disk or memory constraints.

In the event of a disk constraint, you are likely managing some kind of state in your function or operating on a very large file or collection of files. In the former case, keep your state external and stream it in rather than reading it all at once. This is good for both memory and disk space.

For large files, if it is an option, consider streaming the file and fanning out the work on it. If you can stream it, you can tell one function to work on the first chunk, the next function to work on the next chunk, and so on.
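A sketch of the chunked approach using an S3 ranged GET, assuming boto3; the event shape carrying the byte range is our own convention:

```python
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Process one slice of a large S3 object via a ranged GET, so the
    # whole file never has to fit in memory or scratch space.
    resp = s3.get_object(
        Bucket=event["bucket"],
        Key=event["key"],
        Range=f"bytes={event['start']}-{event['end']}",
    )
    lines = resp["Body"].read().splitlines()
    return {"linesProcessed": len(lines)}
```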

These are pretty hard constraints to work around in a serverless environment, but they are not impossible. However, they should prompt a discussion around whether this workload is right for serverless right now.

Step 5: Setting yourself up for success

Once you have determined a viable path from step three, you are likely going to start thinking about implementation. Before you get too far down that trail, here are some tips, processes, and tools that can aid in your success.

  • Use Infrastructure as Code; your future self will thank you. Think about it: you're going from managing one big application to likely managing many serverless workloads, from centralized to distributed. Provisioning those distributed services by hand is a recipe for disaster. Use tools like Terraform, the Serverless Framework, CloudFormation, or Pulumi to make this management far easier.
  • The Twelve-Factor app will make your life easier. Chris Munns, AWS Serverless Advocate, has a fantastic blog post that focuses on the methodology in a serverless environment.
  • Decoupling services from one another defines clear contracts and enables individual scaling. Again, not a new concept, but one that will elevate your serverless game. The more services can be async and decoupled from one another, the better. Have services pass messages to one another rather than calling each other directly. Work from queues of events rather than in response to a single request (see the sketch after this list).
  • Walk before you run. Start with smaller, bite-sized services as you move to serverless. This will help you establish good patterns and practices, and it will reveal some of the pain points you are likely to encounter.
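As an example of that queue-driven decoupling, here is a minimal sketch of a worker behind an SQS-triggered Lambda; the message contents are hypothetical:

```python
import json

def handler(event, context):
    # Worker driven by an SQS trigger instead of a direct call from
    # another service. Each record is one queued message; failures are
    # retried or dead-lettered by the queue, not by the caller.
    for record in event["Records"]:
        message = json.loads(record["body"])
        print(f"processing event {message.get('type')}")  # placeholder work
```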

Conclusion

Serverless workloads are one future for cloud computing, but they are not the future. We are going to need all kinds of computing platforms as technology advances. Some things likely won't fit into the serverless model, at least not out of the gate.

That's OK. But it's not a reason to not take advantage of it where you can.

The serverless architecture frees us from the responsibility of provisioning and managing the underlying compute power our systems run on. By being free of those complexities we can focus on writing the code that delivers value to our users. That is the value add of a serverless architecture.

It's not perfect. It has warts and oddities that will get in the way of your journey, but most can be solved by taking a step back and thinking a bit differently. Some won't be solvable, and that's OK as well. Nobody is saying that your monolith application can become serverless overnight. But it can incrementally move there with a well-thought-out plan.

Are you hungry to learn even more about Amazon Web Services?

If you are looking to begin your AWS journey but feel lost on where to start, consider checking out my course. We focus on hosting, securing, and deploying static websites on AWS, which lets us learn over six different AWS services as we use them. After you have mastered the basics, we dive into two bonus chapters covering more advanced topics like Infrastructure as Code and Continuous Deployment.

Top comments (3)

Paul Swail

Great article, Kyle!
I'm about to start down the road of migrating my monolith SaaS app to serverless (read post here) so a lot of your points will come in very useful. 👍

Kyle Galbraith

Awesome! I will be following your journey Paul.

Xing Wang

one of the more fair articles on Monolith vs Serverless, agree 100% with this line:

 echo 'Not everything fits into the serverless model.'