
The Five Dysfunctions of Serverless Development

Function as a Service offers magnificent power: shipping software faster without operational overhead. AWS Lambda is used in e-commerce, healthcare, fin-tech, and other industries. Corporations and startups alike are jumping on the Serverless train.

Technology advantages always come with hiccups. During my day-to-day work in different teams, I have noticed multiple issues with the development process while using AWS Lambda and the related Serverless stack. In this article, I will mention a few from my practice, along with solutions that can help to avoid them. Disclaimer: I will mostly talk about my experience with the NodeJS ecosystem.

Top issues with AWS Lambda Development:

  • Package size
  • Too much focus on Local running
  • One function has all routing inside (like express)
  • Too long function timeout
  • Fear of vendor-locking

Package Size

A Lambda function is uploaded to AWS as a ZIP archive. When one uses only the AWS SDK and native language features, there is no problem. In NodeJS, a function that calls a third-party REST API endpoint can weigh in at roughly 500 bytes of your own code. In a real project, it is not enough to have just one call; in most cases, there is a need for other dependencies.

A real-life scenario can look like this. The project has started, and the business needs results. In the first sprint, the development team takes on the stories, and it goes fine. By the fifth sprint, the CI/CD pipeline has become quite slow, the Lambda AWS console no longer shows the code, and every Lambda deployment ships the code for the whole project. The package can easily reach 20 MB by now. This size is not worth it: uploading that much data on every deploy is a waste of resources and time.

How to avoid this? 

  • Use a code bundler like webpack. It packs all the external dependencies into one file and can also minify your functions.
  • Use Lambda layers. They let you put all the needed node_modules into a separate package, keeping the Lambda itself small. A common dependency like 'axios' can then be shared between various Lambda functions at no extra cost.
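As a sketch of the first approach, a minimal webpack configuration for bundling a Lambda handler might look like this. The entry path and output names are assumptions for illustration:

```javascript
// webpack.config.js — a minimal sketch for bundling a Lambda handler.
// The entry path and bundle name are placeholders, not a prescribed layout.
const path = require('path');

module.exports = {
  mode: 'production',           // enables minification, shrinking the package
  target: 'node',               // build for the Node.js Lambda runtime
  entry: './src/handler.js',    // your Lambda entry point (assumed path)
  externals: ['aws-sdk'],       // the v2 SDK is already present in the Lambda runtime
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'handler.js',
    libraryTarget: 'commonjs2', // so Lambda can require the exported handler
  },
};
```

The resulting `dist/handler.js` is a single file, so the ZIP stays small even as node_modules grows.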

The first approach is the one I have been using; the latter looks promising, and I am planning to try it at some point, but I have not yet seen it used.

Too much focus on Local running

It is quite common to use SAM with a Serverless app. It provides the great feature of running your Lambdas locally. Sounds cool, but in most cases it will not help you achieve a solution free of errors.

Running in a local environment won't save you from problems with the existing infrastructure. With serverless, it is relatively easy to set up multiple cloud environments with Lambdas calling other AWS services. For example, it is much faster to deploy and test on the cloud than to do so with local DynamoDB tables.

My latest successful backend project had Lambdas, a DynamoDB table, and SQS queues. We used an approach where everything was divided into Lambda functions, services, and connectors, and unit tests covered the code along those lines. When there was a need to see functionality working, deploying the test branch covered it without issues.
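A sketch of that layering, with hypothetical names: the connector is the only layer that touches DynamoDB, the service holds the logic, and the handler stays thin. The service can then be unit-tested with a fake connector, with no local DynamoDB needed:

```javascript
// Sketch of the lambda / service / connector split (all names are invented).

// Connector: the only layer that knows about DynamoDB.
class UserConnector {
  constructor(dynamoClient) { this.client = dynamoClient; }
  async getById(id) {
    // In real code: this.client.send(new GetItemCommand({ ... }))
  }
}

// Service: pure business logic, easy to unit-test with a fake connector.
class UserService {
  constructor(connector) { this.connector = connector; }
  async getDisplayName(id) {
    const user = await this.connector.getById(id);
    if (!user) throw new Error(`user ${id} not found`);
    return `${user.firstName} ${user.lastName}`;
  }
}

// Lambda handler: wiring only, no logic of its own.
const makeHandler = (service) => async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ name: await service.getDisplayName(event.pathParameters.id) }),
});

module.exports = { UserConnector, UserService, makeHandler };
```

With this shape, local runs matter less: the logic is covered by unit tests, and the connectors are exercised against real cloud resources on a test environment.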

One Lambda - serves it all

There is a temptation for Node developers to take an Express application and 'decouple' it by dropping it into a single Lambda. I think this is a bad idea. Say one needs a CRUD application backed by an API: many people would prefer to start by initializing an Express app and putting all the routes into it.

In the end, this attempt results in a code-heavy Lambda function, a multitool or 'all-in-one' package. 'Fat' Lambdas are hard to test and hard to follow when something goes wrong, which always mysteriously happens.

I like to think about Lambdas as atoms, or nanoservices: just one call to a third-party API or AWS service, with the logic around it. When a case requires multiple actions, consider switching to a more event-driven architecture.

Decoupling Ideas

Possible ways could be:

  • SQS/SNS for scale
  • Step Functions for less frequent jobs
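With SQS, the producing Lambda only enqueues a message and returns; the heavy work happens in a downstream consumer. A hedged sketch, where the queue URL and message shape are invented and the `queue` object stands in for a real client (in real code it would wrap `@aws-sdk/client-sqs`); injecting it keeps the handler testable without AWS:

```javascript
// Sketch: instead of doing follow-up work inline, the Lambda enqueues a message.
// The queue URL and message shape are placeholders. The `queue` argument is a
// thin interface; a real implementation would wrap @aws-sdk/client-sqs.
const QUEUE_URL = 'https://sqs.eu-west-1.amazonaws.com/123456789012/tasks'; // invented

const makeEnqueueHandler = (queue) => async (event) => {
  await queue.send({
    QueueUrl: QUEUE_URL,
    MessageBody: JSON.stringify({ task: 'process-order', orderId: event.orderId }),
  });
  // 202 Accepted: the work happens downstream, at whatever scale SQS drives.
  return { statusCode: 202, body: 'queued' };
};

module.exports = { makeEnqueueHandler };
```

The same shape works for SNS fan-out; for infrequent multi-step jobs, a Step Functions state machine replaces the queue.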

Loooooong timeouts

There is no need for a function with a 30-second timeout when it finishes its execution in 10 seconds. It is a good idea to revisit the timeout at some point after deployment. This saves money and computing power at scale. One could call it responsible cloud computing: not grabbing all the resources, but taking only the needed ones.
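In an AWS SAM template, for example, the timeout is set per function; the values here are illustrative, not a recommendation:

```yaml
# SAM template fragment: set the timeout close to the observed execution time
# instead of leaving a generous default. Values are illustrative.
Resources:
  ProcessOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: dist/handler.handler
      Runtime: nodejs18.x
      Timeout: 12       # seconds; observed runs finish in ~10
      MemorySize: 256
```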

Vendor-locking fear

Teams and companies tend to be afraid of this term and, in my opinion, rely too much on multi-cloud. Yes, Serverless will lock you to the provider. It is not an issue right now, because AWS has all you need. Of course, there is a risk that the company will later choose another public cloud. However, it is too much effort to do so.

Teams using Lambdas will most likely also use API Gateway, DynamoDB, and SQS. Migrating later is rarely worth the effort such a change requires, and many teams have been happily using one public cloud for years.
There are companies out there that started using AWS Lambda in 2014, when it first appeared. If they need container workloads, they use Fargate with ECS. Maybe Kubernetes is not required for every use case, they reason. It comes down to mindset.


It is an excellent practice to be aware of the issues mentioned in this article; it will save you time in the future. These dysfunctions are the ones I encountered in my serverless-related projects. Do you have any bad practices from your experience?

I am an AWS Community Builder writing about AWS, CDK, Infrastructure-as-Code, Serverless, and Node development. There is one more article about the lessons I learned after switching from CloudFormation to CDK.
