Tobias Urban

Why serverless is awesome for prototyping

(Cover image: cost comparison between serverless and traditional computing)

With the rise of cloud computing, new ways of hosting emerged, putting on-demand compute power instead of hardware limitations at the center of every organization's IT department. What started with virtual machines that could be spun up and shut down in minutes to optimize costs has evolved into many new technologies that are now de-facto standards in an ever-growing number of organizations. Applications became "cloud native" and organizations were "cloud born" - and if you really managed to dodge these buzzwords until now, I recommend reading this article to catch up on what modern IT infrastructure is all about.

Breaking systems down with Microservices

These innovations all had similar goals in mind: make your infrastructure more flexible and at the same time more robust, whilst minimizing your costs (remember, in the new world it's all about variable compute power instead of fixed-priced hardware). Technologies like Docker and Kubernetes arose that made splitting your application into small building blocks a real thing (this is called microservices, another buzzword that you can read more about here).

The basic idea behind this: instead of having one app where every line of code references any other line of code (this is called tight coupling), build multiple modules with well-defined interfaces that other modules can use (which leads to loose coupling and high cohesion within each module). This way, you are more flexible in the way you build your application (you may for example want to build your app front-end in Node and the high-performance calculation in Go, as sketched below) and in the way you scale it (same example: the front-end may just need to forward user input, so 2GB of RAM on one or two machines could be enough, whereas your calculation engine may need 32GB of RAM and should scale to 8 machines under high traffic).
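
To make this concrete, here's a minimal sketch of such a well-defined interface, assuming a Node/TypeScript front-end that just forwards input to a separate calculation service over HTTP. All names here (CalculationRequest, runCalculation, CALC_URL) are illustrative, not a real API:

```typescript
// The front-end only knows this contract, not the engine behind it.
interface CalculationRequest {
  values: number[];
}

interface CalculationResult {
  sum: number;
}

// Forward user input through the well-defined HTTP interface; the
// high-performance engine behind CALC_URL (e.g. written in Go) can be
// swapped out or scaled independently without touching this code.
async function runCalculation(req: CalculationRequest): Promise<CalculationResult> {
  const response = await fetch(process.env.CALC_URL ?? "http://calc-engine/api/calc", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!response.ok) throw new Error(`Calculation failed: ${response.status}`);
  return (await response.json()) as CalculationResult;
}
```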

Going even smaller with Nanoservices

And you can take that idea even further: microservices were followed by nanoservices, built on the idea that every deployment in your architecture should do exactly one single thing and is separated into its own code package. What sounds like the ultimate mess for every IT admin (and you're not wrong, it has its drawbacks) enables completely new levels of scalability and development flexibility.

This is where so-called serverless computing shines: if all your workload is split into tiny packages, the compute power becomes nearly irrelevant (as a single function usually doesn't need a lot of it) and flexibility becomes the top priority. And this is what, for example, Azure Functions or AWS Lambda promise: you write single, distinct functions that can be accessed by other services via API and upload the code into the cloud, and the cloud provider deploys your code "just in time" when it is called by another service. You as a developer only pay for the time your code execution took and for how many times your function was called (usually priced at a few cents per million calls).
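
To give you a feeling for how small such a function is, here's a minimal sketch in the style of an AWS Lambda handler behind an HTTP trigger (the types come from the aws-lambda typings package; the greeting logic is just a placeholder):

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// One distinct function that does exactly one thing. The cloud provider
// deploys and scales it on demand; you're billed per invocation and runtime.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```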

This leaves us with two exciting things: we have a cheap service to host our application, and we have a lot of scalability that only charges us for how frequently our app is used. This is great for start-ups or prototypes where you don't want to overspend on infrastructure, where you usually don't have highly complex logic in your application and where you don't yet know how heavily your app will be utilized.

Serverless on everything ... right?

Of course, not everything is perfect in the serverless world, and it will probably not solve all the problems you ever had with hardware management. The most striking argument against serverless computing comes from its core development idiom: by having thousands of standalone functions in your architecture, you create a huge latency overhead just for routing traffic through your systems. Even if you put everything into one datacenter, you still have to route the traffic within that datacenter, and if one request visits 20-50 stops before it is finished, this latency becomes significant for your response time.

Most cloud providers also usually offer only standardized VMs on which your functions run. For you, that means there is a cap on how many resources one function can consume. Although this shouldn't be an issue for most of your logic, you will have to look for alternatives when it comes to compute-heavy jobs like image transformation or AI calculations.

One additional thing you have to consider when going serverless: your attack surface multiplies. A lot. You have to secure every single function, and every function is itself responsible for validating access control, inputs and outputs. Most providers offer easy and compute-saving methods to handle the authentication part, but you still have to take care of securing every single function in your architecture.
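
In practice, that means every handler starts with its own gatekeeping. Here's a sketch of what that can look like; verifyToken is a hypothetical stand-in for whatever auth mechanism your provider gives you:

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Hypothetical placeholder for your real auth check (e.g. JWT validation).
function verifyToken(token: string | undefined): boolean {
  return token !== undefined && token === process.env.API_TOKEN;
}

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Every function has to gate access itself ...
  if (!verifyToken(event.headers["authorization"])) {
    return { statusCode: 401, body: "Unauthorized" };
  }
  // ... and validate its own inputs before doing any work.
  let payload: { userId?: unknown };
  try {
    payload = event.body ? JSON.parse(event.body) : {};
  } catch {
    return { statusCode: 400, body: "Invalid JSON" };
  }
  if (typeof payload.userId !== "string") {
    return { statusCode: 400, body: "Invalid input" };
  }
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```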

And as hard as you try, you will almost never end up completely serverless. Think about the following scenario: one of your functions sends out notifications to all of your users' mobile devices. This triggers around a million function calls, as your app has already become quite popular. Absolutely no problem: serverless handles everything for you, and a few seconds later it's all done (for probably just a few cents for all messages together). But give that function one additional job: for logging purposes, it writes a statement to a SQL database. If you've never worked with SQL databases: firing a million concurrent requests at one is a really bad idea.
So you will have to keep the big picture in mind and design your architecture around its "weakest link". The best individual scalability won't help you at all if one bottleneck keeps the overall process slow. As a note before you ditch logging forever: the large cloud providers already offer serverless databases that you can hammer as much as you want.
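
Another common way around such a bottleneck is to buffer the writes through a queue and let a single consumer drain it in batches. A sketch, assuming AWS SQS and an environment variable LOG_QUEUE_URL (both stand-ins for whatever queue your stack uses):

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Instead of a million functions each hitting the SQL database directly,
// every function drops its log line onto a queue; a separate consumer
// drains the queue in batches at a rate the database can actually handle.
export async function logEvent(userId: string, message: string): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.LOG_QUEUE_URL,
      MessageBody: JSON.stringify({ userId, message, at: Date.now() }),
    })
  );
}
```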

So when do I use serverless?

Serverless functions have a lot of great use cases. If you have an app idea and want to test it quickly, serverless is a great way to get started fast. You don't have any fixed hardware costs, your app can scale no matter how many people use it, and you are pushed towards a dependency-injection programming pattern which, in the worst case, leaves you with a well-structured monolith if you decide to switch to traditional development. And the most striking point remains: you can basically develop your service for free. Most cloud providers offer the first million calls to a function per month for free, which is more than enough to get you started. From there, you can start promoting your app and acquiring users. And as you do, your bills scale proportionally with your usage, giving you the chance to grow with your users' demand (and probably your income from new and paying users).

Serverless is also a go-to choice for all kinds of proxies, helper functions and small handlers. If you want to pipe a request from one service to another (e.g. from a queue to a database), serverless has you covered. And you can even automate your IT infrastructure with functions: just write the REST calls that start, stop or administrate your services as serverless functions, and define rules for which operations should be applied in which situation. And there you go: a fully automated IT infrastructure right at your fingertips, as in the sketch below.
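
Here's what such an automation function could look like on a schedule (e.g. a cron trigger every evening). The management URL, token and server IDs are hypothetical stand-ins for your cloud provider's real API:

```typescript
// Stops development servers after hours via a management REST API.
export async function stopDevServers(): Promise<void> {
  const servers = ["dev-web-1", "dev-worker-1"]; // illustrative IDs
  for (const id of servers) {
    const res = await fetch(`https://manage.example-cloud.com/vms/${id}/stop`, {
      method: "POST",
      headers: { Authorization: `Bearer ${process.env.MGMT_TOKEN}` },
    });
    if (!res.ok) {
      console.error(`Failed to stop ${id}: ${res.status}`);
    }
  }
}
```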

Have you ever worked with serverless functions? What did you like, what were your pitfalls?

Top comments (1)

Dan Silcox

Great article! There are so many great ways to use serverless technology, and probably just as many ways in which you shouldn't. It's all about knowing your use case, but as you said, serverless-first is a great way to start, as you can always make your infra more permanent later.

One pitfall I've experienced is 'cold start' time on serverless architectures. There are numerous articles that address this for different tech stacks and programming languages (AWS Lambda has layers, for example, or you could use some sort of 'no-op' or 'warm-up' call to keep your function alive and ready to go). It's worth thinking about the impact of a cold start ahead of time, though, and planning accordingly.