Wesley Chun (@wescpy)
A broader perspective of serverless

TL;DR:

Developers for the most part have a fairly narrow view of serverless: "Lambda" and functions/"FaaS," sure, but it's more than that because it can be entire apps too. And while containers weren't previously involved, containerized apps can now be serverless as well. Some cloud service providers (CSPs) only have one serverless product; Google has four(!). Let's broaden your perspective of a misnomer that should be one of your primary compute options, if not the first one you consider. Why? It lets you focus on building solutions, not what they run on. Aside from AI/ML, serverless is always a hot computing topic and sees an ever-increasing amount of corporate spend. If you want to learn more about serverless and what Google's primary serverless platforms are, stick around!

[IMAGE] Serverless computing with Google

Introduction

Are you a developer interested in using Google APIs? You're in the right place, as this blog is dedicated to that craft, primarily from Python and sometimes Node.js. This includes discussions on using API keys to access a variety of Google APIs, crafting generative AI apps with the Gemini API, utilizing Google Workspace (GWS) APIs like the Drive API to export Google Docs as PDF, and more!

This post focuses on serverless, a topic that usually isn't covered enough, and once in a while, is covered too much, such as in Spring 2023 (more on all that later). First, it's a "lie." Of course there are servers, but the point is that you don't have to think about them. Kidding aside, what is serverless, really?

Serverless is CSP-managed infrastructure that hosts your code and makes it globally accessible, with "code" being functions, web apps, mobile backends, or containers. "Outsourcing" the hosting of that code to your cloud vendor frees you from thinking about the computers or virtual machines (VMs) your code runs on, as well as the other considerations involved in getting your code online.

Motivation: Why serverless?

Now why should you consider hosting your code on serverless? Here are some key benefits:

  1. Infrastructure boilerplate
  2. Innovation speed
  3. Autoscaling and virality
  4. "Cost savings"
  5. Market growth

Infrastructure boilerplate and innovation speed

Say you were inspired with a great app idea, built a working prototype, completed an MVP (minimum viable product) that handles the most common use cases, added good error-checking, implemented the requisite test suite, and now you're ready to push a private alpha online to share with potential users. Now what?

This begins an almost completely different journey of thinking about hardware: computers, virtual machines, operating systems, storage, and networking, all of which are orthogonal to your app, cross-cutting functionality that applies to any app. The issue: if every app needs these things, why must all developers be forced to reinvent the wheel each time? In comes serverless to the rescue.

With the CSP taking care of this basic "infrastructure boilerplate," developers can focus on the solutions they're building and less on what they run on. This accelerates development, increases the speed of innovation, and helps you get to market sooner.

The 80 and the 20

"The 80/20 rule," otherwise known as the Pareto principle, when applied to software, is something to consider when discussing serverless. Time-wise, 80% of a feature is implemented in just 20% of the total time while the remaining 20% takes 80%.

There's a similar 80/20 rule for design, specifying developers start by crafting a relatively straightforward solution that works across 80% of the accepted use cases or inputs. The remaining 20% are "edge" or "corner" cases that are unsupported or require additional effort to implement, say for "v2." Serverless follows this rule too, handling 80% of the expected use cases. However some platforms can also handle the other 20%.

Autoscaling and virality

One of the most valuable features found in some serverless platforms is the ability to auto-scale. If you stumble upon that rare chance of going viral, rather than panic because you didn't architect your system to handle it, the platform takes care of it for you. Such platforms are a godsend for startups, which typically can't afford to build that scalability themselves because they have a finite amount of resources (time, money, labor).

Serverless platforms with autoscaling capabilities spin up cloud-based resources dynamically as needed to serve user requests, stay deployed as volume dictates, then scale down automatically as traffic wanes. The alternative? Let's say you're a startup that built your own infrastructure but didn't spend the capital to give it autoscaling features, and then your app went viral.

The end result is that new users won't be happy because they can't reach your service. Similarly, existing users can't connect either, resulting in catastrophe for your entire user base, and thus, your business. Serverless with autoscaling takes this possibility out of the equation, and you get the added bonus that no one on your staff has to carry a pager.
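On GCP, for instance, Cloud Run exposes these autoscaling bounds as deploy-time flags. A sketch (the service name, region, and bounds below are illustrative placeholders):

```shell
# Deploy a Cloud Run service that scales to zero when idle and caps
# the instance count so a viral spike can't run up an unbounded bill.
gcloud run deploy my-service \
    --source . \
    --region us-central1 \
    --min-instances 0 \
    --max-instances 100
```

With `--min-instances 0`, no instances run (or bill) while there's no traffic; `--max-instances` puts a ceiling on how far autoscaling can go.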

"Cost savings"

Let's not forget the other extreme. What if you "built it, but no one came?" Serverless can handle that corner case too, with the ability to "scale to zero." If users aren't using your service, no instances of it are running, and you're not getting billed. Contrast this with VM-based compute solutions where you're paying 24x7 for those VMs whether your code is running or not.
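To make the billing difference concrete, here's a back-of-the-envelope sketch in Python. All rates and traffic numbers are made-up assumptions for illustration, not actual CSP pricing:

```python
# Illustrative billing models (rates are assumptions, not real CSP pricing).

HOURS_PER_MONTH = 730  # roughly 24x7 for one month

def vm_monthly_cost(hourly_rate: float) -> float:
    """A VM bills around the clock, whether or not your code is running."""
    return hourly_rate * HOURS_PER_MONTH

def serverless_monthly_cost(requests: int, avg_secs: float,
                            rate_per_sec: float) -> float:
    """Serverless bills only for time spent servicing requests;
    with zero traffic ("scale to zero"), the bill is zero."""
    return requests * avg_secs * rate_per_sec

# A low-traffic app: 10,000 requests/month at 200ms each.
print(vm_monthly_cost(0.05))                           # always-on VM
print(serverless_monthly_cost(10_000, 0.2, 0.000024))  # serverless
print(serverless_monthly_cost(0, 0.2, 0.000024))       # "no one came"
```

At low or zero volume, the serverless bill is a tiny fraction of the flat VM rate, which is the whole point of scale-to-zero.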

On a larger scale, what if you spent too much venture capital on infrastructure but not enough on hiring enough engineers to build that awesome app? No users means no revenue, and unfortunately, no business either.

Neither scenario has to be the case if you use serverless, making it even more attractive to startups whose services could go toward either extreme. The ability of the same platform to handle both situations is a "bonus" on top of serverless doing its normal job of serving user requests.

📝 What else about cost you should know about

  1. Cost center shift: By outsourcing the code-hosting, you move to "armchair infra management," shifting the cost from hardware ("CapEx") to cloud vendor ("OpEx"). Sure, you've "saved" money by not needing to hire extra administrators to manage the infrastructure, but that spend now shows up on your serverless bill. How much you save or pay extra depends on the app, as covered above.
  2. (Lack of) Predictability: One challenge serverless brings: costs are incurred when your code is running, so if your traffic is unpredictable, so are your costs. Most CSPs provide a usage estimator that gives you an idea of your monthly bills, so take advantage of that. The most important thing is to track your traffic carefully so that there are no surprises.

Market growth (actual and projected)

One more reason to consider serverless is: "everyone else is doing it." According to analysts from several research firms(^), corporate spend on serverless has been growing by leaps and bounds since the mid-2010s, falling somewhat short of doubling every other year, from nearly $2B to nearly $200B USD, as illustrated below:

[IMAGE] Serverless spend: actual & projected

 

No one has a crystal ball, but as more companies put more services online, fewer can afford their own data centers and staff, so public cloud usage will continue to grow, serverless included. It's pretty clear why serverless is so popular given what organizations want: to be first-to-market, to spend more time building solutions and less on what they run on, to control costs, to handle the common as well as the edge cases, and so on.

If you're excited about serverless, that's great because you can see its potential. That said, why or when wouldn't you consider serverless?

"Demotivation:" Why not serverless?

Right and wrong serverless use cases

Part of the "magic" of serverless is also its bane. The secret sauce of serverless is its ability to automate the infrastructure on your behalf. This means that serverless platforms are naturally more expensive than VM-based compute solutions where you are managing the infrastructure. There's no such thing as a free lunch, is there?

You incur costs when serverless is running, meaning when an app or service is receiving requests, servicing them, and responding to users/clients. This means that whether serverless is right for you depends on the type of traffic your apps are getting. The best serverless use cases are apps that get:

  • Low traffic
  • No traffic
  • Unpredictable, irregular, spiky traffic (viral or not, high or low volume)

Examples of the above include student projects, mom & pop storefronts, school websites, sporting events, concerts, celebrity weddings, etc. Outside of the above, serverless is probably not the right solution.

Services that experience steady, predictable traffic on a 24x7 basis represent workloads better served by VM-based solutions. If you're never scaling to zero, any cost savings for low traffic or idle time disappear, so overall you're not taking advantage of serverless and you're paying more. Similarly, these apps likely won't encounter viral spikes that would be handled by serverless autoscaling capabilities. In the long run, these types of apps/services will cost more on serverless than on VMs.
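The break-even intuition can be sketched with the same kind of back-of-the-envelope arithmetic; all rates and traffic figures below are illustrative assumptions, not real CSP pricing:

```python
# Illustrative break-even check for steady 24x7 traffic (made-up rates).

SECONDS_PER_MONTH = 730 * 3600  # ~2.6M seconds of always-on operation

def serverless_monthly_cost(req_per_sec: float, avg_secs: float,
                            rate_per_sec: float) -> float:
    """Per-request billing: busy around the clock means
    billing around the clock, with no idle time to save on."""
    return req_per_sec * SECONDS_PER_MONTH * avg_secs * rate_per_sec

vm_cost = 0.05 * 730  # flat hourly VM rate, running 24x7
steady = serverless_monthly_cost(20, 0.2, 0.000024)  # steady 20 req/s

# With constant traffic, per-request serverless billing
# overtakes the flat VM rate.
print(steady, vm_cost)
```

Under these assumed numbers, the steady-traffic serverless bill is several times the flat VM cost, which is why such workloads are the classic case for moving off serverless.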

Moving from serverless to VM-based compute

Starting out with serverless is fine and gets you to market sooner. If your service falls into one of the categories above, serverless is the right fit and will save you money over time. However, if you end up with steady around-the-clock traffic, shifting your service to VM-based compute will be the best way to save money moving forward; serverless still "did its job" in that it helped with your first-to-market objective.

Conversion is easier if you designed your application flexibly to begin with, say by using a container. Moving a containerized serverless app to a containerized VM-based app is less effort than if it wasn't containerized in the first place. Containers make great "application shopping bags" in that you can more easily move them from a "serverless shopping cart" to a "VM shopping cart." There are also several VM-based compute options if you do have to convert.
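As a sketch of that portability, a containerized Python web app might be packaged like this (the base image, file layout, and web server are illustrative assumptions); the same image deploys to serverless or VM-based platforms unchanged:

```dockerfile
# A portable "application shopping bag": this same image can be deployed
# to a serverless platform (e.g., Cloud Run) or to a VM/Kubernetes cluster.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Cloud Run injects $PORT; default to 8080 when run elsewhere.
CMD exec gunicorn --bind :${PORT:-8080} main:app
```

Reading the port from the environment rather than hard-coding it is what lets the image move between hosting platforms without code changes.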

If your system is fairly basic with few services, consider a VM or a fleet of them, depending on your traffic volume. On the other hand, if you have a more complex, multi-tiered set of services and components, etc., Kubernetes (self- or CSP-managed) may be the way to go.

📝 Autoscaling VM-based solutions
If your app is decidedly unsuitable for serverless but its traffic has elements of virality and drop-off where it may benefit from some form of limited auto-scaling, some CSPs like Google Cloud (GCP) have you covered, e.g., with autoscaling managed instance groups of VMs.

Amazon Prime Video: serverless "anti-pattern?"

One striking (and public) example of a VM-based solution winning over serverless is Amazon's Prime Video service, whose team built their system on AWS Lambda (similar to GCP Cloud Functions [GCF]). To measure and gauge QoS (quality of service), the product team built a tool to monitor user streams in order to identify quality issues and kick off processes to fix them.

Based on increasing popularity, the tool grew to run regularly "at high scale." As their bills soared, the team eventually collapsed their microservices "back" into a monolith and moved everything to AWS Elastic Compute Cloud (EC2) VMs (similar to GCE) and the AWS Elastic Container Service (ECS), a managed container orchestration service (similar in role to GKE). As a result, they saw more than a 90% drop in cost and published a well-known post explaining what they did and why.

There has been much commentary on this "reverse migration" from microservices to monolith, including a take from long-time technologist Adrian Cockcroft affirming that serverless is not meant for all use cases. While prototyping on serverless is fine, developers should track emerging usage trends to make the best decision on whether to stay with serverless or move to VM-based solutions at the appropriate time for their workloads.

If you're in "prototype and first-to-market" mode and are convinced to further explore serverless, let's see where it lies in the overall cloud computing space and take a look at Google's serverless solutions.

Google serverless platforms and cloud service levels

General cloud service levels

There are three generally-accepted primary cloud computing service levels, determined by what you are "outsourcing" to the cloud:

Acronym | Service level | Description
SaaS | Software-as-a-Service | cloud-native apps; outsourcing of apps
PaaS | Platform-as-a-Service | outsourcing of app-/function-/logic-hosting
IaaS | Infrastructure-as-a-Service | outsourcing of hardware, power, networking, cooling

Cloud computing service levels

 

If you're new to cloud computing, think of these this way:

  • Instead of buying and installing software on your computers, you're using apps in the cloud (SaaS), accessing them from a web browser or mobile app.
  • Instead of hosting your apps or functions on your own machines in (leased or owned) data centers, you're hosting them with CSPs (PaaS).
  • Instead of buying and installing compute & storage in your own (leased or owned) data centers, you're using hardware provided by CSPs (IaaS).

[IMAGE] Cloud service levels and CSP products

 

The diagram above illustrates the three service levels and some CSP products that belong to each. As with any large ecosystem, there are bound to be exceptions to the rule: some CSP products/services don't fall neatly into one of the three service levels but rather "in-between." Serverless crosses those boundaries.

Serverless cloud service levels

A closer look at serverless reveals it can fall into PaaS directly or an adjoining level. Below is a modified version of the service level diagram from above that highlights where serverless fits into the picture:

[IMAGE] Serverless & cloud service levels

 

In the diagram, "serverless" can be more precisely stated as "serverless compute," meaning compute platforms that customers can use without explicitly configuring servers. CSPs certainly provide other types of serverless systems. For example, GCP Cloud SQL is a relational database-in-the-cloud, or DBaaS (database-as-a-service), offering where users don't requisition VMs per se, but it is not a platform you upload application code to for execution. So yes, it's serverless, but not serverless compute.

Products like Cloud SQL are shown on the right of the diagram as serverless but peripheral to the serverless compute products, which are the focus here and sit in the middle hexagon. Serverless compute platforms are highlighted in the deeper golden yellow, whereas the lighter yellow marks serverless products that may or may not be compute platforms.

Google serverless platforms

Let's look at Google's four primary serverless compute platforms:

Platform | Description | Service level
Google App Engine (GAE) | app-hosting in the cloud | PaaS
Google Cloud Functions (GCF) | function-hosting in the cloud | FaaS (Functions-as-a-Service), a subset of PaaS
Google Cloud Run (GCR) | container-hosting in the cloud | CaaS (Containers-as-a-Service), between IaaS & PaaS
Google Apps Script (GAS) | (vendor-customized) script-hosting in the cloud | "Restricted PaaS," between PaaS & SaaS

Google serverless platforms

 

Notice how all four platforms are PaaS-centric, either a PaaS product or belonging to one of the neighboring cloud levels. FaaS is a more recent subset of PaaS, serving functions in the cloud instead of entire applications. It arrived with AWS Lambda (announced in late 2014), followed by GCP Cloud Functions and Azure Functions in 2016. FaaS is generally simpler to use than the others because you don't need an entire app, just a function, and as such, there's typically no additional overhead like integrated databases or web frameworks.
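For a sense of how small the FaaS unit of deployment is, here's a minimal sketch of an HTTP function in the shape Cloud Functions' Python runtime expects. On GCF, `request` is a Flask request object; the greeting logic and function name are purely illustrative, and this sketch only relies on `request.args` behaving like a mapping so it can be exercised locally:

```python
# Minimal HTTP function in the style of GCP Cloud Functions (Python).
# On GCF this would be the deployed entry point; no app framework,
# database, or server setup is part of the deliverable.

def hello_http(request):
    """Return a plain-text greeting; `name` is read from the query string."""
    name = request.args.get("name", "World") if request.args else "World"
    return f"Hello, {name}!"
```

The entire deployable artifact is the function itself, which is what makes FaaS the lowest-overhead entry point into serverless.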

CaaS is even newer, serving containerized apps in the cloud, debuting with Cloud Run/GCR in 2019. With containers, you do have to think about operating systems and lower-level software like web servers, so CaaS doesn't live at the PaaS level but rather a half-step below, in-between IaaS & PaaS. Think of it as right next to Docker in the diagram. Even if you think you're just uploading code to GCR, GCP's Cloud Build system takes additional steps to bundle your code into a container and add it to a registry before it's deployed as a live service.

Similarly, a "restricted PaaS" system like Apps Script is indeed PaaS, but along with force.com (now Salesforce Platform), they rely on data that live at the SaaS level: GWS data for Apps Script, and Salesforce data for force.com. You're likely to only write code using these systems if you have that type of data, otherwise it makes no sense why you're not using more general-purpose PaaS platforms like those in the next level below. Regardless of whether you're using a regular or restricted PaaS system, recognize that you're using them to host SaaS apps with... yours!

Upcoming posts tackle each of Google's serverless solutions one at a time, starting with App Engine/GAE. The chart below illustrates all the compute options available in GCP. The serverless platforms we'll focus on are in the golden boxes while the rest are GCP's VM-based solutions. (Apps Script isn't in this chart because it's part of GWS not GCP.)

[IMAGE] GCP compute platforms

 

📝 What about Jupyter Notebooks?
Jupyter Notebooks are an invaluable developer tool for data scientists, application sharing, visualization, big data integration, etc., and Google provides a variety of Notebook-hosting solutions. Aren't they serverless too?
Some platforms, like Colaboratory (Colab) from Google Research and Kaggle (a formerly-independent platform acquired by Google), are indeed serverless because you don't have to think about infrastructure.
On the other hand, others, specifically those from Google Cloud, like Vertex AI Workbench and Colab Enterprise, require that you create (VM) instances to use them, even if they are managed for you by GCP.
All that said, Notebooks are a very specific type of application, and Notebook-only platforms aren't general-purpose enough to be part of this conversation.

Epilogue

The upcoming posts focus on each of the serverless products introduced above, but there are a few additional notes or commentary about serverless, specifically from GCP, to discuss before wrapping up.

"Serverless 1.0" vs. "Serverless 2.0"

Looking specifically at GCP serverless platforms, GAE was Google's first cloud product and the original ("OG") PaaS serverless platform, debuting the 1st-generation platform in 2008 via a blog post and video introduction. It is GCP's "Serverless 1.0" generation platform, supporting Python, Java, Go, and PHP as the first language runtimes.

GCF and GCR arrived nearly a decade later, comprising GCP's "Serverless 2.0" platforms. Google Cloud prefers new users explore these before considering GAE. However, to show ongoing innovation for 1.0 too, the GAE team launched its 2nd-generation platform around the same time, adding support for newer versions of the existing language runtimes, e.g., Python 3, but also adding Ruby and Node.js to the fold.

With all three of GAE, GCF, and GCR, you may be wondering which platform is right for you. Yes, you should look at the Serverless 2.0 platforms first, but choosing between them is typically a "use case" evaluation. Below is an illustration of the typical scenarios for each platform, and yes, there's a lot of "gray area" where you can pick from more than one option:

[IMAGE] GCP serverless platforms

 

⚠️ GAE 1st-gen and early 2nd-gen platform deprecation
Early in 2024, the GCP team initiated the eventual deprecation of GAE's 1st-gen and early 2nd-gen runtimes. It's not likely developers are starting new Python 2.7 or 3.7, Java 8, Go 1.18 (or older), PHP 5 or 7, Node.js 16 (or older), or Ruby 2 projects today anyway, so this makes sense. However, if you are maintaining applications using those runtimes, you are affected, so I wrote a separate post that covers what you need to know so you can take action today.

Nebulous serverless sample app

Deciding between GAE, GCF, and GCR may still be challenging even after looking at the above use cases, so my colleague and I created some video content to help you pick a serverless platform as well as to show how to design serverless apps.

Even if you made the right decision at the time, you may also wonder whether you can "switch" platforms if necessary. Around that time, I was also wondering whether it was possible to write apps that could be deployed to all three with no code changes. What do developers do when they're curious? They try to build it.

After getting through some speed bumps, I finally came up with a set of sample apps using a template that was flexible enough to be deployable to all 3 platforms without any code changes. In addition to the repo, I also wrote a post and produced a video for you to learn more about the first app, which as an added bonus, also demonstrates how to call GCP APIs from serverless, in this case, the GCP Cloud Translation API. The other sample apps that came later demonstrate how to call non-Cloud Google APIs, like Maps and Sheets.

Summary

Serverless compute platforms provide the "infrastructure boilerplate" necessary for developers to focus on the solutions they're building, helping organizations get to market sooner and increasing innovation speed. Whether your service goes viral or no one shows up, serverless platforms' ability to auto-scale up, and down to zero, handles both extremes, with the latter yielding enormous cost savings when you're not getting traffic. Because serverless platforms perform this heavy lifting, more companies can get their solutions online fast, contributing to the continuing growth of public cloud services, serverless included.

For all its benefits, not every use case is meant to live on serverless for the long term, specifically apps and services that get constant, predictable traffic on a 24x7 basis. These workloads won't experience the cost savings associated with auto-scaling to zero, so it is most likely more beneficial for them to run on VM-based compute solutions instead.

For everyone else, especially startups, there's serverless. Join me as I explore all four of Google's serverless platforms in upcoming posts. In the meantime, if you found an error in the post or have a topic you'd like for me to cover in the future, drop a note in the comments below!

Resources


^CB Insights (2018); MarketsandMarkets™ (2019 and 2020); Reports and Data (2020, 2021, 2022); Research Nester (2023)



WESLEY CHUN, MSCS, is a Google Developer Expert (GDE) in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling "Core Python" series, co-author of "Python Web Development with Django", and has written for Linux Journal & CNET. He runs CyberWeb specializing in GCP & GWS APIs and serverless platforms, Python & App Engine migrations, and Python training & engineering. Wesley was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for serverless migration and GWS developers. He holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide at conferences, user group events, and universities. Follow he/him @wescpy & his technical blog. Find this content useful? Contact CyberWeb or buy him a coffee (or tea)!
