Mart Laul
FaaS != Serverless –> discussion

I'm trying to understand more about what the community thinks about serverless and FaaS. Here are my thoughts so far.

Since the introduction of AWS Lambda in 2014, event-driven, serverless computing has been mentioned hand in hand with FaaS (Function-as-a-Service). However, I believe serverless is far more than an AWS Lambda function, an Azure Function, or a Google Cloud Function – it's a whole ecosystem.

Serverless seems to be more like a spectrum, combining many different areas. Looking at AWS's infrastructure, here are some of the services that, in my opinion, follow serverless principles:

  • Compute units like Lambda
  • Managed databases like DynamoDB (pay-per-use, scaled by the cloud provider)
  • File storages like S3
  • Queues like SQS

Since AWS already takes care of managing and scaling these services, it is safe to say they are part of serverless.

Even looking at simple AWS Lambda use cases, we can see that none of them are Lambda-only – they combine Lambda with DynamoDB, S3, SQS, or something similar.
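To make that concrete, here is a minimal sketch of the most common combination: a Lambda function triggered by SQS that writes each message into a DynamoDB table. The table name and payload fields are hypothetical, and the table resource is injectable so the handler can be exercised locally without AWS credentials.

```python
import json

def handler(event, context, table=None):
    """Minimal handler for an SQS-triggered Lambda (sketch).

    `table` is injectable for local testing; in a real deployment you
    would pass a boto3 DynamoDB Table resource, e.g.
    boto3.resource("dynamodb").Table("orders")  # hypothetical table name
    """
    items = []
    # An SQS-triggered Lambda receives a batch of records in event["Records"]
    for record in event["Records"]:
        body = json.loads(record["body"])
        item = {"id": record["messageId"], "payload": body}
        items.append(item)
        if table is not None:
            table.put_item(Item=item)  # persist the message to DynamoDB
    return {"processed": len(items)}
```

Even in this tiny example, three serverless services (Lambda, SQS, DynamoDB) are doing the work – none of them useful alone.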

What else would you consider as serverless?

Top comments (1)

Renato Byrro • Edited

The way I see it, "serverless" is a system that abstracts away from me (the developer) the need to think about provisioning, scaling, and maintaining infrastructure.

SQS and S3 services predate Lambda by many years, but they fit the "serverless" category because they meet the key principle above.

I also think there are secondary - but still highly important - characteristics of a "serverless" system:

Pricing should ideally be a function of actual usage, which reduces waste from idle resources and lowers the financial risk for users.
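The "pricing as a function of usage" idea can be sketched as a tiny cost model. The two default rates below are illustrative only (check your provider's current pricing); the point is that with no invocations, compute cost drops to zero.

```python
def lambda_cost(invocations, avg_duration_ms, memory_gb,
                price_per_request=0.20 / 1_000_000,   # illustrative rate
                price_per_gb_second=0.0000166667):    # illustrative rate
    """Rough pay-per-use cost model for a FaaS workload (sketch).

    Cost scales with actual usage: requests plus GB-seconds of compute.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    return invocations * price_per_request + gb_seconds * price_per_gb_second
```

Contrast this with a provisioned server, whose cost is the same whether it handles a million requests or none.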

There should be clarity around SLAs – not only availability, but specific service metrics. If it's a database, what is the maximum number of queries I can make per second? If it's a compute platform, how many jobs can I run concurrently?

AWS, for example, provides clear service limits (and allows increasing them on request). This lets developers model the application around how the infrastructure will behave, and we can trust it will behave that way almost all the time. That's very difficult to accomplish when we deploy and manage our own infrastructure.
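As a sketch of modeling an application around documented limits: boto3's Lambda client exposes `get_account_settings()`, whose response includes an `"AccountLimit"` object with the account's concurrency ceiling. A client-side guard can use that shape before dispatching a batch of jobs; the numeric values below are illustrative, not real account limits.

```python
def can_dispatch(account_limit, jobs_in_flight, batch_size):
    """Check whether a new batch fits under the documented concurrency limit.

    `account_limit` mirrors the "AccountLimit" field returned by boto3's
    lambda client get_account_settings() call, e.g. (illustrative values):
        {"ConcurrentExecutions": 1000, "UnreservedConcurrentExecutions": 900}
    """
    # Prefer the unreserved pool if reported; fall back to the account total.
    ceiling = account_limit.get("UnreservedConcurrentExecutions",
                                account_limit["ConcurrentExecutions"])
    return jobs_in_flight + batch_size <= ceiling
```

Because the provider publishes and enforces these limits, a guard like this can be written once and trusted, instead of being re-measured every time the fleet changes.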