Serverless architecture's key benefit is on-demand execution. By definition, 1) the service owner no longer needs to maintain an always-on server, 2) the long-standing server host is replaced with low-cost on-demand resources, and 3) a job executes only when triggered to do so.
There are a few key concepts that distinguish serverless architecture from traditional server architecture: for example, serverless architecture is event-driven, and its components are often stateless.
This post will dive a little bit into event-driven serverless architecture in the context of building web service APIs in AWS.
Traditionally, regardless of what type of server you prefer, it listens on a port and handles HTTP requests. The serverless architecture removes the concept of the server (at least from the end consumers' perspective) and replaces it with an event-based job.
In the serverless world, servers are abstracted away from users. Instead, the platform exposes an event queue (e.g., SNS/SQS) and allows you to attach a job (e.g., a Lambda function) to it. The serverless infrastructure makes sure each message in the queue triggers one execution of the Lambda.
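To make the queue-to-job wiring concrete, here is a minimal sketch of what an SQS-triggered Lambda handler looks like in Python. The event shape (a `Records` list with a JSON `body`) is what AWS delivers for SQS triggers; the business logic and the `postId`/`action` fields are hypothetical placeholders.

```python
import json

def handler(event, context):
    """Minimal SQS-triggered Lambda: each record is one queued message."""
    processed = []
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        # ... business logic for a single message would go here ...
        processed.append(body)
    return {"processed": len(processed)}

# Locally simulate what the serverless runtime does for us on each trigger:
sample_event = {"Records": [{"body": json.dumps({"postId": "42", "action": "save"})}]}
print(handler(sample_event, None))  # → {'processed': 1}
```

Note there is no port, no listener, no server loop: the infrastructure invokes `handler` once per delivery batch, which is exactly the abstraction described above.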
A unique but interesting challenge soon comes into the picture: *who controls the events, and how do they scale?*
The trivial answer: the service consumer (your service client) controls how frequently events are triggered, just as before, when the client called your server APIs.
Yes, absolutely, with some nuances that make slight differences:
- The traditional server typically restricts callers to an agreed TPS, so the client needs to handle API throttling and retries
- The serverless architecture allows the client to fire off event messages and forget about them. Instead of throttling the client, the event processor carries more responsibility: it must gracefully handle a large number of events/messages without losing any.
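The first bullet is the part that quietly disappears in the fire-and-forget model. As a sketch of what a traditional-server client has to carry, here is a generic throttle-aware retry loop with exponential backoff; `call_api` is a hypothetical stand-in for any HTTP call that can return a 429 status:

```python
import random
import time

def call_with_retry(call_api, max_retries=3):
    """Client-side handling of API throttling (HTTP 429) with backoff."""
    for attempt in range(max_retries + 1):
        status = call_api()
        if status != 429:          # not throttled: success or a real error
            return status
        if attempt < max_retries:  # exponential backoff with a little jitter
            time.sleep((2 ** attempt) * 0.1 + random.random() * 0.05)
    raise RuntimeError("gave up after repeated throttling")
```

In the serverless model, none of this lives in the client; the queue absorbs the burst, and the retry/redrive behavior moves to the event processor's configuration instead.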
Not saying which way is better, but you should keep in mind the differences when choosing the service architecture.
The traditional server provides APIs such that the client often manages the call sequence. For example, in the use case of running write commands, the client may want to 1) Save Post v1.0, 2) Save Post v1.1, 3) Save Post v1.2 during fast edits on Dev.to. The client needs to chain the sequence of saves, wait for the server to respond to one request, and then fire off the next save command.
In serverless architecture, you can still do the above with Lambda + API Gateway. However, for the fun of discussing the event-driven use case, let's imagine we did the above with just events.
Here is the fun challenge: the 3 events arrive at the queue out of order: v1.2, v1.0, v1.1. The Lambda is completely stateless, so it doesn't know which logical version to process first. It simply processes the messages in arrival order and incorrectly ends up with v1.1 as the final saved state.
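The failure mode is easy to simulate locally. Here is a sketch of a naive, stateless handler applying messages in arrival order (the dict stands in for whatever store the Lambda writes to):

```python
# Arrival order differs from logical order; a naive stateless handler
# just applies each message as it comes, so the last write wins.
saved = {}

def naive_handler(message):
    saved[message["postId"]] = message["version"]

arrival_order = [
    {"postId": "p1", "version": "v1.2"},
    {"postId": "p1", "version": "v1.0"},
    {"postId": "p1", "version": "v1.1"},
]
for msg in arrival_order:
    naive_handler(msg)

print(saved["p1"])  # → v1.1 — stale, even though v1.2 already arrived
```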
Of course, we do not want that.
To resolve the sequencing challenge, one option is to keep all of the draft versions in a data store. When a Lambda spins up to process a message, it looks up the latest version in the data store and only saves that one. After resolving the challenge, the UX appears the same to the end user : )
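A minimal sketch of that idea: the handler compares the incoming version against what is already stored and only persists newer ones. The in-memory dict is a hypothetical stand-in for a real data store (in production you would need an atomic conditional write, e.g., a DynamoDB condition expression, to stay safe under concurrent Lambdas):

```python
store = {}  # stand-in for a real data store

def parse_version(v):
    """'v1.2' → (1, 2), so versions compare numerically, not as strings."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def versioned_handler(message):
    """Persist a message only if it is newer than what is already stored."""
    post_id, version = message["postId"], message["version"]
    current = store.get(post_id)
    if current is None or parse_version(version) > parse_version(current):
        store[post_id] = version  # real systems: conditional/atomic write here

for msg in [{"postId": "p1", "version": "v1.2"},
            {"postId": "p1", "version": "v1.0"},
            {"postId": "p1", "version": "v1.1"}]:
    versioned_handler(msg)

print(store["p1"])  # → v1.2
```

Now the same out-of-order arrivals converge on the latest logical version, regardless of delivery order.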
There are other ways to solve the challenge, for sure, but you get the point. Serverless architecture often implies stateless components. If your service depends heavily on state, you probably want to use a workflow engine to accomplish the tasks rather than relying only on Lambda + SQS.
When you first think about serverless, it appears light and easy to use (it is!). However, it does come with its own set of new challenges that traditional backend engineers may not have faced.
One last fun fact that I want to share: whatever name or wrapper we give it, serverless is nothing but a new coat on top of traditional server architecture. Yes, believe it or not, old-fashioned hard-core infrastructure engineers are still building traditional servers to host all the serverless components mentioned above (Lambda, SNS, SQS). Don't treat serverless as some dark magic : )
The same applies to the cloud. You get my point :)