DEV Community

Discussion on: Concerns that go away in a serverless world

Raunak Ramakrishnan

Hi Paul,
Great article. At my previous workplace, I set up an on-the-fly image-resizing job using AWS Lambda.

  • My initial road-block was figuring out the cost. I had to estimate the number of monthly image uploads that would need resizing and account for the cost of Lambda execution, API Gateway requests, and S3 requests, plus the size of each request and the peak frequency (I've put a rough sketch of that calculation after this list). After running the numbers, I found that at our scale it was much cheaper to run through Lambda than on dedicated servers.
  • Another issue was warm-up time for Lambda. For a strictly real-time flow, the cold start of the function can add unwanted latency. For my use case, the latency was within acceptable limits.
  • Optimizing resource use was relatively easy. We could A/B test functions with lower CPU/memory limits and check the error counts.
  • Maintenance became easy. The code base is a single Node.js project, and all configuration lives in environment variables, so we can easily keep separate sets for production, staging, and testing (small example after this list).
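
To make the cost estimate concrete, here is a rough sketch of the kind of back-of-the-envelope calculation I mean. The unit prices below are illustrative placeholders rather than current AWS rates, so substitute your own figures from the pricing pages:

```typescript
// Rough monthly cost estimate for a Lambda + API Gateway + S3 resize flow.
// All unit prices are placeholder assumptions -- check the AWS pricing pages.
interface UsageEstimate {
  monthlyRequests: number; // expected image uploads per month
  avgDurationMs: number;   // average execution time per resize
  memoryMb: number;        // configured Lambda memory size
}

function estimateMonthlyCost(usage: UsageEstimate): number {
  const lambdaPricePerGbSecond = 0.0000166667; // assumption
  const lambdaPricePerRequest = 0.0000002;     // assumption
  const apiGatewayPricePerRequest = 0.0000035; // assumption
  const s3PricePerPutRequest = 0.000005;       // assumption

  const gbSeconds =
    (usage.memoryMb / 1024) * (usage.avgDurationMs / 1000) * usage.monthlyRequests;

  const lambdaCost =
    gbSeconds * lambdaPricePerGbSecond + usage.monthlyRequests * lambdaPricePerRequest;
  const apiGatewayCost = usage.monthlyRequests * apiGatewayPricePerRequest;
  const s3Cost = usage.monthlyRequests * s3PricePerPutRequest;

  return lambdaCost + apiGatewayCost + s3Cost;
}

// e.g. 2 million resizes/month, ~400 ms each, 512 MB memory
console.log(estimateMonthlyCost({ monthlyRequests: 2_000_000, avgDurationMs: 400, memoryMb: 512 }));
```

Comparing that figure against the monthly price of the dedicated servers we would otherwise have needed is what settled the decision for us.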

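On the configuration point, the idea is simply that the same code reads everything stage-specific from the environment, so production, staging, and testing differ only in the variable set deployed with the function. The names below are made-up examples, not the ones we actually used:

```typescript
// Stage-specific settings come from environment variables set at deploy time,
// so the same bundle runs unchanged across production, staging, and testing.
// These variable names are illustrative placeholders.
const config = {
  stage: process.env.STAGE ?? "dev",
  sourceBucket: process.env.SOURCE_BUCKET ?? "images-dev",
  resizedBucket: process.env.RESIZED_BUCKET ?? "images-resized-dev",
  maxWidthPx: Number(process.env.MAX_WIDTH_PX ?? 1024),
};

export default config;
```
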
That said, much of the effort in setting up servers is a one-time cost, with less work needed for updates afterwards. To a newcomer, wrapping your head around the plethora of services AWS provides is daunting; it is a soup of weird names. But once you have a stable setup, it is easy to add or remove servers, whitelist ports, and generate SSH keys for new team members. It also pays off in the long run, because not all jobs lend themselves to serverless.

Paul Swail

Hey Raunak, thanks for sharing your experience. 🙂
It seems like you have taken a very pragmatic approach there 👍

You are absolutely correct to say that not all jobs lend themselves to serverless, and limitations such as cold starts may prove to be deal breakers for certain types of apps.

However, I honestly believe that these types of apps are the exception and not the norm.

In an organisation, you don't need to pick a side: "We only do serverless apps" or "we only do containers/VM-based apps". You can (and should) make the architecture decision on a per-application basis.

Now that I have a breadth of experience in building and running both serverless and container-based apps (on AWS), my default starting point would always be serverless, because of the faster feature velocity it enables and because scaling is (almost) completely managed for me. If a specific task seems like it won't work in a Lambda, say, then a container-based solution can be used for that piece.

Raunak Ramakrishnan

Do you have any recommended resources for porting standard CRUD apps to serverless?

Paul Swail

You could check out these 2 articles from Yan Cui:

I do find that the conversation around porting existing (brownfield) apps to serverless is much more complex than it is for greenfield apps. There are lots of different things to consider before deciding whether it's worth doing.

Raunak Ramakrishnan

Thanks a lot for the links. They are very helpful.

Paul Swail

You're very welcome