DEV Community

What's the WORST thing about serverless?

Manuele J Sarfatti on November 26, 2019

Every time I read about serverless it looks too good to be true. And we all know how much hype every "new thing" gets in our community.
So I'd like to ask you all what is the WORST thing that you experienced in using serverless?

The worst, and only the worst. No buts, no excepts, no goods allowed!

Maurice Williams

Vendor lock-in is much higher and testing locally becomes significantly more difficult.

Ross Coundon

If we're talking lambdas, you can mitigate this by having your handler entry point be the only thing that's aware of the vendor environment. Have that function extract the data needed from the event and pass it into a separate function responsible for doing the actual work.
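Roughly something like this (a minimal Node sketch, assuming an API Gateway-style event; createUser is just a stand-in for whatever your real business logic is):

```js
// handler.js – a minimal sketch

// Business logic: plain Node, no AWS types. In a real project this would live
// in its own module so it can be unit tested or reused outside Lambda.
async function createUser({ name, email }) {
  // ...validation, database calls, etc.
  return { id: Date.now().toString(), name, email };
}

// Entry point: the only code that knows about the Lambda / API Gateway event shape.
exports.handler = async (event) => {
  // Pull just the data the business logic needs out of the vendor-specific event
  const payload = JSON.parse(event.body || '{}');
  const user = await createUser(payload);
  // Translate the plain result back into the vendor's response format
  return { statusCode: 201, body: JSON.stringify(user) };
};
```

The business-logic function can then be covered by plain unit tests, or mounted behind an Express route, without ever touching a Lambda event.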

Bruno Louzada

Testing locally is almost impossible, so you end up having to create resources in AWS even for local development. This is very bad!

Surya Malempati

If you're using the Serverless Framework, check out the serverless-offline npm package. There are other packages like this that can emulate DynamoDB too. It's not perfect, but it's the closest you can get to it at this point, I think.

Ewan Slater

Or you can use a Docker-based, open-source serverless framework such as Fn Project, which you can run on any cloud (or your laptop), so you can test locally and deploy wherever you like.

Mauricio Ackermann

Cold start can be an issue.

Joe Pea • Edited

We have the same issues with AWS Lambdas where I work. Sometimes it takes 30 seconds just to "warm up". It's absurd!

Mauricio Ackermann

Besides that, configuring Lambdas was a pain, at least for me.

Joe Pea

Yeah. It's a pain! We don't have any customers yet, but we're paying thousands just to set up the Lambda infrastructure, which I think is not the best way to go about things, but it's not my choice. Ideally we should write a single stateless server that is always on, and when we need to scale we can figure out how to deploy to Lambdas. At the moment we're completely coupled to Lambdas and can't run our services outside of them, which makes development a pain.

Manuele J Sarfatti

How much of an issue? Like milliseconds, seconds...?

Mauricio Ackermann

A few seconds. I had some functions that took between 4 and 5 seconds on a cold start.

Molese Kekana

Shouldn't you break apart such functions?
It sounds like you're trying to do too much in a function.
What technology stack are you running?

Mauricio Ackermann

I'm running Node. My functions are very small, each doing one tiny piece of work.

Arswaw • Edited

I'm sure you know this, but sometimes you can run a script that continuously invokes it.

Avalander

That kinda defeats the point of only running it when needed.

Arswaw • Edited

You run it every 30 minutes, just as the underlying container is about to close. You don't invoke it constantly.
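For what it's worth, a keep-warm script along those lines could look something like this sketch (assuming the AWS SDK for JavaScript v3; the region and function name are placeholders, and a scheduled EventBridge rule invoking the function would do the same job without a long-running process):

```js
// keep-warm.js – periodically ping the function so its container isn't reclaimed
const { LambdaClient, InvokeCommand } = require('@aws-sdk/client-lambda');

const client = new LambdaClient({ region: 'us-east-1' }); // placeholder region
const FUNCTION_NAME = 'my-function';                      // placeholder name

async function warm() {
  // 'Event' = asynchronous invoke; we don't care about the response, only about
  // keeping a container alive. The handler can short-circuit on the warmup flag.
  await client.send(new InvokeCommand({
    FunctionName: FUNCTION_NAME,
    InvocationType: 'Event',
    Payload: Buffer.from(JSON.stringify({ warmup: true })),
  }));
}

// Fire roughly every 25 minutes, i.e. shortly before an idle container would be dropped
setInterval(warm, 25 * 60 * 1000);
warm();
```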

Mauricio Ackermann

I know that, but it doesn't seem like good practice.

Ross Coundon

That helps, but it doesn't necessarily keep all the containers running; it typically depends on the scale and volume.

Arswaw

It is more expensive than a server if you use it for everything.

Jordy Lee

That was the catch I didn't realise till I did the math. Though tbf, unless you're running a production site with really high volume, I doubt that in most cases you'll reach the point where the serverless cost > the cost of running an instance.

John

Serverless is nice if it's done right, but doing it wrong is an expensive mistake. My company had a single physical location hitting a previous dev's serverless setup, costing us about $1400 a month. As he was transitioning out, company execs changed and now have a plan to open 10 more locations in a couple of months. There's no way we can justify sustaining a serverless model for what we're doing, at least not easily.

My current project is fixing all of that with a more efficient, non-serverless design. So far I've done most of it happily on a $10/mo DigitalOcean instance. I don't expect that it will need any more processing or memory when it's completed, just more storage for logs and update distribution. And time, lots of time redoing a lot of things that serverless makes easy.

ben.balentine

That smells like a poorly implemented serverless setup or a lack of proper caching. I've run production services with heavy traffic for pennies a month on serverless.

Bruno Louzada

In AWS Lambda it's following and searching logs. I can't stand using CloudWatch Logs, it's the worst.

RenegadeMaster

Not controlling the environment. I had an issue where something in the Lambda runtime changed and the version of boto3 changed with it. This conflicted with another library I was using and everything crashed. It was like a silent library upgrade.

Taylor York

This is fixable, though. By default the Lambda runtime ships boto3 and a few other libraries and updates them as needed, as opposed to giving you nothing out of the box.

To stop the auto-update, create your own layer and install your exact version of boto3 in it.

RenegadeMaster

I used Zappa for deployment. I don't think there is a way to block the auto-update of libraries. The library was fine for about a year.

Taylor York

Think of Lambda layers like Docker image layers. A new layer overwrites the base layer, so if your layer contains its own boto3 files, those overwrite the boto3 files in the base Lambda layer.

Avalander

You spend half the day writing glue code between different services, and configuring the "infrastructure" becomes a nightmare. If I could, I'd pick SSH-ing into a good ol' server over serverless every time.

Marc de Ruyter

Make that half-a-month 🙄

Taylor York

Code Complexity.
Handler code and infrastructure code are all over the place in most code bases. It can be extremely difficult to walk through how an event goes through all of the services.

A single app with some in-memory queues and a REST handler is far simpler to read.
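For contrast, a rough sketch of the kind of single always-on app being described (assuming Express as the web framework; the route and queue are purely illustrative):

```js
// app.js – one always-on Node service: a REST handler plus an in-memory queue
const express = require('express');

const app = express();
app.use(express.json());

const queue = []; // the "in-memory queue"

// The REST handler just enqueues work and acknowledges it
app.post('/users', (req, res) => {
  queue.push(req.body);
  res.status(202).json({ queued: queue.length });
});

// A worker loop draining the queue in the same process
setInterval(() => {
  const job = queue.shift();
  if (job) console.log('processing', job);
}, 1000);

app.listen(3000, () => console.log('listening on 3000'));
```

Everything lives in one process, so following an event from the REST handler to the worker is a matter of reading one file.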

ebpetway

Timeouts for long-running scripts.

Joe Pea

We experienced this where I work too. :/

Cody Kochmann

I need to provide my own log server to have lag-free debugging. I used to think it just took a while for Lambda to update. Once I finally got a normal *ing log server installed in the environment, with syslog libraries in the code shipping everything to it, I realized CloudWatch is just a piece of * and this really could be nice and realtime, but it isn't.

Brad Duhon

My experience is that most cost and complexity issues can be overcome, but security is always an issue. Not outside security, but internal development. For instance, for a developer to create a Lambda that works, they need not only all Lambda permissions but also the ability to create IAM roles, attach policies, etc., upload to S3 if they aren't developing in the console, possibly create the S3 bucket they will use, and so on.

Which means you now have to build complex permission gates all over their users or the roles they assume. The newly released ABAC feature in IAM (AWS specifically) should help with this complexity, but it adds other kinds.

Joe Pea

Hello! That's called vendor lock-in. :)

Gregory Pierce

People will try to take their existing expertise and build a project the '3-tier way', not realizing that's the wrong way to build a serverless app, and then complain incessantly that serverless is terrible.

That and logging...

Manuele J Sarfatti

And what is the serverless way? How is it different?

Marc de Ruyter

  1. CORS
  2. "Works on localhost"
  3. Forgetting where your cloud functions are that you easily deployed - last year.
Abdul-Azeem Mohammed

1- Cold start
2- Function execution time
3- SDK tuning to fit with serverless
4- If you are using AWS... then you have to deal with huge documentation
5- Tuning everything related to the Lambda function

Muhammad

The only issue, I think, is just needing a deeper understanding of the architecture itself. Authentication and policies require more detailed concepts to get them working.

Manuele J Sarfatti

Interesting... Would you be able to give a simple example?

Muhammad

Sure, I'll upload an article, so keep yourself posted.

Coleman Beiler

Version control & deployment