Every time I read about serverless it looks too good to be true. And we all know how much hype every "new thing" gets in our community :)
So I'd like to ask you all what is the WORST thing that you experienced in using serverless?
The worst, and only the worst. No buts, no excepts, no goods allowed!
Cold start can be an issue.
I'm sure you know this, but sometimes you can run a script that continuously invokes it.
That kinda defeats the point of only running it when needed.
You run it every 30 minutes, just as the underlying container is about to close. You don't invoke it constantly.
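Roughly something like this, just a sketch, assuming a Python handler and a scheduled trigger (e.g. an EventBridge/CloudWatch Events rule set to ~every 30 minutes) that sends a made-up `{"warmup": true}` payload:

```python
# Minimal "keep warm" sketch (assumptions: Python runtime, a schedule rule
# configured to send a hypothetical {"warmup": true} payload every ~30 min).

def handler(event, context):
    # Short-circuit scheduled warm-up pings so they don't run real work.
    if isinstance(event, dict) and event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}

    # ...normal request handling goes here...
    return {"statusCode": 200, "body": "hello"}
```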
I know that, but it doesn't seem to be a good practice
That helps, but it doesn't necessarily keep all the containers running; it typically depends on the scale and volume.
We have the same issues with AWS Lambdas where I work. Sometimes 30 seconds just to "warm up". It's ridiculous!
Besides that, configuring lambdas was a pain, at least to me.
Yeah. It's a pain! We don't have any customers yet, but we're paying thousands just to set up the Lambda infrastructure, which I don't think is the best way to go about things, but it's not my choice. Ideally we'd write a single stateless server that's always on, and when we need to scale we can figure out how to deploy to Lambdas. At the moment we're completely coupled to Lambdas and can't run our services outside of them, which makes development a pain.
How much of an issue? Like milliseconds, seconds...?
Some seconds. I had some functions that took something between 4 and 5 seconds on cold start.
Shouldn't you break apart such functions?
It sounds like you're trying to do too much in a function.
What technology stack are you running?
I'm running Node. My functions are very small, each doing one tiny service.
Vendor lock-in is much higher and testing locally becomes significantly more difficult.
If we're talking lambdas, you can mitigate this by having your handler entry point be the only thing that's aware of the vendor environment. Have that function extract the data needed from the event and pass it into a separate function responsible for doing the actual work.
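Very roughly, something like this (Python here just for illustration; `create_order` is a made-up business function):

```python
# Sketch of keeping the vendor-aware code at the edge (assumption: an API
# Gateway-style event; create_order is a hypothetical business function).
import json

def create_order(customer_id, items):
    # Pure business logic: no AWS types, trivial to unit test locally.
    return {"customer_id": customer_id, "item_count": len(items)}

def handler(event, context):
    # The only function that knows about the Lambda/API Gateway event shape.
    body = json.loads(event.get("body") or "{}")
    result = create_order(body.get("customerId"), body.get("items", []))
    return {"statusCode": 200, "body": json.dumps(result)}
```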
Testing locally is almost impossible; you end up creating things in AWS just for local development, which is very bad!
If you're using serverless framework, check out serverless-offline npm package. There are other packages like this which can emulate DynamoDB too. It's not perfect, but that's the closest you can get to it at this point I think.
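Roughly, you add it as a dev dependency (`npm install --save-dev serverless-offline`), register it in serverless.yml, and run `sls offline start`. A minimal sketch, with placeholder service/function names:

```yaml
# Minimal sketch: register serverless-offline so `sls offline start`
# serves your functions on localhost (names below are placeholders).
service: my-service
provider:
  name: aws
  runtime: nodejs18.x
plugins:
  - serverless-offline
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```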
Or you can use a docker based, open source serverless framework such as Fn Project that you can run on any cloud (or your laptop), so you can test locally, and deploy wherever you like.
It is more expensive than a server if you use it for everything.
That was the catch I didn't realise till I did the math. Though tbf, unless you're running a production site with really high volume, I doubt in most cases you'll reach the point where serverless cost > cost of running an instance.
Serverless is nice if it's done right, but doing it wrong is an expensive mistake. My company had a single physical location hitting a previous dev's serverless setup, costing us about $1400 a month. As he was transitioning out, company execs changed and now have a plan to open 10 more locations in a couple of months. No way we can justify sustaining a serverless model for what we're doing, at least not easily.
My current project is fixing all of that with a more efficient, non-serverless design. So far I've done most of it happily on a $10/mo DigitalOcean instance. I don't expect it will need any more processing or memory when it's completed, just more storage for logs and update distribution. And time, lots of time redoing a lot of things that serverless makes easy.
That smells like a poorly implemented serverless setup or a lack of proper caching. I've run production services with heavy traffic for pennies a month on serverless.
Not controlling the environment. I had an issue where something in the Lambda runtime changed and the version of boto3 changed with it. This conflicted with another library I was using and everything crashed. It was like a silent library upgrade.
This is fixable though. The default behaviour is that boto3 and a few other libraries get updated as needed, as opposed to shipping nothing out of the box.
To fix the auto update, create your own layer and install your exact version of boto3.
I used Zappa for deployment. I do not think there is a way to block auto update of libraries. The library was fine for like a year
Think of lambda layers like docker image layers. A new layer overwrites the base layer. If your new layer has boto3 files, those overwrite the base lambda layer boto3 files.
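So, as a rough sketch, you'd pin the version into your own layer (e.g. `pip install boto3==<your pinned version> -t python/`, zip the `python/` folder, attach the layer), and then you can sanity-check at runtime that your copy won. The pinned version below is made up:

```python
# Sketch: verify at runtime that your own layer's pinned boto3 "won" over the
# runtime's bundled copy (the layer contents and pinned version are assumptions).
import boto3

PINNED = "1.26.90"  # hypothetical version you installed into the layer

def handler(event, context):
    # With a layer attached, boto3.__file__ should point under /opt/...,
    # since layer packages take precedence over the runtime's bundled boto3.
    assert boto3.__version__ == PINNED, (
        f"expected boto3 {PINNED}, got {boto3.__version__} from {boto3.__file__}"
    )
    return {"ok": True, "boto3": boto3.__version__}
```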
For me, in AWS Lambda it's following and searching logs. I can't stand CloudWatch Logs, it is the worst.
You spend half the day writing glue code between different services and configuring the "infrastructure" becomes a nightmare. If I could, I'd pick ssh to a good ol' server over serverless every time.
Make that half-a-month :)
Time outs for long running scripts.
We experienced this where I work too. :/
Code Complexity.
Handler code and infrastructure code are all over the place in most code bases. It can be extremely difficult to walk through how an event goes through all of the services.
A single app with some in-memory queues and a REST handler would be far simpler to read.
My experience is that most cost and complexity issues can be overcome, but security is always an issue. Not outside security, but internal development. For instance, for a developer to create a Lambda that works, they need not only all the Lambda permissions but also the ability to create IAM roles, attach policies, etc., upload to S3 if they aren't developing in the console, possibly create the S3 bucket they will use, and so on.
Which means you now have to build complex gates all over their users or the roles they assume. The newly released ABAC feature in IAM (AWS specifically) should help with this complexity, but it adds other kinds.
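For example, the ABAC idea is roughly a policy like this, where access is allowed only when the developer's principal tag matches the resource's tag (the "team" tag key and the actions shown are just placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["lambda:UpdateFunctionCode", "lambda:InvokeFunction"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
        }
      }
    }
  ]
}
```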
Hello! That's called vendor lock-in. :)
People will try to take their existing expertise and build a project the '3-tier way' not realizing that's the wrong way to build a serverless app and then complain incessantly that serverless is terrible.
That and logging...
And what is the serverless way, how is it different?
I need to provide my own log server to have lag-free debugging. I used to think it just took a while for Lambda to update; once I finally got a proper *ing log server installed in the environment, with syslog libraries in the code shipping everything to it, I realized CloudWatch is just a piece of * and this really could be nice and realtime, but it isn't.
1- Cold start
2- Function execution time
3- SDK tuning to fit with serverless
4- If you are using AWS... then you have to deal with huge amounts of documentation
5- Tuning everything related to lambda function
The only issue, I think, is just needing a deeper understanding of the architecture itself. Authentication and policies require more detailed concepts to get them working.
Interesting... Would you be able to give a simple example?
Sure, I'll upload an article, so keep yourself posted.
Version control & deployment