Haha, don't go by the literal meaning of the name. There are servers 🤦, it's just that you won't have to manage them the way you typically do. The cloud provider dynamically manages the allocation of resources.
You just write functions and deploy them to the cloud. The cloud provider stores your functions in storage. When you ask to run a function, it pulls the function from storage, runs it, and that's it.
You are charged only for your function's compute time. So, no charge when your code is not running 🤑
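To make that concrete, here is a minimal sketch of such a function, using the AWS Lambda Python handler signature (`handler(event, context)`). The greeting logic and the `name` field are purely illustrative:

```python
import json

def handler(event, context):
    # The provider calls this on each invocation; you never manage a server.
    # "event" carries the request payload; "context" carries runtime info.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Once deployed, the provider invokes this for you; there is no process to start or keep running yourself.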
It depends on the cloud provider. Here is a list of languages supported by the major providers:
- AWS: Java, Go, PowerShell, Node.js, C#, Python, and Ruby
- Google Cloud: Go, Node.js, Python
- Azure: PowerShell, Node.js, C#, Python, F#, Java
No, your code will run on the versions they support. You can find the supported versions in the respective provider's documentation; for AWS, for example, see the Lambda runtimes page.
- https://serverless.com : provider-agnostic and supports most languages
- https://www.zappa.io/ for Python
- Serverless Application Model for the AWS ecosystem
- https://serverlesslibrary.net : libraries by Azure Serverless Community
If servers run on request instead of being ready all the time, won't it take time to respond to my request? 😴
Yes, you may experience latency when you trigger a function.
When you call the function for the first time, it takes time to set up the container and bootstrap the application. This is called a "cold start". For subsequent requests, the cloud provider tries to reuse the container. How long the container is kept alive depends on the provider and other infrastructure factors. So, providers always advise writing code that doesn't assume the container will be reused.
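A common pattern that follows from this: do expensive setup lazily but keep it at module level, so a warm container can reuse it while a cold start still works correctly. A minimal sketch (the "connection" here is a hypothetical stand-in for a real client, such as a database or HTTP client):

```python
# Module-level state survives across warm invocations of the same container,
# but is rebuilt from scratch on a cold start. Never assume it persists.
_cache = {}

def make_connection():
    # Hypothetical expensive setup; a counter makes the reuse visible.
    _cache["created"] = _cache.get("created", 0) + 1
    return f"conn-{_cache['created']}"

def handler(event, context):
    # Create the connection on first use, then reuse it while warm.
    if "conn" not in _cache:
        _cache["conn"] = make_connection()
    return {"conn": _cache["conn"], "setups": _cache["created"]}
```

On a warm container, repeated invocations reuse the same connection and skip the expensive setup.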
The "cold start" time depends on various factors, like:
- Programming language: statically typed languages like Java and C# tend to take more time
- Deployments: when you update a function and redeploy, the existing containers are destroyed
- How frequently your function is called, which keeps the container "warm"
- The code size, etc.
It is not a huge problem in most cases. But if you still want to avoid cold starts, you can schedule a function that pings the function you want to keep warm at a regular interval.
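A sketch of that warm-up trick in the same handler style (the scheduler event and its `warmup` field are assumed names here, not a real provider API):

```python
import time

# Set once when the container is created; survives across warm invocations.
_container_started_at = time.time()

def handler(event, context):
    # A cron-style scheduled event pings this function every few minutes so
    # the container isn't torn down. The marker field lets us skip the real
    # work for keep-warm pings.
    if isinstance(event, dict) and event.get("warmup"):
        age = round(time.time() - _container_started_at, 1)
        return {"warm": True, "container_age_s": age}
    # ... real request handling goes here ...
    return {"warm": False}
```

The ping carries a marker so it returns immediately instead of running your real logic and inflating your bill.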
They can be triggered in many ways:
- Once deployed, you get a dedicated HTTP URL, which you can use to invoke the function
- You can configure the HTTP URLs to serve as a REST API
- The function can be scheduled to run at a specific time
- On other cloud infrastructure events, like a data-change event in storage, etc.
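The common thread in all of these triggers: the platform maps an event source to your function and invokes it when the event fires. A toy sketch of that dispatch idea (all names here are illustrative, not a real provider API):

```python
# Each trigger type is just a different event source routed to a function.
def on_http(event):
    return f"HTTP request for {event['path']}"

def on_schedule(event):
    return f"Scheduled run at {event['time']}"

def on_storage_change(event):
    return f"Object {event['key']} changed"

# The platform keeps a mapping from event source to function...
TRIGGERS = {
    "http": on_http,
    "schedule": on_schedule,
    "storage": on_storage_change,
}

def dispatch(event):
    # ...and invokes the mapped function when an event arrives.
    return TRIGGERS[event["source"]](event)
```

Whether the event is an HTTP call, a timer, or a storage change, your function code stays the same; only the event payload differs.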
I would recommend you start experimenting with serverless if you haven't already. The easiest way is to make the backend functions of your side projects serverless. And if you are yet to implement your first idea, serverless will really help remove the server-management headache and ship your idea quickly 🚀