The serverless topic is getting hotter and hotter these days. The idea behind this architecture style is that a large part of the headaches related to a server's operational responsibilities can be delegated to a 3rd-party provider, so that developers can focus entirely on writing code aligned with the business goal the application serves.
In this post, I’m going to illustrate how ridiculously easy it is to go from don’t-even-know-where-to-put-my-hands to creating a simple API with AWS Lambda:
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app
What I’m not going to do here, instead, is illustrate how to do it properly.
Why..?
The fact is that, on one side, there are tons of things I’m going to knowingly omit (like how to handle routing properly or how to configure serverless webpack and so forth) and, on the other, what could look like a smart solution to me could be a really bad one for someone else, depending on the project’s needs. So, I came up with the idea of keeping it as generic as possible and trying to write an introduction to the subject. And the topic is so vast that it’s probably best to chunk it anyway.
Specifically, what I would like to do is to go through the process of deploying a lambda using the Serverless Framework that, as its own documentation says:
helps you build serverless apps with radically less overhead and cost. It provides a powerful, unified experience to develop, deploy, test, secure and monitor your serverless applications.
The Serverless Framework is so easy to use that it would be equally easy to miss some of the things it does for us under the hood. That is why in this post I'm also going to focus on the AWS console side of the process.
Now, before getting to the juicy part, there are a couple of things we want to take care of.
1. Create an AWS Account:
First thing to do is to create an AWS Account, if you don't have one already.
2. Create AWS Access Keys:
- Go to Services -> IAM:
- Inside your IAM page, Go to Users -> Add Users;
- Type the new user name and, since we're going to need an access key ID and a secret access key, give the new user Programmatic access -> Next: Permissions;
- Inside the Set Permissions page, we grant the user administrator access that provides full access to AWS resources and services:
- Add Tags: optional - we can ignore this part for now (unless you want to add user's information like email, job title, etc);
- Review and Create User;
- Save Credentials. You can download the .csv file or copy the access key and the secret directly:
- We can open up the terminal and type:
serverless config credentials --provider aws --key {Access_key_ID} --secret {Secret_access_key}
And we should receive a message informing us that our AWS access keys were successfully stored. This step is going to be necessary when we want to deploy our Lambdas to AWS.
3. Start our first serverless project:
So let’s create a new directory and let’s say we want to call it: my-app-serverless. If you've got a better name, go ahead and use it (just make sure to keep it while you’re going through the steps below).
We'll also want to install the Serverless Framework itself globally. It’s on npm, so:
npm install -g serverless
And now we want to create the actual project: the corresponding configuration yml, our first function, etc.
We don’t need to do it from scratch; we let Serverless do its magic for us. How? Well, to create a new project we use the command create followed by one of the following options:
a. the template name we want to use;
b. the template url;
c. the local path to your template.
I’m going to pass in a template name, which is composed of the provider name and the runtime environment I would like to use. They are, respectively, AWS and Node.js, separated by a dash. So it’s going to be aws-nodejs; * you can check all the supported providers and the template list in the serverless documentation.
So, if we're inside the directory where the project is going to be created we can just launch:
sls create -t aws-nodejs
Otherwise we can use the path flag -p / --path to specify it.
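For instance, a minimal sketch (my-app-serverless is just the directory name we picked above):

sls create -t aws-nodejs -p my-app-serverless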
Wait a sec... sls? What is that? And what is happening here?
sls is short for serverless. What we’re basically doing here is pretty simple: we are asking the framework to create a new serverless project inside the directory for us, using an already existing template named aws-nodejs, which also means that we’re letting serverless know that we want to deploy our code to the AWS cloud provider. If we wanted, for instance, to deploy to Azure, we would have to use the appropriate template for that.
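For example (azure-nodejs is one of the template names listed in the serverless docs):

sls create -t azure-nodejs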
Now, looking at our directory, we've probably already noticed that serverless has done a bunch of stuff under the hood. We got a .serverless dir, a configuration serverless.yml and a hello handler (inside the handler.js file) already set up. How cool is that? Basically, we could already deploy it. But we won’t, not for now.
First, we need to take a look at the serverless.yml file. We can ignore for now all the comments and focus on the not-commented part.
This is how it looks:
service: my-new-cool-serverless-project
provider:
  name: aws
  runtime: nodejs10.x
functions:
  hello:
    handler: handler.hello
service: This is the namespace for the group of lambdas you’re going to be creating. You can totally rename it to whatever you believe is most suitable (by default it takes the name of the directory the service has been created in).
provider -> name and runtime: AWS and nodejs - no surprise there, am I right? After all, that’s what we asked for by passing 'aws-nodejs' as the template. There are a couple of things we could add here, like the region (if not specified, it defaults to US East (N. Virginia)) and the stage (which defaults to dev). Since I like to have everything clear in my .yml, I’m gonna go ahead and add:
  region: eu-central-1
  stage: dev
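So that the provider section now reads:

provider:
  name: aws
  runtime: nodejs10.x
  region: eu-central-1
  stage: dev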
functions: this is followed by our lambdas. Right now we've got just one function, hello, which can be renamed however we wish. Now, if you look at its handler, it points at the handler.js file that sls created and the exported function hello.
You can think of the handler as the name of the actual function that is going to be deployed on AWS platform. We can rename it as we wish inside our yml file, as long as we keep the handler path relative to the actual function's source. So, if we were to move the handler to a source directory /src we would have to rewrite it:
functions:
  hello:                        # the name of the lambda function
    handler: src/handler.hello  # where the function source code is and the name of the function in the code
Pretty straightforward, right? Ok, let’s take a look at the handler function itself. At the time of writing it looks like:
"use strict";module.exports.hello = async event => { return { statusCode: 200, body: JSON.stringify( { message: "Go Serverless v1.0! Your function executed successfully!", input: event }, null, 2 ) }; // Use this code if you don't use the http event with the LAMBDA-PROXY integration // return { message: 'Go Serverless v1.0! Your function executed successfully!', event };};
As you can see, this function takes in an event parameter and returns an object with a status code and a stringified body. About the Event: if you go and look for it in the serverless documentation for the AWS provider, you’ll see that it’s described as “anything that triggers an AWS Lambda Function to execute”. It actually makes sense, since we know that lambdas are event-triggered. You can take a look at all the kinds of events our lambda can subscribe to in AWS, and we’re going to do it right after our deployment. And probably the event everybody is most looking forward to subscribing their lambda to is the AWS API Gateway HTTP endpoint request.
* FYI, our hello function actually also accepts a context and a callback as parameters, but I won’t touch them here as they’re not relevant to what we’re trying to accomplish;
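Just for reference, the full signature would look something like this (a sketch; in this post we only use event):

module.exports.hello = async (event, context, callback) => {
  // context carries runtime information (function name, memory limit, request id, ...)
  // callback is the pre-async/await way of returning a response
};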
Now, even though we haven't written a single line of code yet, we can already move on testing our function - and the good news is that we don’t need to deploy it first. We can test it locally to check that everything’s fine.
We’re going to use the command invoke local followed by the name of the function. So if you haven’t renamed it, the command will look like:
sls invoke local -f hello
At this point you should see on your terminal exactly what our lambda is returning.
Why are we testing it?
Well, the invoke local command is a great tool to check that our lambda is running. We just need to be aware that it won’t always work out of the box like it does in this situation. In some cases we will need to pass the actual event the lambda is going to respond to. We're going to check all the kinds of events lambdas can subscribe to a bit ahead in this post. For now, I can anticipate that we can copy the specific event type, whatever it is, from AWS, paste it inside a JSON or YAML file, and pass it to our lambda using the -p flag.
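For example, assuming we saved the copied event as event.json next to our handler:

sls invoke local -f hello -p event.json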
Now that we know our lambda is running, we are actually ready to deploy it.
How?
We can check the possibilities we have by typing:
sls deploy --help
As you can see, we've got different options, from deploying the entire serverless service to deploying single functions; in this case, we want to deploy the service, so we’ll just type in our terminal:
sls deploy
* serverless won't be able to deploy the service if you haven't set up credentials yet - in that case, go back to the Create AWS Access Keys section.
The process is going to be pretty fast since we’ve got just one function to deploy, but it’s going to take more time once the project grows.
If we want to check the deployment status - how everything's going, whether anything's failed - there are two different ways to do it:
- Going into your AWS console -> Services -> CloudFormation: here you'll find your stack under the name you gave it in your service (in the yml file). Clicking on it, you’ll be able to check all the updates. (Small piece of advice: be sure to be in the correct region - specifically, the region you passed to your yml configuration file and the region in your AWS console need to match - you can change it from the navigation bar at the top, near your login name);
- Running the deploy in verbose mode:
sls deploy -v
Once the deployment's completed, we can finally check our Lambda. From your AWS console, just go back to Services -> Lambda and you should be able to see your function under the functions list.
The name of the lambda is composed of the service name, the stage, and the function name - in our case, my-new-cool-serverless-project-dev-hello. We can click on it:
Inside our lambda we see 2 tabs: Configuration and Monitoring.
Their names seem pretty intuitive. The first one is, indeed, for Configuration. Here you can check the function's code, runtime, handler name, and all the stuff you’ve set up inside your serverless configuration file.
Here you could also add Environment variables and modify basic settings like the memory allocation for the specific lambda (it defaults to 1024MB and can go up to 3008MB - and remember that although increasing it can positively impact your performance, it’s also going to affect costs) and the timeout.
In this regard, since memory size and function execution time play (along with the cost per function invocation) a key role in the cost calculation, a good way to minimize them is to write functions that run as fast and consume as little RAM as possible. It is also good practice to set the RAM quota and timeout to a low number on your testing AWS instance (to minimize the risk of deploying a function that, e.g., falls into an endless loop and increases the resource usage and the subsequent costs we'd need to pay).
You can change the settings here or in your yml file; whatever you decide, just remember to keep it consistent: all the configuration should be kept in one place only. If you go the yml route, it could look like the sketch below.
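A minimal sketch of how that could look in serverless.yml (the values here are just examples, not recommendations):

provider:
  name: aws
  runtime: nodejs10.x
  memorySize: 128   # in MB - kept low for a test stage
  timeout: 10       # in seconds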
Inside the Monitoring tab, instead, we can check the CloudWatch Metrics relative to the function’s performance - so whenever we want to check how our lambdas are doing, this is the place we want to be.
Looking up at the top right corner of the page, there’s a Test button. Yup, I said before that we can test our lambda locally using the invoke local command, and I also said that sometimes we need to pass the event our lambda has subscribed to in order for the test to work, remember? Well, clicking on the Test button opens up a series of possibilities for us:
- To create a new test specifying the Event we want to pass to our function, and test it;
- To copy the event we want to pass to our lambda when we test it locally;
As you can see, there are really a lot of events you can use, from Amazon CloudWatch to Alexa Smart Home - Control. Clicking on any of these events will show its source, which you can copy/paste into a json/yml file.
If we choose to test our lambda now, we are free to pass any event we like, give it a name, and create our first not-really-useful test. Done? Alright, now we watch the test succeed. Niice!
Another interesting thing: if you go to the S3 service in your AWS console, you will also see the bucket for your service (named after your service name and stage, plus serverlessdeploymentbucket followed by some key). At the end of the day, that's what a serverless deployment actually is: packing the service/functions into a .zip and uploading it to the provider specified in the yml file (AWS in our case).
That's right, here's another thing the Serverless Framework is doing for us (instead of us having to manually zip and upload the service ourselves).
If, at any point, we need to remove it from S3, it's better not to do it directly from here. The best way to do it is using:
sls remove
which is going to remove the bucket and the stack, allowing us to avoid out-of-sync situations in which we would end up having to remove and re-deploy everything.
Why is it useful to know our lambdas are stored here?
In case anything goes wrong with them, we can always come here, download them, and check if there is any error in the source file.
Ok, I believe you now have a pretty good idea of where your lambda-related stuff is in the AWS console, and it’s time for us to finally start building our API.
There are, again, different ways of doing it. Theoretically, we could use Amazon API Gateway directly, which is, as the name suggests, a gateway for the API, responsible for routing our http events to our lambda functions. We could, but we don’t have to. There’s a simpler way. How does using our configuration file to achieve the same result sound?
We’re going to subscribe to the event called Amazon API Gateway Proxy (you can check it in Lambda -> function_name -> test).
Using the proxy implementation of API Gateway (which serverless uses by default) is going to give our lambda the power to decide what the response should look like. Without it, we would need to go inside Amazon API Gateway and configure there what the response should look like - and that’s probably not what we want to get into.
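For reference, with the proxy integration the object our lambda returns has to follow the shape API Gateway expects - roughly:

{
  statusCode: 200,
  headers: { "Content-Type": "application/json" }, // optional response headers
  body: JSON.stringify({ message: "..." }),        // the body must be a string
  isBase64Encoded: false                           // optional, only for binary payloads
}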
To set up the gateway locally, we have to go back to our yml file. And this time we’re going to hook our lambda to an http event.
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: /hello
          method: GET
Or we can use the shortcut:
functions:
  hello:
    handler: handler.hello
    events:
      - http: GET /hello
Now, whenever someone makes a GET request to
wherever_your_gateway_is_hosted/hello
it’s going to execute handler.hello for the hello function, pass the event that AWS gateway formatted as the first argument, wait for a response or an error, and respond back to whoever made that request.
Now, about running this function: we won’t invoke it locally, as we would not be able to check that it’s working as an api. What we need instead is a way to run an offline server that can execute our lambda.
We basically need to imitate the api gateway server on our machine. Since our functions are serverless and they’re going to be executed in a different context, the api gateway on our machine is going to execute our lambdas for us just like it would on AWS.
Let’s do it. We need to add the serverless-offline plugin to our dependencies and register it in our configuration:
yarn add serverless-offline

and inside our serverless.yml let's add:

plugins:
  - serverless-offline
The last thing we want to take care of before seeing the results of our work is to modify what we're returning from our lambda:
module.exports.hello = async event => {
return {
statusCode: 200,
body: JSON.stringify(
{
message: "Hey there, Welcome!",
input: event
},
null,
2
)
};
};
We need to stringify the body - that's not something that is going to be done for us.
And now we can just run:
sls offline
and go check if it works at: localhost:{PORT}/hello. If you didn’t set a port, it defaults to 3000. You can set it via the -P flag.
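A quick sanity check from another terminal (assuming the default port):

curl http://localhost:3000/hello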
If everything works correctly, you should see your message and the event printed on the page.
If you want to add a name parameter to greet, you'd have to add it in the configuration file, so the event would become:
- http: GET /hello/{name}
And we can take the parameter from event.pathParameters.name, so our lambda would become:
module.exports.hello = async event => {
const {
pathParameters: { name }
} = event;
return {
statusCode: 200,
body: JSON.stringify(
{
message: `Hey there ${name}, Welcome!`,
input: event
},
null,
2
)
};
};
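Restarting sls offline and hitting the endpoint with a name (Ada here is just a placeholder) should greet you back:

curl http://localhost:3000/hello/Ada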
Now, following this logic, you could add a new handler to take care of another endpoint, and so on... I am going to leave it up to you which approach you decide to experiment with.
Those that I have been playing with so far are:
- using the api gateway to proxy to one lambda and having that one function proxy to express, with express defining the routes;
- using the same handler for all the lambdas and handling routing based on the functions' names (available in the context passed to the handler) - see the sketch right after this list;
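Here's a minimal sketch of that second approach; the router name and the goodbye function are hypothetical, and it assumes your function names contain no dashes (AWS names deployed functions service-stage-functionName):

// One shared handler for every function in serverless.yml.
// The deployed name looks like my-new-cool-serverless-project-dev-hello,
// so the last dash-separated segment is the function name from the yml.
module.exports.router = async (event, context) => {
  const name = context.functionName.split("-").pop();
  switch (name) {
    case "hello":
      return { statusCode: 200, body: JSON.stringify({ message: "Hey there, Welcome!" }) };
    case "goodbye":
      return { statusCode: 200, body: JSON.stringify({ message: "See you around!" }) };
    default:
      return { statusCode: 404, body: JSON.stringify({ message: "Unknown function" }) };
  }
};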
Long story short...
We talked about the Serverless Framework - how to set up a project, what it does under the hood, how to modify the serverless.yml based on your needs. We went into more detail on lambdas: how to test them, how to work with them and where to find what you may need inside the AWS console, wrapping up with the creation of a simple working API.
And yet, there are parts I haven't even had time to touch, like lambdas' cold starts and how they can affect costs, how to build lambda functions with Webpack, or how to set up a serverless application with a GraphQL endpoint. But I am going to try to keep writing on this topic and sharing details or info that I find useful for a generally deeper understanding! :)