Introduction
Welcome to part one of my series "Going Serverless with Dart". The point of this series is to show you how to write and deploy serverless logic (also known as cloud functions) to the most popular cloud providers: AWS and GCP.
Before we start, I want to briefly explain what serverless (computing) actually means. I really like this explanation from CloudFlare:
Serverless computing is a method of providing backend services on an as-used basis. A serverless provider allows users to write and deploy code without the hassle of worrying about the underlying infrastructure. Note that despite the name serverless, physical servers are still used but developers do not need to be aware of them.
In other words, serverless computing lets you use only the backend components you actually need, without worrying about deployment and maintenance. You pay for those components by usage and they auto-scale, but this also introduces the risk of a huge bill from spiked usage, such as a DDoS attack.
Amazon offers its serverless components through its cloud platform, AWS (Amazon Web Services). Serverless (aka cloud) functions on AWS are called Lambdas.
They are built on Firecracker, a virtualization technology that uses the Linux Kernel-based Virtual Machine (KVM) to manage micro virtual machines (one for each function).
Right now, you may be thinking: "Dinko, I don't care about Firecracker, what's that got to do with Dart?"
You are about to find out.
Dart Lambda Custom Runtime
Lambda officially supports typical backend languages like Node.js, Python, Go, Java, and Ruby. If you want to use another language, you can deploy it by using a custom runtime.
In February 2020, AWS introduced a custom runtime for Dart:
You can access the runtime from pub.dev. You will notice one thing: the package has not been updated in the last 3 years (at the time of writing).
Thankfully, there is a fork of the package that provides a similar API but adapted to the null safety standards we are used to.
The runtime supports many AWS services out of the box:

- Application Load Balancer
- Alexa
- API Gateway
- AppSync
- CloudWatch
- Cognito
- DynamoDB
- Kinesis
- S3
- SQS
You can also register custom events.
To access all the different AWS services, you can use packages provided by Agilord. These packages are generated high-level APIs for AWS services.
With the custom runtime and all services accessible via packages, let's get to coding.
Writing your first lambda in Dart
Given all the possibilities with lambdas, I decided to show you how to write a lambda that reacts to a DynamoDB trigger. This is extremely useful for many use cases you might want to cover, especially if you are using AWS Amplify with Flutter, as it comes with DynamoDB by default.
ℹ️ You can find the full code on Github.
The event for a DynamoDB update usually looks like this:
```json
{
  "Records": [
    {
      "eventID": "c4ca4238a0b923820dcc509a6f75849b",
      "eventName": "INSERT",
      "eventVersion": "1.1",
      "eventSource": "aws:dynamodb",
      "awsRegion": "us-east-1",
      "dynamodb": {
        "Keys": {
          "id": {
            "S": "12345"
          }
        },
        "NewImage": {
          "id": {
            "S": "12345"
          },
          "name": {
            "S": "Buy groceries"
          }
        },
        "ApproximateCreationDateTime": 1428537600,
        "SequenceNumber": "4421584500000000017450439091",
        "SizeBytes": 26,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
      },
      "eventSourceARN": "arn:aws:dynamodb:us-east-1:123456789012:table/ExampleTableWithStream/stream/2015-06-27T00:48:05.899"
    }
  ]
}
```
The data includes an array of change records (`Records`), with each record containing:

- `eventID`: unique identifier for the event
- `eventName`: type of operation (INSERT, MODIFY, DELETE)
- `dynamodb`: contains the actual data changes
  - `NewImage`: the new state of the item
  - `Keys`: primary key information
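Before wiring this into a handler, it can help to see how these fields are pulled out of the raw event map. Here is a minimal, standalone sketch using only `dart:convert` (the `sampleEvent` string is a trimmed copy of the JSON above):

```dart
import 'dart:convert';

// A trimmed version of the sample stream event shown above.
const sampleEvent = '''
{
  "Records": [
    {
      "eventName": "INSERT",
      "dynamodb": {
        "Keys": {"id": {"S": "12345"}},
        "NewImage": {
          "id": {"S": "12345"},
          "name": {"S": "Buy groceries"}
        }
      }
    }
  ]
}
''';

void main() {
  final event = jsonDecode(sampleEvent) as Map<String, dynamic>;
  final records = event['Records'] as List<dynamic>;
  for (final record in records) {
    // Only INSERT events carry a brand-new item.
    if (record['eventName'] == 'INSERT') {
      // Each attribute value is wrapped in a type descriptor ("S" = string).
      final newImage = record['dynamodb']['NewImage'];
      print(newImage['id']['S']);
      print(newImage['name']['S']);
    }
  }
}
```

Note how every attribute is nested under a type key like `"S"`; this DynamoDB JSON shape is why the handler below reads `newImage['id']['S']` rather than `newImage['id']`.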
The function we will write will modify the name of the item when its created (inserted) in a DynamoDB table.
The first thing we must do is define a handler function that will handle our events. The `FunctionHandler` requires a function name and a callback that receives a context, which contains runtime information (like region, credentials, etc.), and the raw event data.
The name you choose will be the same name you use for the deployment and management of the function later.
```dart
FunctionHandler get handleTodoCreation {
  return FunctionHandler(
    name: 'on-create-todo',
    action: (context, event) async {
      final dynamoDb = context.dynamoDb;
      try {
        final records = event['Records'] as List<dynamic>;
        await Future.forEach<dynamic>(records, (record) async {
          if (record['eventName'] == 'INSERT') {
            final newImage = record['dynamodb']['NewImage'];
            final todoId = newImage['id']['S'];
            final currentName = newImage['name']['S'];
            final modifiedName = 'Modified: $currentName';
            await dynamoDb.updateItem(
              tableName: 'todos',
              attributeUpdates: {
                'name': AttributeValueUpdate(
                  action: AttributeAction.put,
                  value: AttributeValue(s: modifiedName),
                ),
              },
              key: Key(hashKeyElement: AttributeValue(s: todoId)),
            );
          }
        });
      } catch (e) {
        print('Error processing DynamoDB stream event: $e');
      }
      return InvocationResult(requestId: context.requestId);
    },
  );
}
```
The function iterates over the records and checks for INSERT events. It then extracts the todo ID and the current name. The name is modified by adding a `Modified: ` prefix and then updated in DynamoDB.
The DynamoDB client is provided by the `aws_dynamodb_api` package by Agilord. We can create the client using the credentials available from the `RuntimeContext`. To make this easier to reuse, we can create an extension:
```dart
extension ContextExtensions on RuntimeContext {
  DynamoDB get dynamoDb => DynamoDB(
        region: region,
        credentials: AwsClientCredentials(
          accessKey: accessKey,
          secretKey: secretAccessKey,
          sessionToken: sessionToken,
        ),
      );
}
```
Lastly, we must register the function handler using the `invokeAwsLambdaRuntime` method.
```dart
Future<void> main(List<String> args) async {
  await invokeAwsLambdaRuntime([handleTodoCreation]);
}
```
We can use the AWS Console to test the function, but first, we must deploy it.
Deployment
There are many ways to deploy an AWS Lambda, but the two main ones are uploading a .zip file or a Docker image, or using an Infrastructure-as-Code solution like the Serverless Framework, AWS CDK, or Terraform.
I'll show you how to deploy using a .zip file uploaded through the AWS Console since it also teaches you a little bit about which permissions you need to grant.
ℹ️ If you are curious about IaC deployment, I would recommend using AWS CDK. This article explains really well how to use it for deploying a DynamoDB triggered Lambda.
⚠️ When deploying Lambdas in production, make sure you follow security best practices to avoid your data being compromised or leaving your endpoint vulnerable to attacks like DDoS. Common techniques include securing the Lambda with AWS API Gateway and giving it only the minimal permissions it needs.
Before starting the deployment, make sure you have a DynamoDB table named `todos`. If you don't have one, go to DynamoDB in the AWS Console and create a new table named `todos` with a partition key `id`. All other values can be left as defaults.
Enable event streaming in DynamoDB
First, we must enable DynamoDB Streams:
1. Go to your `todos` table in the DynamoDB console
2. Click on the "Exports and streams" tab
3. Under "DynamoDB stream details", click "Turn on"
4. Select "New and old images" as the view type (this gives you both before/after states)
5. Click "Turn on stream"
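The same stream configuration can be applied from the AWS CLI, assuming configured credentials:

```shell
# Turn on the stream with both old and new item images
aws dynamodb update-table \
  --table-name todos \
  --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
```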
Create a .zip file
Next, we must create a zip file.
If you are using macOS or Windows, you must use Docker for this. If you are using Linux, you can just create the zip directly.
Make sure you have Docker installed. The easiest way is to install Docker Desktop.
I've prepared a runnable bash script which will take care of this step. Your project must also contain a Dockerfile which instructs Docker how to build an image for AWS.
After running the script, you should have an `output` folder with `bootstrap` and `function.zip` files inside. This `function.zip` is what you will upload to AWS.
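For reference, the core of such a build script boils down to compiling the entry point into a binary named `bootstrap` (the name AWS expects for custom runtimes) and zipping it. A rough sketch, assuming your entry point lives at `bin/main.dart` (adjust the path to your project):

```shell
# Compile inside a Linux Dart container so the binary runs on
# Amazon Linux, naming the output "bootstrap" as Lambda requires.
docker run --rm -v "$PWD":/app -w /app dart:stable \
  dart compile exe bin/main.dart -o output/bootstrap

# Package the binary for upload.
cd output && zip function.zip bootstrap
```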
Create a lambda and upload .zip
- Go to Lambda Console:
* Open AWS Console
* Search for "*Lambda*"
* Click "*Create function*"
- Configure basic settings:
* Select "*Author from scratch*"
* Use the function name you specified in the handler (`on-create-todo` in the example)
* For Runtime, select "*Amazon Linux 2*"
* Architecture: select *x86\_64* or *arm64*, mine was arm64
* Click "*Create function*"
- Upload your zip:
* In the Code tab of your function
* Click "*Upload from*" dropdown
* Select "*.zip file*"
* Upload your Dart zip file
* Click "*Save*"
- Configure the handler:
* In the Runtime settings section
* Click "*Edit*"
* Set handler to `on-create-todo`
* Click "*Save*"
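The console steps above can also be done in one AWS CLI call. This is a sketch; `<account-id>` and the execution role name are placeholders you need to fill in:

```shell
# Create the function with the custom (provided.al2) runtime;
# the handler name must match the name given to FunctionHandler.
aws lambda create-function \
  --function-name on-create-todo \
  --runtime provided.al2 \
  --handler on-create-todo \
  --architectures arm64 \
  --zip-file fileb://output/function.zip \
  --role arn:aws:iam::<account-id>:role/<lambda-execution-role>
```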
Create a DynamoDB trigger
In the Configuration tab, click "Add Trigger".
Then:
1. Select "DynamoDB" as the trigger source
2. Find the `todos` table and click its (long) stream link
3. Make sure "Activate Trigger" is checked ☑️
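If you prefer the CLI, the trigger is an event source mapping between the table's stream and the function (a sketch; `<stream-arn>` is a placeholder for the ARN returned by the first command):

```shell
# Look up the stream ARN of the todos table...
aws dynamodbstreams list-streams --table-name todos

# ...then wire it to the function, reading new records as they arrive.
aws lambda create-event-source-mapping \
  --function-name on-create-todo \
  --event-source-arn <stream-arn> \
  --starting-position LATEST
```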
Configure Environment Variables
In the Configuration tab, find the "Environment variables" sub-tab and click "Edit". Then:

1. Click "Add environment variable"
2. Set the key to `AWS_EXECUTION_ENV`
3. Set the value to `AWS_Lambda_provided.al2`
4. Click "Save"

This is needed for the `RuntimeContext` of the Lambda.
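The same environment variable can be set from the CLI (a sketch, assuming configured credentials):

```shell
aws lambda update-function-configuration \
  --function-name on-create-todo \
  --environment "Variables={AWS_EXECUTION_ENV=AWS_Lambda_provided.al2}"
```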
Grant permissions
Lastly, we want to grant permissions.
- Click on the Configuration tab of your Lambda
- Click on "Permissions"
- Click on the role name listed under "Execution role"
- Add the required policy:
* In the IAM role page, click "*Add permissions*" → "*Create inline policy*"
* Choose **JSON** and paste this policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:DescribeStream",
"dynamodb:ListStreams",
"dynamodb:UpdateItem"
],
"Resource": [
"arn:aws:dynamodb:<region>:<account-id>:table/todos/stream/*",
"arn:aws:dynamodb:<region>:<account-id>:table/todos"
]
}
]
}
```
You can find the region and the account id in the Lambda function ARN:
![How to find the region and account id for granting permissions for DynamoDB](https://cdn.hashnode.com/res/hashnode/image/upload/v1730719831761/cb2b6b43-9678-4d1a-9e9f-79a1100e1fd9.png)
- Click "Next"
- Give it a name (e.g., `TodosStreamReadAccess`)
- Click "Create policy"
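Alternatively, the inline policy can be attached from the CLI. This sketch assumes you saved the JSON above (with `<region>` and `<account-id>` filled in) to a local `policy.json` file:

```shell
# Attach the inline policy to the Lambda's execution role
aws iam put-role-policy \
  --role-name <lambda-execution-role> \
  --policy-name TodosStreamReadAccess \
  --policy-document file://policy.json
```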
Testing the lambda
First, you can test the lambda using the "Test" tab of your Lambda, providing the JSON event shown at the beginning.
If something crashes or doesn't work, the error logs in the AWS console should be helpful. If they are not, modify the lambda to log more detail and redeploy until you can pinpoint the problem.
The real test is done by going to DynamoDB:

1. Click "View all tables"
2. Click on the `todos` table
3. Click "Explore table items"
4. Click "Create item"
5. Create an item with an `id` (e.g., `123456`) and a `name`
6. Click "Create item"
7. Refresh the page
8. The item with the same id (`123456`) should now have a `Modified:` prefix in its name
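You can also trigger and verify the same flow from the CLI (a sketch; the item values are just examples):

```shell
# Insert a test item; the stream should invoke the lambda
aws dynamodb put-item \
  --table-name todos \
  --item '{"id": {"S": "123456"}, "name": {"S": "Buy groceries"}}'

# A moment later, the name should read "Modified: Buy groceries"
aws dynamodb get-item \
  --table-name todos \
  --key '{"id": {"S": "123456"}}'
```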
Congratulations 🎉, you have successfully written and deployed your first Dart Lambda on AWS!
Conclusion
Well, there you have it! You've just built and deployed your first Dart function on AWS Lambda. While it might seem like a lot of setup at first, you now have the power to write your cloud functions in the same language as your Flutter apps. Pretty neat, right?
Sure, using Dart for Lambda functions isn't as straightforward as using Python or Node.js, and you'll have to deal with a custom runtime. But hey, being able to share code between your app and cloud functions might just make it worth the extra effort.
If you are using AWS Amplify with your Flutter app, writing Lambdas using Dart now makes you a Full-stack Flutter developer. You have the ability to write serverless logic and your cross-platform app using Flutter and Dart.
In the next part of this series, we'll take what we've learned and see how it compares to building serverless functions on the Google Cloud Platform.
If you have found this useful, make sure to like and follow for more content like this. To know when the new articles are coming out, follow me on Twitter or LinkedIn.