Bradley Schofield for Appwrite


Take your Serverless Functions to new speeds with Appwrite 0.13

🙋 What are Cloud Functions?

Cloud functions are a way of extending a cloud provider’s services to execute your code and add functionality that did not previously exist. Quite a few services have this functionality! Some examples include AWS Lambda, Google Cloud Functions, and Vercel Functions.

Amazon led the charge into cloud functions when they introduced AWS Lambda back in 2014, with Google following up four years later, making Google Cloud Functions public for all in 2018. All of that brings us to today, where Appwrite is introducing generation 2 of Appwrite Functions with a significantly improved execution model.

💻 Architecture Overview

So, what’s changed compared to previous versions? Well, as noted, our execution model has been wholly re-envisioned with speed in mind.

We first need to know how the original execution model worked to understand the changes.

0.12 Execution model flowchart

The diagram above shows how the original function execution model worked. It ran this entire flow for every single execution, essentially spinning up a new Docker container each time. Spinning up a container takes plenty of time and puts quite a lot of stress on the host machine.

Now compare this to 0.13’s model:
0.13 Execution model flowchart

The updated model is a lot more complicated (even though this is a significantly simplified graph), and it no longer spins up a new runtime with every execution. Not only that, but each runtime now has a web server inside it to handle executions. Instead of command-line executions, we now use HTTP requests, making executions much faster.

This method of execution does mean a couple of changes for users. For instance, users must enter their script's filename instead of an entire command. They also now have to export their function. More details can be found in our functions documentation.

📦 Dependency Management

Remember having to package your dependencies with your function code manually? Well, no more! With Appwrite 0.13, we have introduced a build stage into functions that automatically installs any dependencies you need. The build stage is also used to compile functions written in compiled languages so they are ready for execution. Specific steps may be required for some languages, so we recommend checking our updated functions documentation.
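For a Node.js function, this means a dependency only needs to be declared in the usual manifest, and the build stage installs it for you. A hypothetical example:

```json
{
  "name": "phone-prefix-function",
  "main": "src/index.js",
  "dependencies": {
    "node-appwrite": "^5.0.0"
  }
}
```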

⏳ Benchmarks

Thanks to our new execution model, functions are now over ten times as fast as before! We have also introduced the ability to use compiled languages for the first time in Appwrite, bringing Rust and an improved Swift runtime into the mix with some awe-inspiring execution times. Why don’t we check out some solid benchmark numbers comparing 0.12 to 0.13 in execution time and scale?

Our first test is a simple “Hello World!” response from NodeJS 17.0 using the asynchronous execution model. We use the asynchronous method to compare the two versions because 0.12 does not support synchronous functions, and comparing asynchronous with synchronous would not be fair. We use k6, running on our local device, as the benchmarking tool for all scripts. To properly benchmark executions, we prepared a proxy that can freeze requests and count how many requests hit it in a specific timespan. The flow looks as follows:

  1. Create an Appwrite function deployment and activate it
  2. Spin up a proxy server with request freezing enabled
  3. Run the k6 benchmark for 60 seconds
  4. Unfreeze proxy server
  5. Wait for all executions to finish

With this setup, k6 creates as many executions as possible in 60 seconds. Only the first execution starts during this time, and it won’t finish until the proxy server is unfrozen. Freezing ensures that executions don’t eat up CPU while we benchmark how many executions Appwrite can create.

After one minute, the k6 benchmarks are completed, and we unfreeze the proxy server, thus resuming the executions queue. We let all executions finish while tracking timing data on the proxy server.

The results of this benchmark were breathtaking! Appwrite 0.12 created 20700 executions, while version 0.13 created 20824. This tiny improvement of only 0.6% was expected, as we did not refactor the creation process in this version. The stunning performance shows when comparing how long these functions took to execute. While Appwrite 0.12 ran at a rate of 360 executions per minute, Appwrite 0.13 hit a shocking 5820 executions per minute!

Appwrite Function Benchmarks

The second test we ran used Appwrite in a real-world scenario to see how much the average execution time improved. For this test, we prepared the same script in six different runtimes. We used an example script that converts a phone number to its country code, as covered in the Open Runtimes examples GitHub repository. The script performed the following steps:

  1. Fetch country phone prefixes database from Appwrite Locale SDK
  2. Validate request payload
  3. Find a match in prefix
  4. Return country information
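The core of those steps can be sketched like this. The prefix table is hardcoded here as an assumption to keep the sketch self-contained; the real script fetches the full database from the Appwrite Locale SDK.

```javascript
// Illustrative sketch of the benchmark script's logic: validate the payload,
// then find the longest matching country phone prefix.
const PREFIXES = { '+1': 'US', '+44': 'GB', '+420': 'CZ' }; // tiny sample, not the real dataset

function countryForPhone(phone) {
  // Step 2: validate the request payload.
  if (typeof phone !== 'string' || !phone.startsWith('+')) {
    throw new Error('Invalid payload: expected a phone number like +14155550100');
  }
  // Step 3: try the longest prefixes first so +420 wins over +4.
  const match = Object.keys(PREFIXES)
    .sort((a, b) => b.length - a.length)
    .find((prefix) => phone.startsWith(prefix));
  // Step 4: return the country information, or null if nothing matched.
  return match ? { prefix: match, country: PREFIXES[match] } : null;
}
```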

For benchmarking these functions, we used the same k6 technology, but this time in a simpler flow:

  1. Create function deployment and activate it
  2. Run the benchmark for 60 seconds
  3. Wait for executions to finish
  4. Calculate average execution time

We ran these scripts in six different runtimes (languages) on both 0.12 and 0.13, and the results left our jaws on the floor! The most surprising result was in Dart, where we managed to run this script in less than one millisecond!

On that note, let’s start with Dart. Dart is a compiled language, meaning the result of a build is a native binary. This makes execution extremely fast, as everything is already ready for our server in zeros and ones. Due to poor support for compiled languages in 0.12, the average execution time of our function was 1895ms. Our expectations were pretty high when running the same script in 0.13, but the incredible drop to a 0.98ms average execution time left us speechless!

Dart Execution time comparison graph

Let’s continue with another commonly used language, NodeJS. The same function that took 325ms to execute in 0.12 took only 1.45ms in 0.13! This result surprised me the most, as I didn’t expect such great numbers from an interpreted language.

NodeJS Execution time comparison graph

Following NodeJS, we compared Deno, which had similar results, averaging around 3ms in version 0.13. The gap was slightly smaller, as this function took only 145ms in 0.12.

Deno Execution time comparison graph

We continued the test with PHP, a well-known language running numerous websites on the internet. While the 0.12 average stood at 106ms, in 0.13, the average dropped to 7ms.

PHP Execution time comparison graph

We continued to run the test in Python and Ruby, both with similar results. In 0.12, Python took 254ms to execute, while Ruby averaged 358ms. Believe it or not, in version 0.13, Python took an incredible 11ms, with Ruby a little faster at 9.5ms.

Python Execution time comparison graph

Ruby Execution time comparison graph

As you can see, the execution rate has significantly improved with this release, and we look forward to seeing how developers using Appwrite will utilize these new features.

💪 Engineering Challenges

To allow for synchronous execution and prioritize speed, we decided to depart from the task-based system that most of our workers use and instead add a new component to Appwrite called the executor. The executor handles all orchestration and execution responsibilities, removing the Docker socket from the functions worker. It is an HTTP server built with Swoole and Utopia, using various Appwrite libraries to interact with the database.

One of the initial challenges was creating an orchestration library that would allow us to easily switch away from Docker down the line if we wanted to. This change would enable us to use other orchestration tools like Kubernetes or Podman and allow our users to run Appwrite using their favorite orchestration tools. Currently, we have two adapters for this library, Docker CLI and Docker API; however, we plan to grow this selection as time progresses.
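The adapter pattern behind that library can be sketched as follows. This is an illustrative sketch in JavaScript for brevity (the actual library is not JavaScript, and the class and method names here are assumptions, not its real API): callers talk to one orchestration interface, and each backend plugs in as an adapter.

```javascript
// Sketch of an orchestration abstraction with swappable adapters, so the
// Docker backend could later be replaced by Kubernetes, Podman, etc.
class Orchestration {
  constructor(adapter) {
    this.adapter = adapter; // any object implementing run() and remove()
  }
  run(image, name) { return this.adapter.run(image, name); }
  remove(name) { return this.adapter.remove(name); }
}

class DockerCLIAdapter {
  // A real adapter would shell out to `docker run` / `docker rm`; here we
  // just record the calls so the sketch stays self-contained and testable.
  constructor() { this.calls = []; }
  async run(image, name) { this.calls.push(['run', image, name]); return name; }
  async remove(name) { this.calls.push(['rm', name]); return true; }
}
```

Swapping backends then only means handing `Orchestration` a different adapter object; no calling code changes.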

The next challenge was that Swoole, unfortunately, had a few bugs in its coroutine cURL hook, which we use to communicate with the Docker API adapter of the orchestration library. These bugs forced us to use the Docker CLI adapter instead, which led to higher wait times and rate-limiting problems, causing instability under higher loads.

Our solution was to introduce a queue-like system for Docker operations. This change makes bringing up runtimes slightly slower, but actual executions always use a cURL request, so execution times are not negatively affected.
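The idea behind the queue can be sketched as a promise chain that serializes Docker operations so only one runs at a time. This is a minimal illustration of the concept, not Appwrite's actual implementation (which is PHP/Swoole):

```javascript
// Minimal sketch of a queue-like system: each Docker operation is chained
// onto the previous one, trading a little runtime-startup latency for
// stability under load (no bursts of concurrent CLI invocations).
function createDockerQueue() {
  let tail = Promise.resolve();
  return function enqueue(operation) {
    const result = tail.then(() => operation());
    tail = result.catch(() => {}); // keep the chain alive after failures
    return result;
  };
}
```

Because executions themselves go over HTTP to the already-running runtime, only container start/stop pays this serialization cost.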

🌅 Conclusion

With these improvements, we hope that Appwrite can deliver even more to help you build your dream application without sacrificing speed or flexibility. We all look forward to seeing what everyone builds with these new features and improvements to function execution speed.

We encourage you to join our Discord server, where you can keep up to date with the latest on Appwrite and get help if you need it from the very people who develop Appwrite!
