DEV Community

Pat Cody


A Year in Review: Building a Better Serverless Platform

Velocity 9: A Serverless Platform for the Masses

As part of the graduation requirements for CS at the George Washington University, we have to work on a final project for the entire year. My teammates Gregor Peach (@Others) and Henry Jaensch (@hjaensch7) and I (@pcodes) knew from personal experience how challenging creating a serverless application can be, and we set out to change that. Velocity 9 is an easy-to-use serverless platform that integrates directly with GitHub, making it a breeze to deploy your serverless application.

What is Serverless Computing?

Serverless computing is the idea that you can write functions, upload them directly to a cloud hosting provider, and run them without needing any kind of web server. This makes it so much easier to write a web back-end, as each function you write gets its own URL that triggers it to run. Gone are the days of configuring complex servers: the serverless platform handles all of the auto-scaling and execution, so you only ever have to worry about writing code. An easy example is an Amazon Alexa skill: every time you trigger a particular skill, it executes a corresponding serverless function running somewhere, such as on AWS Lambda (Amazon's serverless platform).
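
To make that concrete, here is roughly what such a function can look like, written in the handler shape AWS Lambda expects for Python HTTP events. The platform maps a URL to this function and calls it once per request; no server code appears anywhere:

```python
# A minimal serverless function in AWS Lambda's Python handler shape.
# The platform invokes `handler` once per HTTP request to the function's URL.

def handler(event, context):
    """Return a greeting for the name passed in the request."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```

That's the entire deployable unit: no routing table, no server process, no scaling configuration.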

I called serverless development challenging, but this sounds so much easier, so where's the catch? The two main issues with serverless platforms today are ease of use and performance. I might not have to configure any servers, but AWS Lambda replaces that with its own set of complex configuration. AWS tries to appeal to everyone, including enterprise customers, which means its security model is far more complex than what a typical student or solo developer cares about. Other companies offer an easier experience, but lack the performance necessary to be a production-grade option.

The largest performance problem with serverless applications is what is known as the "cold start" problem: how long it takes for your application to start running. A typical application is always running, but a benefit of the serverless paradigm is that your function only needs to run (and cost you money) when it is actually being used. However, if it takes 3 seconds for the serverless platform to process your request, that's 3 seconds of waiting just to log in to a website. That's a lot of time, and we think we can do better.

Enter: Velocity 9

Our project addresses the two pain points typically associated with serverless platforms: ease of use and performance. We make the serverless app deployment process seamless by integrating with GitHub, so that anytime a repo's master branch is updated, the serverless app is too. The only work you have to do is give Velocity 9 access to your repository (which you can handle through GitHub!) and press "start" from our dashboard, which kicks off the initial deployment. After that, you only ever need to visit the dashboard to view function statistics, such as hits over a certain time period. No further configuration is required. We tackled the second problem, performance, by investigating novel solutions with both Docker and an experimental research project being developed at GW. As a result, we were able to create an execution environment for serverless applications that is both secure and fast.

Demo

Check out this YouTube video demonstrating how Velocity 9 works!
(video pending)

Check out the Code!

Velocity 9 is a distributed system, meaning it is actually composed of several smaller applications that work together to provide the cohesive "Velocity 9" experience. As a result, we have a full GitHub organization that holds each part. Check it out here!

Velocity 9 Architecture

As I mentioned in the previous section, Velocity 9 is a distributed system composed of five smaller programs. I'll talk a little bit about each part here and how they fit together.

Architecture Diagram

The Deployment Manager

The Velocity 9 Deployment Manager is responsible for taking code pushed to GitHub and deploying it to a Velocity 9 Worker, where it can be run. The deployment manager is written in Go and uses GitHub webhooks to detect when someone pushes new code to their repo. When this happens, it downloads the repo and bundles it into an archive, which it then sends to one of potentially many V9 Worker nodes. For clarification, we refer to a deployed serverless application as a "serverless component". The deployment manager also offers experimental support for auto-scaling: if one particular serverless component becomes overworked, a second instance of the component can be deployed to a different worker node.
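
The webhook path can be sketched in a few lines. The real deployment manager is in Go; this Python sketch only shows the decision logic, and the helper shapes (the `workers` list, the returned placement dict) are hypothetical. The `"ref"` and `"repository"` fields, however, are real fields of GitHub's push-event payload:

```python
# Sketch of the deployment manager's webhook handling (illustrative only;
# the real service is written in Go and its placement policy may differ).

def should_deploy(payload):
    """Deploy only when the push targets the master branch."""
    return payload.get("ref") == "refs/heads/master"

def handle_push(payload, workers):
    """Bundle a pushed repo and pick a worker node for it.

    `workers` is a hypothetical list of {"host": ..., "load": ...} dicts.
    """
    if not should_deploy(payload):
        return None
    repo_url = payload["repository"]["clone_url"]
    # Simple placement: hand the archive to the least-loaded worker.
    worker = min(workers, key=lambda w: w["load"])
    return {"worker": worker["host"], "repo": repo_url}
```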

The Worker Node

The Velocity 9 Worker Node is what actually runs someone's serverless component. It is written in Rust and is capable of running multiple serverless components at a time, all of which are isolated from each other. This isolation is achieved by running each component as its own Docker container, meaning Velocity 9 can hypothetically support any language capable of running inside Docker. I say "hypothetically" because some library code needs to be bundled with each serverless component, and that library only supports Python at the moment. Future work! A more experimental feature of the workers is that they also support isolation via WebAssembly, thanks to research in Dr. Gabriel Parmer's lab at GW. Execution via WebAssembly is far more efficient than Docker, meaning the cold start time is much faster.
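
As a sketch of the Docker isolation step, here is how a worker might assemble a `docker run` invocation for one component. The real worker is written in Rust, and the image name, resource limit, and port mapping below are illustrative guesses, not Velocity 9's actual configuration:

```python
# Illustrative construction of a `docker run` command isolating one
# serverless component. Building the argv as data keeps this testable
# without a Docker daemon; a real worker would exec it.

def docker_run_args(component, archive_path, port):
    """Build the argv that runs one component in its own container."""
    return [
        "docker", "run", "--rm", "--detach",
        "--memory", "256m",                     # cap memory per component
        "--publish", f"{port}:8080",            # map the component's HTTP port
        "--volume", f"{archive_path}:/app:ro",  # mount the code bundle read-only
        "--name", f"v9-{component}",
        "v9-python-runtime",                    # hypothetical base image
    ]
```

One container per component is what gives each component its own filesystem, process tree, and resource limits.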

Router

In order to access the serverless components running on different worker nodes, we need a program that knows which component is running on which node and can forward requests to the appropriate location. This is where the Velocity 9 Router comes into play. Also written in Rust, the router parses incoming REST requests, decides which worker node each one should go to, and then forwards the serverless component's response back to the client. The router load-balances by sending requests in round-robin fashion to each of the worker nodes running the requested serverless component.
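
The round-robin policy itself is tiny. The real router is in Rust; this Python sketch just shows the per-component rotation:

```python
import itertools

# Minimal per-component round-robin selection, like the router's
# load-balancing policy (sketch only; the real router is written in Rust).

class RoundRobin:
    def __init__(self):
        self._cycles = {}

    def register(self, component, worker_hosts):
        """Record which worker nodes are running this component."""
        self._cycles[component] = itertools.cycle(worker_hosts)

    def pick(self, component):
        """Return the next worker hosting this component, in rotation."""
        return next(self._cycles[component])
```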

Web Interface

To interact with your deployed components, Velocity 9 has a web interface that uses NodeJS for its back-end and React for its front-end. The component dashboard allows for pausing and resuming running components, and displays the current status of each component. The web interface also lets you see graphed statistics about your components, as well as output logs. Authentication is handled via OAuth with GitHub; V9 already requires you to have a GitHub account, so we didn't think it made sense to require an additional username and password.
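
As an illustration of the first step of that OAuth flow, here is how a back-end might build GitHub's authorization URL. The endpoint is GitHub's real OAuth authorize endpoint, but the client ID, redirect URI, and scope below are placeholders, and V9's NodeJS implementation may differ:

```python
from urllib.parse import urlencode

# Building the GitHub OAuth authorization URL (web application flow).
# GitHub redirects back to `redirect_uri` with a `code` that the back-end
# then exchanges for an access token.

GITHUB_AUTHORIZE = "https://github.com/login/oauth/authorize"

def authorize_url(client_id, redirect_uri, state):
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": state,          # CSRF-protection token, echoed back by GitHub
        "scope": "read:user",    # identity only; no repo access needed to log in
    }
    return f"{GITHUB_AUTHORIZE}?{urlencode(params)}"
```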

The Database

The final piece to this serverless puzzle is the database, which stores the state of the overall system. It tracks all components registered by users, the status of those components, and running statistics as people access them. The database uses PostgreSQL.
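
To give a feel for what that state might look like, here is a hypothetical schema sketch. The table and column names are my guesses rather than Velocity 9's actual schema, and SQLite stands in for PostgreSQL so the example is self-contained:

```python
import sqlite3

# Illustrative system-state schema (hypothetical; the real database is
# PostgreSQL and its actual tables may differ).

SCHEMA = """
CREATE TABLE components (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    repo_url  TEXT NOT NULL,
    status    TEXT NOT NULL DEFAULT 'paused'   -- e.g. 'paused' / 'running'
);
CREATE TABLE component_hits (
    component_id INTEGER REFERENCES components(id),
    hit_at       TIMESTAMP DEFAULT CURRENT_TIMESTAMP -- one row per request
);
"""

def open_state_db(path=":memory:"):
    """Open the state database and create the tables if absent."""
    db = sqlite3.connect(path)
    db.executescript(SCHEMA)
    return db
```

Dashboard queries like "hits over a certain time period" then become simple aggregations over `component_hits`.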

Lessons Learned

  • Distributed systems are hard to debug! When you have so many moving pieces, it can be tough tracking down exactly where a bug is.
  • All of us got to learn a new language (Go, Rust, JS, etc.), which was exciting, as they all have some really cool features.
  • Time management with projects is a challenge. As this was a year-long project, we tried to estimate the work we would be able to accomplish over a variety of time periods (e.g. the whole year, fall semester, winter break, spring semester) and we were usually wrong. It can be tough balancing what we want to implement with what we actually are capable of.

Overall, this project was an amazing opportunity to work on something cool for a whole year, and I'm really proud of what we accomplished. We successfully built a platform that not only works, but works well. We actually were able to host an Alexa skill on it (but that's for another blog post)! Henry and Gregor, thanks for being such great teammates!

Top comments (1)

Trevor Miranda
"Distributed systems are hard to debug! When you have so many moving pieces, it can be tough tracking down exactly where a bug is."

Leslie Lamport has you covered :)
Talk on TLA+