I feel like it's been a looooong minute since I was last able to give an update on AssemblyLift. Real life is busy, summer heat makes me sluggish, and I finally took a vacation!
Still, in my spare time I've managed to finish core work around the next major revision of AssemblyLift. It took almost exactly twice as long as I had expected & planned, but I think it was worth the extra effort.
I spent a month thinking I was writing a plugin system, but it turned out to be fairly brittle and gave me trouble with respect to memory safety. So I spent another month re-working it into a solution based on Cap'n Proto RPC, and so far it seems to have paid off!
What are we talking about, anyway?
AssemblyLift is a framework I've been developing that provides code & tooling for building serverless cloud applications on services like AWS Lambda. AssemblyLift apps are written in Rust (more client languages to come!) and compiled to WebAssembly (WASM).
Running serverless functions as WebAssembly comes with several benefits:
- WebAssembly modules are inherently isolated; they run inside their own memory space, and are by default unable to open sockets or access the underlying filesystem.
- They are often faster than their counterparts written in JavaScript or Python, both common language choices for Lambda.
- They can be much lighter in weight, in terms of memory footprint and size of deployment package.
- Several languages compile to WebAssembly, offering developer choice on a common runtime environment.
On its own, that first point might look like a problem -- we're probably going to need to communicate with the outside world at some point. Luckily, WebAssembly allows the host environment to provide its own system ABI to the module code. AssemblyLift provides its own ABI, which facilitates running our modules in places like AWS Lambda.
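To make the isolation point concrete: the host decides exactly which functions a module can import, and nothing else is reachable from inside the sandbox. Below is a minimal sketch of that mechanism using the `wasmer` crate (1.x/2.x API); the import name `__asml_invoke` and its signature are placeholders of my own, not AssemblyLift's actual ABI.

```rust
use wasmer::{imports, Function, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let store = Store::default();
    let module = Module::from_file(&store, "handler.wasm")?;

    // A host function the guest is allowed to import; `__asml_invoke`
    // is a placeholder name, not AssemblyLift's real ABI surface.
    fn invoke(method_id: u32) -> u32 {
        println!("guest requested host method {}", method_id);
        0
    }

    // Only what's listed here is visible to the module -- no sockets,
    // no filesystem, unless the host chooses to expose them.
    let import_object = imports! {
        "env" => {
            "__asml_invoke" => Function::new_native(&store, invoke),
        }
    };

    let _instance = Instance::new(&module, &import_object)?;
    Ok(())
}
```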
Rather than trying to provide low-level APIs to the module (such as POSIX sockets for example), AssemblyLift provides a standard interface for registering plugin-like modules which the WASM module may access. Tasks which require network or storage access are implemented in these plugin modules, which the WASM module communicates with via the runtime host.
In the AssemblyLift framework, these plugin modules are referred to as IO modules (which I often abbreviate to IOmod). The system's design & implementation were inspired by Haskell's approach to IO, hence the name.
In the v0.1 series of AssemblyLift, the IOmod portion of things is roughly present; however, the (currently lone) DynamoDB module is compiled statically into the rest of the host binary, essentially as a proof-of-concept. My goal was to allow IOmods to be distributed as packages, which naturally means they must be loadable (or not) at run-time on a per-function basis.
Neat-O! So how does it all work? What does it look like?
At a high level, it looks a little bit like the following diagram; I did my best to understand & stick to the C4 model.
Obviously we're glossing over some implementation details, but the basic idea really is as simple as it looks. To dive a little deeper, let's look at an AssemblyLift example and walk through how a simple network call is made.
```rust
extern crate asml_awslambda;

use direct_executor;
use asml_core::GuestCore;
use asml_awslambda::{*, AwsLambdaClient, LambdaContext};
use asml_iomod_dynamodb::{structs, list_tables};

handler!(context: LambdaContext, async {
    let input: structs::ListTablesInput = Default::default();
    let response = list_tables(input).await; // this is our IOmod call
    AwsLambdaClient::success(format!("Got response {:?}", response));
});
```
The above example uses a DynamoDB IOmod to call ListTables. This method is implemented by `list_tables`, which takes a single `input` argument.

Every IOmod call has this structure (i.e. `output_struct = call(input_struct)`), and calls are always asynchronous. The `list_tables` function comes from an IOmod "guest crate" -- a Rust crate providing the WASM-compatible guest interface to the IO module.

When calling `list_tables`, the guest calls the AssemblyLift ABI to invoke the method on its behalf. The AssemblyLift runtime in turn locates and invokes the method via RPC in the IOmod host process.
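Because every call follows that same shape, additional methods read almost identically. As a sketch only -- the post's example covers `list_tables`, while a `put_item` wrapper and `structs::PutItemInput` are my assumption of what the guest crate would expose -- a write could look like:

```rust
handler!(context: LambdaContext, async {
    // Hypothetical PutItem call, same imports as the example above:
    // build an input struct, await the call, receive an output struct.
    let mut input: structs::PutItemInput = Default::default();
    input.table_name = String::from("my-table");
    // (item attributes elided for brevity)
    let response = put_item(input).await; // output_struct = call(input_struct)
    AwsLambdaClient::success(format!("Got response {:?}", response));
});
```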
Packaging & Deployment
In the current implementation, IOmods are defined per-service in an AssemblyLift app. When deployed to AWS Lambda, each service is backed by a Lambda Layer which contains all of the IOmod binaries required by the service's functions. When the AssemblyLift runtime starts, it spawns all binaries that have been layered into the environment.
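As a rough illustration of that startup step: Lambda extracts layer contents under `/opt`, so the runtime could scan a directory of layered binaries and spawn each one as a child process. The `/opt/iomod` path and the code below are my own sketch, not AssemblyLift's actual implementation.

```rust
use std::fs;
use std::io;
use std::process::{Child, Command};

// Spawn every IOmod binary that the Lambda Layer placed in the
// environment. `/opt/iomod` is an assumed location; Lambda extracts
// layer contents under /opt.
fn spawn_iomods(dir: &str) -> io::Result<Vec<Child>> {
    let mut children = Vec::new();
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.is_file() {
            // Each IOmod runs as its own host process; the runtime
            // communicates with it over RPC once it's up.
            children.push(Command::new(&path).spawn()?);
        }
    }
    Ok(children)
}
```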
When will it be ready?
I'm aiming for early October, once I've cleaned everything up and done more testing. This IOmod work also doesn't include everything I had planned for v0.2.x, but I may push the rest out to the next major release. We'll see -- stay tuned!
Update Nov 8 2020: IOmods are now available in the v0.2.x line of AssemblyLift