MoonZoon Dev News (4): Actix, Async CLI, Error handling, Wasm-pack installer

Unlimited Actix power!

[GIF: "Hello from Actix" demo]


Welcome to the MoonZoon Dev News!

MoonZoon is a Rust full-stack framework. If you want to read about new MZ features, architecture, and interesting problems & solutions, Dev News is the right place.


Chapters


News

  • Moon - Warp replaced with Actix. There are API changes to allow you to use Actix directly from your apps.

  • mzoon - Rewritten with Tokio; implemented the --open parameter and a wasm-pack installer.

  • The entire MoonZoon codebase should be clean enough now. Comments are still missing and there should be more tests, but if you want to know how it really works, you don't have to be afraid to read the code in the MZ repo.

  • You can select the required mzoon version for heroku-buildpack-moonzoon by adding a mzoon_commit file to the repo with your MZ project.

You'll read about the Moon and mzoon improvements mentioned above in the following chapters.


And I would like to thank:

  • All Rust library maintainers. It's tough work, but it allows us to write clean code and amazing products.

Actix


Why Actix?

  • It's fast, async and popular.
  • Supports HTTP/2 and probably also H3 in the future (related issue).
  • Actix actor framework could be a good foundation for the first version of virtual actors.
  • It uses Tokio under the hood. Tokio is the most popular async runtime and we can also use it in mzoon.
  • The API feels more intuitive to me than Warp's. And we were fighting with Warp during Moon development.
  • Tide supports only HTTP/1.x.
  • Why not Rocket (detailed explanation to answer questions on the MZ chat):
    • It's too opinionated, with too many batteries included, to be used as just a library. Moon would fight with Rocket. Examples: we would need to disable Rocket's console logging, then explain to users they can't use Rocket.toml, and hope that Rocket's live reloading won't break Moon's file watchers, etc.
    • It's still less popular / downloaded than Actix.
    • We would need to find a compatible actor framework to write PoC of virtual actors.
    • I've already rewritten Moon to Actix (before Rocket published version 0.5.0-rc.1). Actix works great, so I don't see a real reason to sacrifice dozens of hours of my free time and slow down Moon development with another rewrite.
    • Every framework has its own problems - there is a chance we would encounter a show-stopper during the Rocket integration.
    • In the long term, it doesn't really matter which framework we choose. The number of reasons to communicate directly with the lower-level framework (Actix/Rocket) in Moon will decrease as the Moon API develops.
    • Rocket and Actix (and other frameworks) have pretty similar performance and APIs in many cases, so all Rust web developers should be able to learn the Actix API quickly and even migrate a project from another framework in a reasonable time.

Moon API changes

The simplest Moon app:

use moon::*;

async fn frontend() -> Frontend {
    Frontend::new().title("Actix example")
}

async fn up_msg_handler(_: UpMsgRequest) {}

#[moon::main]
async fn main() -> std::io::Result<()> {
    start(frontend, up_msg_handler, |_|{}).await
}
  • main is now async so we no longer need the init function - you can write your async code directly to the main's body.

  • The start! macro has been rewritten to a simple function start. The interesting part is the third argument. See the next example:

use moon::*;
use moon::actix_web::{get, Responder};

async fn frontend() -> Frontend {
    Frontend::new().title("Actix example")
}

async fn up_msg_handler(_: UpMsgRequest) {}

#[get("hello")]
async fn hello() -> impl Responder {
    "Hello!"
}

#[moon::main]
async fn main() -> std::io::Result<()> {
    start(frontend, up_msg_handler, |cfg|{
        cfg.service(hello);
    }).await
}
  • It's the code used in the GIF at the top.

  • cfg in the example is actix_web::web::ServiceConfig. It allows you to create custom Actix endpoints and configure the server as you wish (see the sketch after this list).

  • Multiple crates and items are reexported from moon to mitigate dependency hell caused by incompatible versions and to simplify your Cargo.toml. The current list looks like this:

pub use trait_set::trait_set;
pub use actix_files;
pub use actix_http;
pub use actix_web;
pub use actix_web::main;
pub use futures;
pub use mime;
pub use mime_guess;
pub use parking_lot;
pub use serde;
pub use tokio;
pub use tokio_stream;
pub use uuid;
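Back to the cfg hook mentioned above: here is a minimal sketch of a standalone configuration function (the health route is a made-up example, not part of Moon):

use moon::actix_web::{web, HttpResponse};

// A sketch only - the "health" route is a made-up example, not part of Moon.
fn custom_config(cfg: &mut web::ServiceConfig) {
    cfg.route("health", web::get().to(|| async { HttpResponse::Ok().body("OK") }));
}

Assuming the third start argument accepts anything callable with &mut ServiceConfig, you could pass custom_config instead of the inline closure from the example above.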

MoonZoon.toml changes

port = 8080
# port = 8443
https = false
cache_busting = true
backend_log_level = "warn" # "error" / "warn" / "info" / "debug" / "trace"

[redirect]
port = 8081
enabled = false

[watch]
frontend = [
    "frontend/Cargo.toml",
    "frontend/src",
]
backend = [
    "backend/Cargo.toml",
    "backend/src",
]
  • There is a new property backend_log_level. It sets the env_logger log level (see the sketch after this list).

    • info level is useful for debugging because it shows all requests (demonstrated in the GIF at the top).
    • Note: There are also independent 404 and 500 error handlers that call eprintln! with the error before they pass the response to the client.
    • Note: fern looks like a good alternative if we find out env_logger isn't good enough. (Thanks azzamsa for the suggestion.)
  • [redirect_server] has been renamed to [redirect] because there is no longer a redirection server. The new RedirectMiddleware is activated when you enable the redirect.
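For illustration, backend_log_level could be mapped to an env_logger filter roughly like this (a sketch, not Moon's exact implementation):

use env_logger::Builder;
use log::LevelFilter;

// A sketch, not Moon's exact code: map the config string to a log level filter.
fn init_logger(backend_log_level: &str) {
    let level = match backend_log_level {
        "error" => LevelFilter::Error,
        "warn" => LevelFilter::Warn,
        "info" => LevelFilter::Info,
        "debug" => LevelFilter::Debug,
        "trace" => LevelFilter::Trace,
        _ => LevelFilter::Off,
    };
    Builder::new().filter_level(level).init();
}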

Caching has also been improved (see the header sketch after this list):

  • cache_busting = true:
    • mzoon generates files like frontend_bg_[uuid].wasm, where uuid is a frontend build id with the type u128.
    • Moon serves the files with the header CacheControl set to MaxAge(31536000) (1 year).
  • cache_busting = false
    • mzoon doesn't change the file names at all - e.g. frontend_bg.wasm.
    • Moon serves the files with the header ETag with a strong etag set to the frontend build id. (See MDN ETag docs for more info.)
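Here is a minimal sketch of what those two serving strategies can look like with actix-web (an illustration with made-up helper functions, not Moon's exact code):

use moon::actix_web::http::header::{CacheControl, CacheDirective};
use moon::actix_web::HttpResponse;

// cache_busting = true: the file name contains the build id,
// so the browser may cache the response for a whole year.
fn cache_busted_response(body: Vec<u8>) -> HttpResponse {
    HttpResponse::Ok()
        .insert_header(CacheControl(vec![CacheDirective::MaxAge(31_536_000)]))
        .body(body)
}

// cache_busting = false: send the frontend build id as a strong ETag,
// so the browser can cheaply revalidate the unchanged file.
fn etag_response(frontend_build_id: u128, body: Vec<u8>) -> HttpResponse {
    HttpResponse::Ok()
        .insert_header(("ETag", format!("\"{}\"", frontend_build_id)))
        .body(body)
}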

Server-Sent Events

Actix unfortunately doesn't have an official SSE API so I've decided to write a custom one. The current implementation is in the file crates/moon/src/sse.rs.

  • It sends a ping to all connections every 10 seconds to recognize the disconnected ones.

  • Integration:

    1. let sse = SSE::start();
    2. App::new().app_data(sse.clone())

Moon's SSE connector:

async fn sse_responder(
    sse: web::Data<Mutex<SSE>>,
    shared_data: web::Data<SharedData>,
) -> impl Responder {
    let (connection, event_stream) = sse.new_connection();
    let backend_build_id = shared_data.backend_build_id.to_string();

    if connection
        .send("backend_build_id", &backend_build_id)
        .is_err()
    {
        return HttpResponse::InternalServerError()
            .reason("sending backend_build_id failed")
            .finish();
    }

    HttpResponse::Ok()
        .insert_header(ContentType(mime::TEXT_EVENT_STREAM))
        .streaming(event_stream)
}

and the frontend reloader:

async fn reload_responder(sse: web::Data<Mutex<SSE>>) -> impl Responder {
    let _ = sse.broadcast("reload", "");
    HttpResponse::Ok()
}

Warning: Keep in mind that browsers can open only 6 SSE connections over HTTP/1.x to the same domain. It means that when you open multiple browser tabs pointing to http://localhost, you may observe infinite loading or similar problems. The limit for HTTP/2 is 100 connections by default, but it can be negotiated between the client and the server.

Moon endpoint changes

App::new()
    // ...
    .service(Files::new("_api/public", "public"))
    .service(
        web::scope("_api")
            .route(
                "up_msg_handler",
                web::post().to(up_msg_handler_responder::<UPH, UPHO>),
            )
            .route("reload", web::post().to(reload_responder))
            .route("pkg/{file:.*}", web::get().to(pkg_responder))
            .route("sse", web::get().to(sse_responder))
            .route("ping", web::to(|| async { "pong" })),
    )
    .route("*", web::get().to(frontend_responder::<FRB, FRBO>))

All backend endpoints are prefixed with _api to prevent conflicts with frontend routes. There are other solutions, like hash routing, moving the frontend endpoint to another domain, or adding a prefix to frontend URLs, but these solutions often lead to many unpredictable problems. Let's keep it simple.

There is a new simple endpoint, ping. It's useful for testing whether the server is alive. I can imagine we could also implement a heartbeat later (Moon would call a predefined endpoint at a configured interval), as sketched below.
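Such a heartbeat could be as simple as the sketch below (the function, its parameters and the use of reqwest are assumptions - nothing like this exists in Moon yet):

use std::time::Duration;
use tokio::time;

// Hypothetical heartbeat loop - not part of Moon, just an illustration.
async fn heartbeat(url: &str, interval: Duration) {
    let mut ticker = time::interval(interval);
    loop {
        ticker.tick().await;
        match reqwest::get(url).await {
            Ok(response) if response.status().is_success() => (),
            Ok(response) => eprintln!("Heartbeat failed: {}", response.status()),
            Err(error) => eprintln!("Heartbeat request failed: {}", error),
        }
    }
}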


mzoon


Async runtime

mzoon was rewritten with Tokio. The main goal was to remove spaghetti code and boilerplate caused by manual handling of threads, channels and signals. The secondary goals were better error handling and improved performance.

There are also other async runtimes like async-std or smol, but I've decided to choose the most battle-tested and popular one. Another reason for Tokio is Actix: Actix is based on Tokio, so there should be less context switching during MoonZoon development.

Error handling

During the mzoon refactor, I've decided to integrate two nice libraries to eliminate boilerplate:

The first one is anyhow. It allows you to write ? wherever you want to return an error early. No need to write error mappers or similar stuff.

anyhow also provides the method context (and its lazy version with_context) and a macro anyhow! for creating errors. An example:

// `anyhow::Result<T>` is an alias 
// for a standard `Result<T, anyhow::Error>`
use anyhow::{anyhow, Context, Result};  

pub async fn build_backend(release: bool, https: bool) -> Result<()> {
    // ...
    Command::new("cargo")
        .args(&args)
        .status()
        .await
        .context("Failed to get backend build status")?
        .success()
        .err(anyhow!("Failed to build backend"))?;
    // ...
}
  • Notes:
    • The method .err from the example above is implemented in the crate bool_ext.
    • anyhow is useful mostly for apps. If you are writing a library, look at thiserror (written by the same author).
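For comparison, a minimal thiserror sketch (an illustration, not MoonZoon code):

use thiserror::Error;

// Library-style errors: explicit variants with derived Display and From impls.
#[derive(Debug, Error)]
enum ConfigError {
    #[error("failed to read the config file")]
    Io(#[from] std::io::Error),
    #[error("invalid port value: {0}")]
    InvalidPort(String),
}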

The second error handling library is fehler. I've decided to integrate it into mzoon once I noticed that many functions were returning Ok(()) and their signature was ... -> Result<()>. Ok(()) is a side-effect of anyhow because you want to use ? as much as possible to automatically convert concrete errors to anyhow::Error. The second reason why there were many Ok(())s is the fact that mzoon does many file operations.

I recommend reading these articles about fehler: A brief apology of Ok-Wrapping and From failure to Fehler.

So when we combine both libraries, we can write clean code without boilerplate:

use anyhow::Error;
use fehler::throws;
// ...
#[throws]
#[tokio::main]
async fn main() {
    // ...
    match opt {
        Opt::New { project_name, here } => command::new(project_name, here).await?,
        Opt::Start { release, open } => command::start(release, open).await?,
        Opt::Build { release } => command::build(release).await?,
    }
}
  • #[throws] automatically converts the return type from -> () to -> Result<(), Error> and you don't have to write ugly Ok(()) or wrap the entire match into Ok().

  • All errors before ? are automatically converted to Error and nicely written to the terminal with their contexts thanks to anyhow.
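To make the mechanics clearer, here is a rough sketch of what #[throws] saves you from writing (an illustration, not mzoon code):

use anyhow::Error;
use fehler::throws;

// With fehler - no hand-written Ok-wrapping:
#[throws]
fn parse_port(input: &str) -> u16 {
    input.trim().parse()?
}

// Roughly what you would write without it:
fn parse_port_desugared(input: &str) -> Result<u16, Error> {
    Ok(input.trim().parse()?)
}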

Let's look at another example from mzoon where we integrated the crate apply to help with chaining:

// ...
use anyhow::{Context, Error};
use apply::{Also, Apply};
use fehler::throws;

#[throws]
pub fn run_backend(release: bool) -> Child {
    println!("Run backend");
    MetadataCommand::new()
        .no_deps()
        .exec()?
        .target_directory
        .also(|directory| directory.push(if release { "release" } else { "debug" }))
        .also(|directory| directory.push("backend"))
        .apply(Command::new)
        .spawn()
        .context("Failed to run backend")?
}
  • Tip: Don't try to write "functional chains" at all costs. It's easy to get lost in long chains, they may be difficult to change, and they may increase cognitive load because the reader has to keep intermediate steps/states in their working memory. The example above is very close to the case where clean code becomes uncomfortable to read.

  • Note: We have to find the target directory and call the Moon app binary (backend) manually because cargo run always tries to build the project even if it has already been built. That slows down the build pipeline and writes unnecessary messages to the terminal. Related issue.

File Watchers

While I was rewriting std channels to the tokio ones, I encountered a problem with the notify API. Also, its event debouncing wasn't working properly in mzoon. Fortunately, the notify maintainers are working on a new major version and have already published 5.0.0-pre.x versions. The API is more flexible, but debouncing is still missing both in the new notify and in the crate futures-rs. So I had to write a custom debouncer.

The snippets below belong to the current ProjectWatcher implementation in /crates/mzoon/src/watcher/project_watcher.rs.

use notify::{immediate_watcher, Event, RecommendedWatcher, RecursiveMode, Watcher};
use tokio::sync::mpsc::{self, UnboundedReceiver, UnboundedSender};
// ...

pub struct ProjectWatcher {
    watcher: RecommendedWatcher,
    debouncer: JoinHandle<()>,
}

impl ProjectWatcher {
    #[throws]
    pub fn start(paths: &[String], debounce_time: Duration) -> (Self, UnboundedReceiver<()>) {
        let (sender, receiver) = mpsc::unbounded_channel();
        let watcher = start_immediate_watcher(sender, paths)?;
        let (debounced_sender, debounced_receiver) = mpsc::unbounded_channel();

        let this = ProjectWatcher {
            watcher,
            debouncer: spawn(debounced_on_change(
                debounced_sender,
                receiver,
                debounce_time,
            )),
        };
        (this, debounced_receiver)
    }
    // ...
  1. ProjectWatcher is a general watcher based on the notify's watcher. It's used in mzoon's BackendWatcher and FrontendWatcher.

  2. start_immediate_watcher calls notify's immediate_watcher function to register the watched paths and the callback that is invoked when notify observes a file change. The callback sends () (aka unit) through the sender.

  3. The other half of the channel - the receiver - is passed to the debouncer, so the debouncer is able to listen for all registered file system events.

  4. The debounced_sender represents the debouncer's output - basically a stream of debounced units (we can replace units with Events if needed in the future).

async fn debounced_on_change(
    debounced_sender: UnboundedSender<()>,
    mut receiver: UnboundedReceiver<()>,
    debounce_time: Duration,
) {
    let mut debounce_task = None::<JoinHandle<()>>;
    let debounced_sender = Arc::new(debounced_sender);

    while receiver.recv().await.is_some() {
        if let Some(debounce_task) = debounce_task {
            debounce_task.abort();
        }
        debounce_task = Some(spawn(debounce(
            Arc::clone(&debounced_sender),
            debounce_time,
        )));
    }

    if let Some(debounce_task) = debounce_task {
        debounce_task.abort();
    }
}

async fn debounce(debounced_sender: Arc<UnboundedSender<()>>, debounce_time: Duration) {
    sleep(debounce_time).await;
    if let Err(error) = debounced_sender.send(()) {
        return eprintln!("Failed to send with the debounced sender: {:#?}", error);
    }
}
  1. When a unit from notify's callback is received, a new task is spawned. The task sleeps for debounce_time and then sends a unit through the debounced_sender.

  2. When another unit is received before that, the sleeping task is aborted and a new one is created. You can think of it as a "debounce time reset".

Notice the two identical code blocks in the previous snippet:

if let Some(debounce_task) = debounce_task {
    debounce_task.abort();
}

The first usage "resets the debounce time", but the second one is basically an alternative to drop. Unfortunately, neither Rust nor tokio is able to clean up all garbage automatically, so we have to do it manually - a task handle does nothing when dropped in most cases.

So... how can we stop the watcher?

ProjectWatcher doesn't have only the start method - there is another one:

#[throws]
pub async fn stop(self) {
    let watcher = self.watcher;
    drop(watcher);
    self.debouncer.await?;
}
  1. Drop notify's RecommendedWatcher.
  2. A dropped watcher means our sender has also been dropped, because it was captured by the closure used as the callback / event handler owned by the watcher.
  3. When the sender is dropped, receiver.recv().await.is_some() returns false and breaks the while loop in the debouncer.
  4. The debounce task is aborted if there was one running.

Yeah, it's already quite complicated and error-prone, but we haven't finished yet.

FrontendWatcher and BackendWatcher have a similar relationship to ProjectWatcher as ProjectWatcher has to notify's Watcher. Let's look at the FrontendWatcher skeleton:

pub struct FrontendWatcher {
    watcher: ProjectWatcher,
    task: JoinHandle<Result<()>>,
}

impl FrontendWatcher {
    #[throws]
    pub async fn start(config: &Config, release: bool, debounce_time: Duration) -> Self {
        let (watcher, debounced_receiver) =
            ProjectWatcher::start(&config.watch.frontend, debounce_time)
                .context("Failed to start the frontend project watcher")?;
        // ...        
        Self {
            watcher,
            task: spawn(on_change(
                debounced_receiver,
                // ...
            )),
        }
    }

    #[throws]
    pub async fn stop(self) {
        self.watcher.stop().await?;
        self.task.await??;
    }
}

As you can see, there is another stop method that calls the previous stop method, and the rest of the code is very similar to the ProjectWatcher implementation.

Let's look at the last snippet to see the whole watcher story (/crates/mzoon/src/command/start.rs):

#[throws]
pub async fn start(release: bool, open: bool) {
    // ...
    let frontend_watcher = build_and_watch_frontend(&config, release).await?;
    let backend_watcher = build_run_and_watch_backend(&config, release, open).await?;

    signal::ctrl_c().await?;
    println!("Stopping watchers...");
    let _ = join!(
        frontend_watcher.stop(),
        backend_watcher.stop(),
    );
    println!("Watchers stopped");
}

#[throws]
async fn build_and_watch_frontend(config: &Config, release: bool) -> FrontendWatcher {
    if let Err(error) = build_frontend(release, config.cache_busting).await {
        eprintln!("{}", error);
    }
    FrontendWatcher::start(&config, release, DEBOUNCE_TIME).await?
}

So I can imagine there are some opportunities for another refactor round:

  • "Hide" loops and debouncer inside Streams.
  • Use notify's debouncer once it's integrated into the library.
  • Use async drops once Rust supports them or an alternative.
  • If you want to investigate the option "Wait until all task done" so we can just abort all tasks in a standard drop and then wait for async runtime to finish, there is the entrance to the rabbit hole.

Feel free to create a PR when you manage to simplify the code.
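For the first idea, a debouncing Stream could be sketched with stream::unfold and tokio::time::timeout roughly like this (assumptions only, not tested against the real watcher code):

use std::time::Duration;
use futures::stream::{self, Stream};
use tokio::sync::mpsc::UnboundedReceiver;
use tokio::time::timeout;

// Emit () only after `debounce_time` of silence on the channel.
fn debounced(
    receiver: UnboundedReceiver<()>,
    debounce_time: Duration,
) -> impl Stream<Item = ()> {
    stream::unfold(receiver, move |mut receiver| async move {
        // Wait for the first event; `None` means the sender has been dropped.
        receiver.recv().await?;
        // Swallow further events until the channel stays quiet for `debounce_time`.
        while let Ok(Some(())) = timeout(debounce_time, receiver.recv()).await {}
        Some(((), receiver))
    })
}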

File Compressors

Frontend files are served compressed to get them to users quickly and to reduce network traffic and server load. Only app files (in the pkg directory) are compressed at the moment, but we'll probably compress the entire public folder in the future.

mzoon compresses files when the app has been built in release mode. The result is three files instead of one: file.xxx (the original), file.xxx.gz and file.xxx.br. Then Moon serves them according to the Accept-Encoding header sent by clients (a sketch of that negotiation follows below).

We would use only the Brotli algorithm because it produces the smallest files, but Firefox supports only Gzip over plain HTTP. All browsers support Brotli over HTTPS.

Note: If we decide to compress non-cacheable dynamic content - like messages between frontend and backend - then we will probably choose Gzip because it's faster than Brotli.
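The negotiation on the Moon side could look roughly like this sketch (a made-up helper, not Moon's exact code):

use moon::actix_web::HttpRequest;

// Pick the pre-compressed file variant according to Accept-Encoding
// and return the matching Content-Encoding value (a sketch only).
fn choose_variant(request: &HttpRequest, original_path: &str) -> (String, Option<&'static str>) {
    let accept_encoding = request
        .headers()
        .get("accept-encoding")
        .and_then(|value| value.to_str().ok())
        .unwrap_or_default();

    if accept_encoding.contains("br") {
        (format!("{}.br", original_path), Some("br"))
    } else if accept_encoding.contains("gzip") {
        (format!("{}.gz", original_path), Some("gzip"))
    } else {
        (original_path.to_owned(), None)
    }
}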

Let's look at the implementation. The first snippet is from /crates/mzoon/src/helper/file_compressor.rs:

use crate::helper::ReadToVec;
use async_trait::async_trait;
use brotli::{enc::backward_references::BrotliEncoderParams, CompressorReader as BrotliEncoder};
use flate2::{bufread::GzEncoder, Compression as GzCompression};
// ...

#[async_trait]
pub trait FileCompressor {
    async fn compress_file(content: Arc<Vec<u8>>, path: &Path, extension: &str) -> Result<()> {
        let path = compressed_file_path(path, extension);
        let mut file_writer = fs::File::create(&path)
            .await
            .with_context(|| format!("Failed to create the file {:#?}", path))?;

        let compressed_content = spawn_blocking(move || Self::compress(&content)).await??;

        file_writer.write_all(&compressed_content).await?;
        file_writer.flush().await?;
        Ok(())
    }

    fn compress(bytes: &[u8]) -> Result<Vec<u8>>;
}
//...
// ------ Brotli ------

pub struct BrotliFileCompressor;

#[async_trait]
impl FileCompressor for BrotliFileCompressor {
    fn compress(bytes: &[u8]) -> Result<Vec<u8>> {
        BrotliEncoder::with_params(bytes, 0, &BrotliEncoderParams::default()).read_to_vec()
    }
}

// ------ Gzip ------

pub struct GzipFileCompressor;

#[async_trait]
impl FileCompressor for GzipFileCompressor {
    fn compress(bytes: &[u8]) -> Result<Vec<u8>> {
        GzEncoder::new(bytes, GzCompression::best()).read_to_vec()
    }
}
  • #[async_trait] allows us to write async methods in traits. (The crate async_trait, from the author of anyhow and thiserror.)

  • The combination of async-trait and fehler is deadly for the Rust compiler. That's why you see Ok(()) + Result<()> instead of #[throws]. I'm not sure whether it's an async-trait or a fehler problem - feel free to investigate it more and let me know.

  • We need to call spawn_blocking instead of spawn to move compression to a new thread because both encoders / compressors are blocking. I was trying to use async-compression, but there was a bug, probably somewhere close to the GzEncoder - the MZ example counter was producing a wasm file that always had only 9 KB instead of 16 KB. I also had to use async-compression's futures encoders with the compat layer to resolve the problem with incompatible tokio versions. Feel free to investigate it more and let me know.

  • Tip: Don't forget to call .flush() after .write_all(). Sometimes it works without .flush(), sometimes it doesn't, so it's difficult to debug.

  • read_to_vec is a custom helper - see /crates/mzoon/src/helper/read_to_vec.rs.

  • Both encoders are set to compress in the best quality (i.e. to produce the smallest files at the cost of speed).

The second and the last snippet is from /crates/mzoon/src/build_frontend.rs:

use futures::TryStreamExt;

#[throws]
async fn compress_pkg(wasm_file_path: impl AsRef<Path>, js_file_path: impl AsRef<Path>) {
    try_join!(
        create_compressed_files(wasm_file_path),
        create_compressed_files(js_file_path),
        visit_files("frontend/pkg/snippets")
            .try_for_each_concurrent(None, |file| create_compressed_files(file.path()))
    )?
}

#[throws]
async fn create_compressed_files(file_path: impl AsRef<Path>) {
    let file_path = file_path.as_ref();
    let content = Arc::new(fs::File::open(&file_path).await?.read_to_vec().await?);

    try_join!(
        BrotliFileCompressor::compress_file(Arc::clone(&content), file_path, "br"),
        GzipFileCompressor::compress_file(content, file_path, "gz"),
    )
    .with_context(|| format!("Failed to create compressed files for {:#?}", file_path))?
}
  • All files are compressed and generated in parallel thanks to spawn_blocking (explained before) and thanks to tokio::fs (we don't block the worker thread while waiting for OS file operations).

  • visit_files is a stream of files (explained in the next section). It works nicely with the function try_for_each_concurrent.

File Visitor

When you want to iterate over all files in a given directory and its nested folders, it's relatively straightforward with the standard Rust library - just go to the Rust docs for std::fs::read_dir and copy the provided example (quoted below). There is also a chance we'll see a function like fs::read_dir_all in std. Or you can use the crate walkdir from a very experienced maintainer of many Rust libraries.
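For reference, the synchronous example from the std::fs::read_dir docs looks roughly like this:

use std::fs::{self, DirEntry};
use std::io;
use std::path::Path;

// One possible implementation of walking a directory, visiting only files.
fn visit_dirs(dir: &Path, cb: &dyn Fn(&DirEntry)) -> io::Result<()> {
    if dir.is_dir() {
        for entry in fs::read_dir(dir)? {
            let entry = entry?;
            let path = entry.path();
            if path.is_dir() {
                visit_dirs(&path, cb)?;
            } else {
                cb(&entry);
            }
        }
    }
    Ok(())
}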

However, the Rust async world is still pretty new and messy. If I had chosen smol instead of tokio and was brave enough to use a library with only 602 downloads, I would probably integrate the crate async_walkdir.

Another approach would be to use walkdir to create a list of files and then process the list as needed in parallel. However, it doesn't sound like a clean solution, and in the case of a large directory tree you want to return early when the processing fails or when your file search is complete.

I'm not a big fan of recursive functions because:

  • They often lead to increased cognitive load.
  • Stack overflow is difficult to catch and debug.
  • Rust doesn't have good support for TCO/TCE (tail call optimization / elimination), although there are some libraries like Tailcall and maybe promising news in rust-lang/rfcs.
  • You often need to use Box in Rust recursive constructs (both functions and types need boxed items). The crate async-recursion basically just wraps the Future into a Box.
  • Why does NASA not allow recursion?

Fortunately, during intensive reading and searching for a better solution, I've found a nice answer on stackoverflow.com compatible with tokio and futures. I've refactored it a little bit and saved it to /crates/mzoon/src/helper/visit_files.rs. The code:

pub fn visit_files(path: impl Into<PathBuf>) -> impl Stream<Item = Result<DirEntry>> + Send + 'static {
    #[throws]
    async fn one_level(path: PathBuf, to_visit: &mut Vec<PathBuf>) -> Vec<DirEntry> {
        let mut dir = fs::read_dir(path).await?;
        let mut files = Vec::new();

        while let Some(child) = dir.next_entry().await? {
            if child.metadata().await?.is_dir() {
                to_visit.push(child.path());
            } else {
                files.push(child)
            }
        }
        files
    }

    stream::unfold(vec![path.into()], |mut to_visit| {
        async {
            let path = to_visit.pop()?;
            let file_stream = match one_level(path, &mut to_visit).await {
                Ok(files) => stream::iter(files).map(Ok).left_stream(),
                Err(error) => stream::once(async { Err(error) }).right_stream(),
            };
            Some((file_stream, to_visit))
        }
    })
    .flatten()
}

(Let me know if you know a better solution or a suitable library.)

Wasm-pack installer

I hate complicated installations and configurations, especially if they aren't cross-platform. In an ideal world, we would just write cargo install mzoon, hit enter and be done. Unfortunately, it isn't so simple even in the Rust + Cargo world.

The Rust compiler is pretty slow, so if there is a chance to avoid compilation, we should take it. This applies especially to CI pipelines. So we have to download pre-compiled binaries. But to use binaries, we first have to answer these questions:

  1. What are the available binary versions / supported platforms?
  2. What is our platform?
  3. Where should we download wasm-pack to?
  4. How should we download and unpack wasm-pack?

--

1) What are the available binary versions / supported platforms?

wasm-pack's repo has associated build pipelines for multiple platforms. So we can just look at the release assets.

The current list (version 0.9.1):

  • wasm-pack-init.exe (7.16 MB)
  • wasm-pack-v0.9.1-x86_64-apple-darwin.tar.gz (2.97 MB)
  • wasm-pack-v0.9.1-x86_64-pc-windows-msvc.tar.gz (2.69 MB)
  • wasm-pack-v0.9.1-x86_64-unknown-linux-musl.tar.gz (5.02 MB)

Note: wasm-pack-init.exe is actually an uncompressed Windows binary with a different name.

--

2) What is our platform?

There are multiple ways to determine the platform. Two of them are used in mzoon:

There is a build script build.rs in the mzoon crate with this code:

fn main() {
    println!(
        "cargo:rustc-env=TARGET={}",
        std::env::var("TARGET").unwrap()
    );
}

The only purpose is to "forward" the environment variable TARGET (available only during the build process) so it can be read at compile time in the mzoon code (/crates/mzoon/src/wasm_pack.rs):

const TARGET: &str = env!("TARGET");

Unfortunately, we can't use it directly because there are cases where it's too strict. For example, the Heroku build pipeline is identified as x86_64-unknown-linux-gnu, but we have the binary only for x86_64-unknown-linux-musl. However, the available binary works even in that Heroku pipeline. So we need more relaxed platform matching in practice:

cfg_if! {
    if #[cfg(target_os = "macos")] {
        const NEAREST_TARGET: &str = "x86_64-apple-darwin";
    } else if #[cfg(target_os = "windows")] {
        const NEAREST_TARGET: &str = "x86_64-pc-windows-msvc";
    } else if #[cfg(target_os = "linux")] {
        const NEAREST_TARGET: &str = "x86_64-unknown-linux-musl";
    } else {
        compile_error!("wasm-pack pre-compiled binary hasn't been found for the target platform");
    }
}
  • Note: The macro cfg_if belongs to the crate cfg_if.

In the code above I assume mzoon will be compiled only on the most common platforms. When someone wants to compile mzoon on other platforms, we will need to add a fallback to cargo install. There is also a small check to inform you that you may have an incompatible platform:

if TARGET != NEAREST_TARGET {
    println!(
        "Pre-compiled wasm-pack binary '{}' will be used for the target platform '{}'",
        NEAREST_TARGET, TARGET
    );
}

The example output from the Heroku build log:

Building frontend...

Installing wasm-pack...

Pre-compiled wasm-pack binary 'x86_64-unknown-linux-musl' will be used for the target platform 'x86_64-unknown-linux-gnu'

wasm-pack installed

--

3) Where should we download wasm-pack to?

wasm-pack contains a self-installer, triggered when the executable name starts with "wasm-pack-init".
The self-installer copies itself next to the rustup executable to make sure it's in PATH.

  • It isn't a bad idea, but there will be problems when multiple MZ projects need different wasm-pack versions.

wasm-pack uses the crate binary-install to install its binary dependencies like wasm-bindgen or wasm-opt. Those binaries are saved into an OS-specific global cache folder, determined by the function dirs_next::cache_dir() from the crate dirs_next.

  • It's a better idea, but we still have to manage different wasm-pack versions, and I don't like using global caches too much because it's difficult to remove old files when they are no longer needed.

So the remaining option is to download wasm-pack directly into the user's project. We could store it in the target directory, but I think the best option is the frontend directory. Users can use wasm-pack directly if they need it - e.g. to run tests until a test command is implemented in mzoon.

--

4) How should we download and unpack wasm-pack?

/crates/mzoon/src/helper/download.rs

#[throws]
pub async fn download(url: impl AsRef<str>) -> Vec<u8> {
    reqwest::get(url.as_ref())
        .await?
        .error_for_status()?
        .bytes()
        .await?
        .to_vec()
}
  • reqwest is popular, universal, async and based on tokio.

/crates/mzoon/src/wasm_pack.rs

const DOWNLOAD_URL: &str = formatcp!(
    "https://github.com/rustwasm/wasm-pack/releases/download/v{version}/wasm-pack-v{version}-{target}.tar.gz",
    version = VERSION,
    target = NEAREST_TARGET,
);

// ...

download(DOWNLOAD_URL)
    .await
    .context(formatcp!(
        "Failed to download wasm-pack from the url '{}'",
        DOWNLOAD_URL
    ))?
    .apply(unpack_wasm_pack)
    .context("Failed to unpack wasm-pack")?;

// ...

#[throws]
fn unpack_wasm_pack(tar_gz: Vec<u8>) {
    let tar = GzDecoder::new(tar_gz.as_slice());
    let mut archive = Archive::new(tar);

    for entry in archive.entries()? {
        let mut entry = entry?;
        let path = entry.path()?;
        let file_stem = path
            .file_stem()
            .ok_or(anyhow!("Entry without a file name"))?;
        if file_stem != "wasm-pack" {
            continue;
        }
        let mut destination = PathBuf::from("frontend");
        destination.push(path.file_name().unwrap());
        entry.unpack(destination)?;
        return;
    }
    Err(anyhow!(
        "Failed to find wasm-pack in the downloaded archive"
    ))?;
}
  • The formatcp! macro is exported from the nice library const_format.
  • Decompressing and unpacking is sync (aka blocking), but we still need to wait for the wasm-pack installation before we can do anything else (e.g. build the frontend). So why complicate our lives with extra wrappers like spawn_blocking? We can always make it non-blocking later if needed (see the sketch below).
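If it ever becomes necessary, the wrapper could look roughly like this (a sketch, assuming the unpack_wasm_pack signature above, not current mzoon code):

use anyhow::Error;
use fehler::throws;
use tokio::task::spawn_blocking;

// A sketch only - run the blocking unpacking on a dedicated blocking thread.
#[throws]
async fn unpack_wasm_pack_nonblocking(tar_gz: Vec<u8>) {
    spawn_blocking(move || unpack_wasm_pack(tar_gz)).await??;
}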

And that's all for today!
Thank you for reading, and I hope you are looking forward to the next episode.

Martin

P.S.
We are waiting for you on Discord.
