It's small.
It's fast.
What is it?
Rewritten Zoon!
You can try it by yourself: Live demo
Welcome to the MoonZoon Dev News!
MoonZoon is a Rust full-stack framework. If you want to read about new MZ features, architecture and interesting problems & solutions - Dev News is the right place.
Chapters
- News
- Old Zoon architecture - How React and Rust Hooks work
- New Zoon architecture - Signals: You can do it without a Virtual DOM
- Builder pattern with rules - Yes, builder pattern can support required parameters
- Optimizations - Need for speed. The size matters.
News
- Zoon API almost doesn't use macros; it's safer, more expressive and compiler-friendly.
- A new article "Rust on the Frontend and Backend" on the blog Always Bet On Rust - "An interview with Martin Kavík, creator of the MoonZoon full-stack framework".
- The demo project and heroku-buildpack updated. You can use them as a starting point for your experimental MoonZoon apps.
- The MoonZoon benchmark is ready to be merged (at the time of writing) to krausest/js-framework-benchmark, however I want to wait until MoonZoon is more mature.
- You don't have to be afraid to look at zoon and static_ref_macro crates code. They are clean enough thanks to awesome libraries once_cell, futures-signals and dominator.
- Some new APIs, configs, features and Brotli / Gzip compression integrated.
- I would like to thank:
  - Pauan for lightning fast resolving of my problems with his libs futures-signals and dominator.
  - flosse for fighting with Warp in Moon and for MZoon and Moon improvements.
  - Alexhuszagh for working on lexical and answering my questions.
This blog post is a bit longer but I hope you'll enjoy it!
Old Zoon architecture
How React and Rust Hooks work
First, I would like to write this sentence to sound clever: "The old architecture was based on topologically-aware functions with stable call graph identifiers and local states stored in a heterogenous vector."
Unfortunately, it's not my original idea. It powers React Hooks. Or moxie. Or Crochet.
And now the explanation of what it is and why it didn't work well enough to stay in Zoon (just like all over-engineered stuff).
--
So let's say we have main and 3 simple functions:
fn main() {
    loop {
        amber()
    }
}

fn amber() {
    mike()
}

fn mike() {
    layla_rose();
    layla_rose()
}

fn layla_rose() { }
Let's add a counter with some helpers and printlns:
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn call_id() -> usize { COUNTER.load(Ordering::SeqCst) }
fn increment_call_id() { COUNTER.fetch_add(1, Ordering::SeqCst); }
fn reset_call_id() { COUNTER.store(0, Ordering::SeqCst) }

fn main() {
    for _ in 0..3 {
        amber();
        reset_call_id()
    }
}

fn amber() {
    increment_call_id();
    println!("amber id: {}", call_id());
    mike()
}

fn mike() {
    increment_call_id();
    println!("mike id: {}", call_id());
    layla_rose();
    layla_rose()
}

fn layla_rose() {
    increment_call_id();
    println!("layla_rose id: {}", call_id());
}
When you run the code (Rust Playground), you should see a loop of the sequence:
amber id: 1
mike id: 2
layla_rose id: 3
layla_rose id: 4
Now you can apply some "magic" to our functions with proc macros like this one to hide unnecessary counter helpers. The result will look like:
fn main() {
    for _ in 0..3 { run(amber) }
}

#[i_am_special]
fn amber() {
    println!("amber id: {}", call_id());
    mike()
}

#[i_am_special]
fn mike() {
    println!("mike id: {}", call_id());
    layla_rose();
    layla_rose()
}

#[i_am_special]
fn layla_rose() {
    println!("layla_rose id: {}", call_id());
}
But let's get back to our non-macro example and improve it by adding STATES and the hook use_age. (Rust Playground)
- Note: The code below may look a bit scary but you don't have to understand all implementation details.
use std::sync::atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

fn call_id() -> usize { COUNTER.load(Ordering::SeqCst) }
fn increment_call_id() { COUNTER.fetch_add(1, Ordering::SeqCst); }
fn reset_call_id() { COUNTER.store(0, Ordering::SeqCst) }

use std::{sync::Mutex, collections::HashMap};
use once_cell::sync::Lazy;

static STATES: Lazy<Mutex<HashMap<usize, u8>>> = Lazy::new(Mutex::default);

fn use_age(default_value: impl FnOnce() -> u8 + Copy) -> u8 {
    *STATES.lock().unwrap().entry(call_id()).or_insert_with(default_value)
}

fn main() {
    for _ in 0..3 {
        amber(32);
        println!("{:-<28}", "-");
        reset_call_id()
    }
}

fn amber(age: u8) {
    increment_call_id();
    let age = use_age(|| { println!("Saving amber's state!"); age });
    println!("amber id: {}, age: {}", call_id(), age);
    mike(15)
}

fn mike(age: u8) {
    increment_call_id();
    let age = use_age(|| { println!("Saving mike's state!"); age });
    println!("mike id: {}, age: {}", call_id(), age);
    layla_rose(26);
    layla_rose(22)
}

fn layla_rose(age: u8) {
    increment_call_id();
    let age = use_age(|| { println!("Saving layla_rose's state!"); age });
    println!("layla_rose id: {}, age: {}", call_id(), age);
}
The output:
Saving amber's state!
amber id: 1, age: 32
Saving mike's state!
mike id: 2, age: 15
Saving layla_rose's state!
layla_rose id: 3, age: 26
Saving layla_rose's state!
layla_rose id: 4, age: 22
----------------------------
amber id: 1, age: 32
mike id: 2, age: 15
layla_rose id: 3, age: 26
layla_rose id: 4, age: 22
----------------------------
amber id: 1, age: 32
mike id: 2, age: 15
layla_rose id: 3, age: 26
layla_rose id: 4, age: 22
----------------------------
The main fact: Closures passed to the use_age hook are invoked only once. use_age invokes them only if it doesn't find the age from the previous iteration in STATES.
Another important fact: STATES is a key-value storage, where the key is call_id and the value is u8 (aka age).
So.. do we have nice React Hooks and the world is smiling?
Yeah, until a wild condition appears...
mike(30);
if day == "good_day" {
    layla_rose(26)
} else {
    amber(60)
}
The first iteration with a good_day:
mike(30); // call id == 1 ; age 30 saved
if day == "good_day" {
    layla_rose(26) // call id == 2 ; age 26 saved
} else {
    amber(60) // not called
}
The next iteration with a bad_day:
mike(30); // call id == 1 ; age 30 loaded
if day == "good_day" {
    layla_rose(26) // not called
} else {
    amber(60) // call id == 2 ; age 26 loaded
}
Output (Rust Playground):
Saving mike's state!
mike id: 1, age: 30
Saving layla_rose's state!
layla_rose id: 2, age: 26
----------------------------
mike id: 1, age: 30
amber id: 2, age: 26
----------------------------
mike id: 1, age: 30
layla_rose id: 2, age: 26
----------------------------
You would be really surprised if a Tinder developer accidentally wrapped a React component in a condition and you plan a date with Amber...
That's why there are official React Rules of Hooks:
- Don’t call Hooks inside loops, conditions, or nested functions.
- Don’t call Hooks from regular JavaScript functions.
Our call ids based on a counter / indices are just not stable enough.
Fortunately, Rust offers more tools to fight Hooks' limitations.
We can get the Location of the caller. It means we know exactly where in the source code a function has been called. So we can distinguish different calls by their caller, even if their index is equal.
We can leverage the newer Rust built-in attribute #[track_caller] in combination with Location::caller. The code is starting to get pretty complex (Rust Playground).
use std::sync::atomic::{AtomicUsize, Ordering};
use std::panic::Location;

static COUNTER: AtomicUsize = AtomicUsize::new(0);

#[track_caller]
fn call_id() -> (usize, &'static Location<'static>) {
    (COUNTER.load(Ordering::SeqCst), Location::caller())
}
fn increment_call_id() { COUNTER.fetch_add(1, Ordering::SeqCst); }
fn reset_call_id() { COUNTER.store(0, Ordering::SeqCst) }

use std::{sync::Mutex, collections::HashMap};
use once_cell::sync::Lazy;

static STATES: Lazy<Mutex<HashMap<(usize, &'static Location), u8>>> = Lazy::new(Mutex::default);

#[track_caller]
fn use_age(default_value: impl FnOnce() -> u8 + Copy) -> u8 {
    *STATES.lock().unwrap().entry(call_id()).or_insert_with(default_value)
}

fn main() {
    for i in 0..3 {
        root(if i % 2 == 0 { "good_day" } else { "bad_day" });
        println!("{:-<28}", "-");
        reset_call_id()
    }
}

fn root(day: &str) {
    mike(30);
    if day == "good_day" {
        layla_rose(26)
    } else {
        amber(60)
    }
}

#[track_caller]
fn mike(age: u8) {
    increment_call_id();
    let age = use_age(|| { println!("Saving mike's state!"); age });
    println!("mike id: {:?}, age: {}", call_id(), age);
}

#[track_caller]
fn amber(age: u8) {
    increment_call_id();
    let age = use_age(|| { println!("Saving amber's state!"); age });
    println!("amber id: {:?}, age: {}", call_id(), age);
}

#[track_caller]
fn layla_rose(age: u8) {
    increment_call_id();
    let age = use_age(|| { println!("Saving layla_rose's state!"); age });
    println!("layla_rose id: {:?}, age: {}", call_id(), age);
}
Updated output (notice Amber's age and Saving amber's state!):
Saving mike's state!
mike id: (1, Location { file: "src/main.rs", line: 29, col: 5 }), age: 30
Saving layla_rose's state!
layla_rose id: (2, Location { file: "src/main.rs", line: 31, col: 9 }), age: 26
----------------------------
mike id: (1, Location { file: "src/main.rs", line: 29, col: 5 }), age: 30
Saving amber's state!
amber id: (2, Location { file: "src/main.rs", line: 33, col: 9 }), age: 60
----------------------------
mike id: (1, Location { file: "src/main.rs", line: 29, col: 5 }), age: 30
layla_rose id: (2, Location { file: "src/main.rs", line: 31, col: 9 }), age: 26
----------------------------
To make the code more robust, we'll also need to track ancestors. Otherwise we may have calls with equal indices and direct callers but different callers of callers... So we need to create a simple blockchain where each call has a hash of the previous call (yeah, another buzzword for SEO..)
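For illustration, one way to "chain" the identifiers could look like this (a minimal sketch, not the actual old Zoon implementation):

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::panic::Location;

// Every call id mixes the caller's Location with the hash of its parent call,
// so two calls with the same index and direct caller but different ancestors
// still get different identifiers.
fn chained_call_id(parent_call_id: u64, location: &'static Location<'static>) -> u64 {
    let mut hasher = DefaultHasher::new();
    parent_call_id.hash(&mut hasher);
    location.file().hash(&mut hasher);
    location.line().hash(&mut hasher);
    location.column().hash(&mut hasher);
    hasher.finish()
}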
However, the Nemesis of all Javascript and Rust Hooks is loops. Different calls in loops may have both an equal index and an equal location. It means we need another factor to correctly distinguish calls - keys. Unfortunately, they need to be provided by the developer because they depend on application data.
Many frameworks (with or without Hooks) support keys.
When the developer forgets to define keys, the app may be slower or may not work as expected (look at the Svelte demonstration above).
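For illustration, a keyed variant of our earlier use_age hook might look like this (a hypothetical sketch - the key comes from application data, e.g. a row id, instead of the call order):

use std::{collections::HashMap, sync::Mutex};
use once_cell::sync::Lazy;

static KEYED_STATES: Lazy<Mutex<HashMap<u64, u8>>> = Lazy::new(Mutex::default);

// The caller provides a stable, data-derived `key`, so calls inside loops
// and conditions can always be told apart.
fn use_age_keyed(key: u64, default_value: impl FnOnce() -> u8) -> u8 {
    *KEYED_STATES.lock().unwrap().entry(key).or_insert_with(default_value)
}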
So... from the frontend app developer point of view, Hooks (especially Rust ones) may be a useful tool to reduce boilerplate and introduce local state for function-based components, but the developer has to follow some artificial rules.
--
Let's move to the technical challenges of Hooks.
- Complexity. It's pretty hard to implement Hooks correctly, especially due to many edge-cases and macros. It also means a lot of code bloat if you are not careful enough.
- Good luck with Hooks integration into a framework with asynchronous rendering - you may get lost in the Dark caller forest (just a note from the MoonZoon trenches).
- We were working only with the age in our examples. Hooks have to support as many types as possible (not only u8). It means we need a heterogeneous storage for user data, probably based on Any (if you know Typescript's any, Rust's Any may open similar gates of hell). A minimal sketch of such a storage follows this list.
- It's pretty hard to start a new iteration in a non-root "node" (when you want to invoke only the function in the call graph representing a component with changed data).
- However, the Hooks' Achilles' heel is the storage performance. When you decide to use the simplest solution - a HashMap with Box<dyn Any> as values:
  - Box and Any mean a lot of type gymnastics, checks and allocations.
  - HashMap's default hash function isn't the fastest one, but the replacement with a faster non-secure one didn't help to increase speed in practice.
  - HashMap resizing is pretty slow - it has to move all its items to the new location after the reallocation. griddle helps to eliminate resizing spikes, but it doesn't help too much with the overall speed.
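A minimal sketch of such an Any-based storage (just an illustration for this article, not the old Zoon code):

use std::any::Any;
use std::collections::HashMap;

// Values of arbitrary types are boxed behind `dyn Any` and downcast on access -
// every insert pays for an allocation and every read pays for a type check.
struct States {
    states: HashMap<u64, Box<dyn Any>>,
}

impl States {
    fn insert<T: 'static>(&mut self, id: u64, value: T) {
        self.states.insert(id, Box::new(value));
    }

    fn get<T: 'static>(&self, id: u64) -> Option<&T> {
        self.states.get(&id)?.downcast_ref::<T>()
    }
}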
As a result, Zoon's code was a slow spaghetti monster. It was working well enough for circa 2,000 elements, but with more complex business logic and more elements the app became too slow for comfortable usage.
I've also tried more mature libraries instead of my code but the performance didn't change too much.
Then I remembered the term Sunk cost fallacy from the awesome book Thinking, Fast and Slow and with the words "Don't love your code, no code no bugs" I selected most Zoon files and hit my favorite key: Delete.
New Zoon architecture
Signals: You can do it without a Virtual DOM
So Hooks was a dead end. The Elm architecture has its own problems (explained in the previous post). I don't want to invent another complex component system with templates. What now?
Let's learn from the past and see what works and what doesn't.
- Hooks - Simple creation of local states helps to write element/component libraries and not pollute our business data with GUI-specific variables.
- TEA - Single source of truth (aka Model) eliminates bugs related to state synchronization.
- TEA - Asynchronous "pipelines" may be hard to follow in the source code without an await/async mechanism. Imagine a chain of HTTP requests with error handling and some business logic.
- Many frameworks / GUI libraries often try to store and manage all objects representing elements/components by themselves and use the target platform only as a "canvas" where they render elements.
  - Why write a custom DOM when we still need to use the browser DOM? The custom DOM then basically becomes a cache. And what are the most difficult things in computer science?
  - Why store and manage objects when we only want to render an HTML string for a Google bot?
- Passing properties down to child elements/components may lead to boilerplate (TEA) and then to cumbersome abstractions (many frameworks). TEA-like frameworks try to mitigate it with Pub/Sub mechanisms.
- There are often problems with keys for element/component lists (explained in the previous chapter).
- Virtual DOM + Asynchronous rendering (the render waits for the next animation frame)
  - Adds a lot of complexity and causes bugs.
  - The typical bug in most frameworks is a "jumping cursor" in text inputs (Elm issue with demonstration, React explanation).
  - Text selection is pretty hard to manage in the browser, especially with async rendering.
- Many native browser elements behave quite unpredictably and it's very hard to set them correctly. There has to be a layer above them to protect the app developer.
  - "Did you know #456: Setting element attributes is order-sensitive?"
Now I'll show you 4 examples with a new Zoon API and explain how they work. Then we'll discuss how the API corresponds with the notes above.
Example 1
use zoon::*;

#[static_ref]
fn counter() -> &'static Mutable<i32> {
    Mutable::new(0)
}

fn increment() {
    counter().update(|counter| counter + 1)
}

fn decrement() {
    counter().update(|counter| counter - 1)
}

fn root() -> impl Element {
    Column::new()
        .item(Button::new().label("-").on_press(decrement))
        .item(Text::with_signal(counter().signal()))
        .item(Button::new().label("+").on_press(increment))
}

#[wasm_bindgen(start)]
pub fn start() {
    // We want to attach our app to the browser element with id "app".
    // Note: `start_app(None, root);` would attach to `body` but it isn't recommended.
    start_app("app", root);
}
The function counter() is marked by the attribute #[static_ref]. It means the function is transformed by a procedural macro into this:
fn counter() -> &'static Mutable<i32> {
    use once_cell::race::OnceBox;
    static INSTANCE: OnceBox<Mutable<i32>> = OnceBox::new();
    INSTANCE.get_or_init(move || Box::new(Mutable::new(0)))
}
- The macro is defined in the crate static_ref_macro in the MoonZoon repo.
- The macro currently uses OnceBox. It may use OnceCell or probably lazy_static!.
- You can deactivate the macro by a Zoon feature flag static_ref.
Mutable is very similar to RwLock. However it has one unique feature - it sends a signal on change. Let's explain it on the Text element.
There are multiple ways to create a new Text element:
.item(counter().get())
.item(Text::new(counter().get()))
.item(Text::with_signal(counter().signal()))
- Note: The method .item expects the impl IntoElement parameter. Many Rust basic types (&str, Cow<str>, i32, ..) implement IntoElement by creating a new Text.
The first two lines are practically the same. They just create a Text element with a static value. It means the text doesn't change at all once set. We can only replace the Text element with a new one if we want to change it.
The third line is more interesting. Text created with the method with_signal rerenders its text when it receives a new value from the chosen signal. Mutable transmits its value to all associated signals when the value has been changed. We can say that Text created by with_signal has a dynamic value.
--
Example 2
use zoon::*;
use std::rc::Rc;

fn root() -> impl Element {
    let counter = Rc::new(Mutable::new(0));
    let on_press = clone!((counter) move |step: i32| *counter.lock_mut() += step);
    Column::new()
        .item(Button::new().label("-").on_press(clone!((on_press) move || on_press(-1))))
        .item_signal(counter.signal())
        .item(Button::new().label("+").on_press(move || on_press(1)))
}
This example works exactly like the previous one but there are some differences in the code.
- counter isn't stored in a static reference / global variable, but created as a local variable.
  - Soo... where is it stored?? In the browser DOM! Button::new immediately creates a new DOM node and our counter is passed into its on_press handler. It's possible because the root function is invoked only once to build the app / create the DOM.
- counter's Mutable is wrapped in Rc.
  - We need to pass the same counter into two on_press handlers. Otherwise Rc wouldn't be necessary.
- There is a clone! macro.
  - It's just an alias for the enc! macro in the enclose crate. I hope Rust will support cloning into closures natively.
  - The clone! macro is active when the Zoon feature flag clone is enabled.
- counter().update(|counter| counter - 1) has been replaced with *counter.lock_mut() += step.
  - You probably won't find the method update in the futures-signals docs - there are traits like MutableExt in Zoon with such helpers (a hypothetical sketch of such a helper follows this list).
  - Be careful with lock_* methods. There are cases where it's a bit hard to predict in Rust when the lock is unlocked / dropped (you'll find an example in the next chapter). Also the futures-signals crate currently uses std::sync::RwLock under the hood, which doesn't output a nice error message to the console (especially in Firefox), so it may be hard to track down the problem of trying to lock an already locked Mutable. (I was talking about it with the futures-signals author; it should be less confusing in the future.)
--
Example 3
...

type ID = usize;

struct Row {
    id: ID,
    label: Mutable<String>,
}

#[static_ref]
fn rows() -> &'static MutableVec<Arc<Row>> {
    MutableVec::new()
}

fn remove_row(id: ID) {
    rows().lock_mut().retain(|row| row.id != id);
}

...

fn table() -> RawEl {
    ...
    RawEl::new("tbody")
        .attr("id", "tbody")
        .children_signal_vec(
            rows().signal_vec_cloned().map(row)
        )
    ...
}

fn row(row: Arc<Row>) -> RawEl {
    let id = row.id;
    ...
    row_remove_button(id),
    ...
}

fn row_remove_button(id: ID) -> RawEl {
    ...
    RawEl::new("a")
        .event_handler(move |_: events::Click| remove_row(id))
    ...
}
The most interesting are these two parts:
// from `table()`
.children_signal_vec(
    rows().signal_vec_cloned().map(row)
)

// from `remove_row(id: ID)`
rows().lock_mut().retain(|row| row.id != id)
RawEl::children_signal_vec updates its child elements according to the input signal. The signal comes from a MutableVec returned from rows(). The most important fact is that this signal transmits only differences between the old and the updated vector. It means it's fast because it doesn't have to clone the entire vector on every change and it can transmit only the child index in the case of removing.
Note: RawEl is a "low-level element". It means RawEl is used as a foundation for other Zoon elements like Row and Button. Only the element Text is based on RawText. Both RawEl and RawText implement Element and From for RawElement. There will probably also be a RawSvgEl in the future. The idea is that all raw elements can write directly to the browser DOM or to String as needed.
--
Example 4
// ----- app.rs -----

// ------ ------
//   Statics
// ------ ------

#[static_ref]
fn columns() -> &'static MutableVec<()> {
    MutableVec::new_with_values(vec![(); 5])
}

#[static_ref]
fn rows() -> &'static MutableVec<()> {
    MutableVec::new_with_values(vec![(); 5])
}

// ------ ------
//   Signals
// ------ ------

fn column_count() -> impl Signal<Item = usize> {
    columns().signal_vec().len()
}

fn row_count() -> impl Signal<Item = usize> {
    rows().signal_vec().len()
}

pub fn counter_count() -> impl Signal<Item = usize> {
    map_ref!{
        let column_count = column_count(),
        let row_count = row_count() =>
        column_count * row_count
    }
}

// ----- app/view.rs -----

fn counter_count() -> impl Element {
    El::new()
        .child_signal(super::counter_count().map(|count| format!("Counters: {}", count)))
}
This example demonstrates how to combine multiple signals into one.
For more info about signals, mutables, map_ref and other entities, I recommend reading the excellent tutorial in the futures-signals crate.
Note: If you remember the old Zoon API: Statics replace SVars; Signals replace Caches.
--
You've seen all examples, let's revisit our notes:
- Hooks - Simple creation of local states helps to write element/component libraries and not pollute our business data with GUI-specific variables.
  - let counter = Rc::new(Mutable::new(0)); or an equivalent without Rc seems to be a good way to create a local state.
- TEA - Single source of truth (aka Model) eliminates bugs related to state synchronization.
  - Static refs or Rust atomics and "update functions" (like increment in our counter example) should be a good alternative to Model + update.
- TEA - Asynchronous "pipelines" may be hard to follow in the source code without an await/async mechanism. Imagine a chain of HTTP requests with error handling and some business logic.
  - futures-signals is based, well, on futures. You can write (according to the official docs) my_state.map_future(|value| do_some_async_calculation(value));. You can also create Streams and much more.
- Many frameworks / GUI libraries often try to store and manage all objects representing elements/components by themselves and use the target platform only as a "canvas" where they render elements.
  - Raw elements write directly to the browser DOM and store the state inside it. They'll also be able to write to String in the future.
- Passing properties down to child elements/components may lead to boilerplate (TEA) and then to cumbersome abstractions (many frameworks). TEA-like frameworks try to mitigate it with Pub/Sub mechanisms.
  - You'll be able to combine static refs, signals, standard Rust constructs and maybe Zoon's channels to eliminate the boilerplate.
- There are often problems with keys for element/component lists (explained in the previous chapter).
  - Do you remember RawEl::children_signal_vec from Example 3? No keys - no problems.
- Virtual DOM + Asynchronous rendering (the render waits for the next animation frame)
  - No VDOM, no async rendering - no problems. However, I can imagine Zoon will need to support async rendering, but ideally it would be used only when the app developer creates animations.
- Many native browser elements behave quite unpredictably and it's very hard to set them correctly. There has to be a layer above them to protect the app developer.
  - Two layers should shield the app developer. Standard Zoon elements (Button, Row, ..) are the first layer and raw elements (RawEl, RawText) are the second one.
Builder pattern with rules
Yes, builder pattern can support required parameters
Button::new().label("X").label("Y")
El::new().child("X").child("Y")
- Will the button be labeled "Y" or "XY"?
- Will the el's children be rendered in a row or in a column?
error[E0277]: the trait bound `LabelFlagSet: FlagNotSet` is not satisfied
--> frontend\src\lib.rs:17:30
|
17 | Button::new().label("X").label("Y");
| ^^^^^ the trait `FlagNotSet` is not implemented for `LabelFlagSet`
error[E0277]: the trait bound `ChildFlagSet: FlagNotSet` is not satisfied
--> frontend\src\lib.rs:18:26
|
18 | El::new().child("X").child("Y");
| ^^^^^ the trait `FlagNotSet` is not implemented for `ChildFlagSet`
The Rust compiler doesn't allow us to write code that would break Button or El rules. Only one label and one child make sense for Button and El.
The compilation also fails when you don't set the label or child at all:
fn root() -> impl Element {
    El::new()
}
error[E0277]: the trait bound `zoon::El<ChildFlagNotSet>: zoon::Element` is not satisfied
--> frontend\src\lib.rs:16:14
|
16 | fn root() -> impl Element {
| ^^^^^^^^^^^^ the trait `zoon::Element` is not implemented for `zoon::El<ChildFlagNotSet>`
|
= help: the following implementations were found:
<zoon::El<ChildFlagSet> as zoon::Element>
Yeah, we may have constructors like El::new(..) with many parameters instead. But then we would also need at least El::with_child_signal(..). And other constructors for more complex elements with more required parameters and their combinations. It becomes cumbersome very quickly.
Note: There are exceptions in the Zoon API like RawEl::new("div") and Text::new("text") because it's not possible to even create a builder for these types without the most important input data.
Why can't we just take the last value as the valid one? E.g. Button::new().label("X").label("Y"); would be a button labeled "Y".
- All methods (.label(..), .child(..)) modify the DOM immediately. It means we would need to delete the previous label and it would be pretty inefficient.
- It would be confusing - El can have only one child, but Row accepts multiple children.
Why do all methods modify the DOM immediately?
- I've tried to store element builder arguments in the builder and render it at once later. However this approach leads to slow and cumbersome elements and it's almost impossible in some cases.
--
How do those rules work?
Let's look at the current Button implementation.
use zoon::*;
use std::marker::PhantomData;

// ------ ------
//    Element
// ------ ------

make_flags!(Label, OnPress);

pub struct Button<LabelFlag, OnPressFlag> {
    raw_el: RawEl,
    flags: PhantomData<(LabelFlag, OnPressFlag)>
}

impl Button<LabelFlagNotSet, OnPressFlagNotSet> {
    pub fn new() -> Self {
        Self {
            raw_el: RawEl::new("div")
                .attr("class", "button")
                .attr("role", "button")
                .attr("tabindex", "0"),
            flags: PhantomData,
        }
    }
}

impl<OnPressFlag> Element for Button<LabelFlagSet, OnPressFlag> {
    fn into_raw_element(self) -> RawElement {
        self.raw_el.into()
    }
}

// ------ ------
//   Attributes
// ------ ------

impl<'a, LabelFlag, OnPressFlag> Button<LabelFlag, OnPressFlag> {
    pub fn label(
        self,
        label: impl IntoElement<'a> + 'a
    ) -> Button<LabelFlagSet, OnPressFlag>
        where LabelFlag: FlagNotSet
    {
        Button {
            raw_el: self.raw_el.child(label),
            flags: PhantomData
        }
    }

    pub fn label_signal(
        self,
        label: impl Signal<Item = impl IntoElement<'a>> + Unpin + 'static
    ) -> Button<LabelFlagSet, OnPressFlag>
        where LabelFlag: FlagNotSet
    {
        Button {
            raw_el: self.raw_el.child_signal(label),
            flags: PhantomData
        }
    }

    pub fn on_press(
        self,
        on_press: impl FnOnce() + Clone + 'static
    ) -> Button<LabelFlag, OnPressFlagSet>
        where OnPressFlag: FlagNotSet
    {
        Button {
            raw_el: self.raw_el.event_handler(move |_: events::Click| (on_press.clone())()),
            flags: PhantomData
        }
    }
}
make_flags!(Label, OnPress); generates code like:
struct LabelFlagSet;
struct LabelFlagNotSet;
impl zoon::FlagSet for LabelFlagSet {}
impl zoon::FlagNotSet for LabelFlagNotSet {}
struct OnPressFlagSet;
struct OnPressFlagNotSet;
impl zoon::FlagSet for OnPressFlagSet {}
impl zoon::FlagNotSet for OnPressFlagNotSet {}
The only purpose of flags is to enforce rules by the Rust compiler.
The compiler doesn't allow you to call label or label_signal if the label is already set. The same rule applies to the on_press handler.
The trade-off for compile-time checked rules is generics. However, it isn't a big problem in practice because in views you often return elements from a function as impl Element. And when you really need to box them because you want to use them in a collection or in match / if arms, you can, because the Element trait is object safe for these purposes.
Another trade-off could be slightly cryptic error messages, but I think they aren't too bad and maybe we'll be able to improve them.
--
What about API with macros?
Column::new()
    .item(Button::new().label("-").on_press(decrement))
    .item(Text::with_signal(counter().signal()))
    .item(Button::new().label("+").on_press(increment))

vs

col![
    button![button::on_press(decrement), "-"],
    text![counter().signal()],
    button![button::on_press(increment), "+"],
]
Macro API advantages:
- Less verbosity / boilerplate in most cases.
- Can accept more types than standard functions thanks to "tricks" (e.g. implementing different traits with the same methods for different types) to resolve conflicting impls to achieve a simpler specialization. Note: You can see this trick in action in Seed's UpdateEl* traits that power its element macros.
- It can protect from locks-related problems. An example:
fn root() -> impl Element {
    Column::new()
        .item(*counter().lock_ref())
        .item(Text::with_signal(counter().signal()))
}
The lock from lock_ref isn't dropped soon enough, so the hidden locking in .signal() fails at runtime.
We can resolve it manually:
// By a closure
.item((|| *counter().lock_ref())())

// By an extra `let` binding
.item({ let lock = counter().lock_ref(); *lock })
Notes:
- I hope the Rust compiler will be clever enough to resolve it by itself in the future and also provide a more descriptive error.
- Javascript guys come again with many weird names! This time for the self-invoking closure you've seen in the example above.
Macro API disadvantages:
- Less compiler friendly (cryptic errors) and less auto-complete / IDE friendly.
- May cause code bloat.
- Complicates element implementations.
- Didn't pass the "girlfriend test" (A non-developer person with a good graphic taste points the finger to the nicer code when two same examples with different APIs are presented for examination.)
- Hard to learn for beginners.
- Hard to maintain.
Optimizations
Need for speed. The size matters.
Speed
Most Rust tutorials and best practices are focused on speed. It means you can just follow general recommendations and pick the most used libraries, and there is a chance everything will be fast.
The simplest way to increase speed is to set your Cargo.toml correctly. Example:
[profile.release]
# Enable link time optimizations - slow compilation, faster & smaller app
lto = true
# Disable parallel compilation (set to 1 thread) - slow compilation, faster & smaller app
codegen-units = 1
# Set optimization level - 3 => fast app ; s/z => small app
opt-level = 3
# O4 => fast app ; Oz/Os => small app
# [See the explanation below]
[package.metadata.wasm-pack.profile.release]
wasm-opt = ['-O4']
MoonZoon CLI (mzoon) uses wasm-pack to build your frontend app with Zoon. wasm-pack downloads and manages tools like wasm-bindgen CLI, wasm-opt and browser drivers for testing.
- wasm-bindgen CLI and the library do the hard work to connect Javascript with Rust / Wasm.
- wasm-pack / wasm-bindgen CLI generates a Javascript file to "boot" your app stored in the Wasm file.
- wasm-bindgen can also generate Typescript types or JS files from your JS snippets defined in the Rust code.
- wasm-opt is a tool for Wasm file optimizations. It can improve speed, but it's excellent at size reduction. Note: It's written in C++.
wasm-pack can be configured in Cargo.toml. And it also automatically installs the required compilation target wasm32-unknown-unknown.
When you run mzoon start or mzoon build, MZoon checks if wasm-pack is installed on your system and then runs
wasm-pack --log-level warn build frontend --target web --no-typescript --dev
to compile your app.
Note: mzoon will be able to install wasm-pack automatically in the future.
--
Ok, we have an idea of how the Wasm Rust app compilation works and we've set the most important options in Cargo.toml.
However, let's look again at our Cargo.toml. I recommend disabling default-features and searching for feature flags in the docs and source code of your dependencies.
Example from js-framework-benchmark:
[dependencies]
zoon = { path = "../../../../crates/zoon", features = ["static_ref", "fmt"], default-features = false }
rand = { version = "0.8.3", features = ["small_rng", "getrandom"], default-features = false }
getrandom = { version = "0.2", features = ["js"], default-features = false }
When you enable only the needed features, you can reduce the compilation time.
Many crates offer features that optimize the crate itself and its dependencies for a particular platform (embedded, Wasm) or attribute (speed / size).
- You often need to look into the source code because many feature flags and conditions aren't documented or visible on docs.rs. Examples:
  - rand needs the flag getrandom and getrandom needs the flag js to not fail at runtime in Wasm.
  - parking_lot shows a wasm-bindgen flag in its docs.rs docs but the README says: "The wasm32-unknown-unknown target is only supported on nightly and requires -C target-feature=+atomics in RUSTFLAGS".
  - wasm-bindgen shows a std flag in docs, but it doesn't work without it. On the other hand, you can enable the feature flag enable-interning (will be explained later).
--
Zoon's current features are:
[features]
default = ["static_ref", "panic_hook", "small_alloc", "clone"]
static_ref = ["static_ref_macro", "once_cell"]
panic_hook = ["console_error_panic_hook"]
small_alloc = ["wee_alloc"]
fast_alloc = []
# tracing_alloc = ["wasm-tracing-allocator"]
clone = ["enclose"]
fmt = ["ufmt", "lexical"]
The features related to performance:
small_alloc or fast_alloc or tracing_alloc
The default allocator for Rust Wasm is currently dlmalloc. You can choose a different allocator (see the list of allocators), but only a couple of them are compatible with Wasm.
I haven't found a Wasm-compatible allocator faster than the default dlmalloc. So when you enable the flag fast_alloc, compilation fails with the message "Do you know a fast allocator working in Wasm?".
The flag small_alloc is enabled by default. It means your app will use the a-bit-slower but smaller wee_alloc.
The flag tracing_alloc would switch to the wasm-tracing-allocator.
"wasm-tracing-allocator enables you to better debug and analyze memory leaks and invalid frees in an environment where we don't have access to the conventional tools like Valgrind."
I tried it, it works but I didn't find it very useful for me. I'll integrate it into Zoon if we find some reasons in the future.
fmt
This feature enables the dependencies ufmt and lexical. It could replace the std::fmt machinery (Debug, format!, println!), however I'll probably focus on it in another MoonZoon dev iteration. You'll see some other fmt-related notes later, in the Size section.
--
Now we can finally talk about your application code.
There are some recommendations for Wasm + JS:
- Wrap &str in intern where it makes sense. It caches strings in JS to mitigate slow string passing through the Rust-JS "bridge" created by wasm-bindgen. Zoon (more accurately dominator) automatically interns many element arguments, so you don't need to do it yourself in most cases (a small sketch follows this list).
- Be careful with sending strings and more complex items from and to the JS world. It may be slow because of encoding and serialization. And it may cause some boilerplate in the app because it's often needed to tell wasm-bindgen how your items should be serialized for export to JS. Note: This problem should be mitigated in the future by a richer Wasm API that allows faster Wasm-JS communication.
- Use unchecked_* alternatives where it makes sense - see, for instance, wasm_bindgen::JsCast.
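A small sketch of manual interning (an assumed plain wasm-bindgen example, not Zoon-specific code; it relies on wasm-bindgen's enable-interning feature):

use wasm_bindgen::intern;

// The string is cached on the JS side, so passing the same &str repeatedly
// across the Rust-JS boundary becomes cheaper.
fn button_class() -> &'static str {
    intern("button")
}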
And some general recommendations:
- Reduce memory allocations as much as possible. It means generics instead of Box, arrays instead of vectors and similar stuff.
- Reduce the number of expensive .clone, .to_owned, .to_string, .collect, .into, .. calls.
- Reduce reallocations. Try to call, for instance, Vec::with_capacity instead of Vec::new / vec![] where possible.
- Pick the most suitable algorithms / structures - e.g. HashMap vs IndexMap vs HashMap with a non-secure hash function vs BTreeMap vs SlotMap, etc. It makes sense only when you've prepared benchmarks - results could be quite surprising. Tip: Watch out for println! calls in your benchmarks, console operations could be pretty slow.
- There are libraries like smallvec or im or fst which help A LOT if you know how and where to use them.
- Don't use Rc, RefCell, Mutex and similar stuff if you don't have to.
- Create errors or default values lazily where it makes sense - e.g. call Result::unwrap_or_else instead of Result::unwrap_or or Option::map_or_else instead of Option::map_or (a small illustration follows this list).
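A small illustration of two of the tips above (hypothetical values, just to show the pattern):

// Pre-allocate to avoid repeated reallocations while pushing.
fn collect_squares(count: usize) -> Vec<usize> {
    let mut squares = Vec::with_capacity(count);
    for i in 0..count {
        squares.push(i * i);
    }
    squares
}

// `unwrap_or_else` builds the fallback lazily - the closure runs
// only when parsing actually fails.
fn parse_port(input: &str) -> u16 {
    input.parse().unwrap_or_else(|_| 8080)
}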
--
Recommendations for all web apps:
Preloading
Moon generates an index.html very similar to this one:
<head>
    ...
    <link rel="preload" href="/pkg/frontend_bg_{id}.wasm" as="fetch" type="application/wasm" crossorigin>
    <link rel="modulepreload" href="/pkg/frontend_{id}.js" crossorigin>
    {head_extra}
</head>
<body>
    ...
    <script type="module">
        import init from '/pkg/frontend_{id}.js';
        init('/pkg/frontend_bg_{id}.wasm');
    </script>
</body>
Notice preload and modulepreload (don't ask me why there are two distinct names; HTML and browser APIs are one big mystery to me).
The browser will be downloading files marked with preload / modulepreload even if it needs to resolve scripts and styles hidden under the placeholder {head_extra} before it can move on to process body.
When the browser starts to import frontend_{id}.js or init frontend_bg_{id}.wasm, it doesn't have to download these files because they have already been preloaded (or they are still loading).
Note: There shouldn't be a large time span between preloading and using the files (it may happen when there is a script in {head_extra} hosted on a slow server). Otherwise the browser may show a warning or may not match the files at all.
HTTP/2
There are many reasons why to use HTTP/2 instead of HTTP/1.1. See the basic list of improvements and benchmarks in the article HTTP/2 vs HTTP/1 - Performance Comparison. HTTP/2 is also important for MoonZoon because it allows more SSE connections than HTTP/1.
When you start your MoonZoon app or a MoonZoon example (see Development.md), it runs on HTTP/1.1 by default. You can check it in the browser developer tools, in the Network tab, when you add the Protocol column.
To enable HTTP/2 you have to enable HTTPS. Modify the file MoonZoon.toml in your project or in a MoonZoon example:
# port = 8080
port = 8443
https = true
# ...
then start the server (makers mzoon start for examples) and go to https://localhost:8443. Accept the potential security risk caused by a self-signed certificate and you should see HTTP/2 (Firefox) or h2 (Chrome) in the dev tools.
--
Aaaand how can we measure performance?
Learn to use the browser tools. Chrome dev tools are probably the best - tutorial: Analyze runtime performance.
You can use web_sys::Performance to measure individual functions and your business logic speed. See related MDN docs.
I'm sure you'll be able to find some Rust or Javascript benchmark libraries suitable for Wasm. (Don't hesitate to share your experience.)
Tips:
- Keep in mind that the debug build is often MUCH slower than the release and optimized one, but it contains debug info needed by profilers / benchmarks to show function names and other data.
- There are cases when optimization for size results in higher speed - always test and measure your changes and avoid premature optimization.
Size
There are 2 things that very likely increase the size A LOT - dependencies and macros.
Dependencies
Optimized counter example (makers mzoon build -r):
- Without additional deps
  - 33 KB (GZip: 16 KB, Brotli: 14 KB, debug: 445 KB)
- With one formatting call 1.2.to_string()
  - 52 KB (GZip: 24 KB, Brotli: 21 KB, debug: 468 KB)
- With the url crate
  - 338 KB (GZip: 144 KB, Brotli: 113 KB, debug: 1239 KB)
- With the url and regex crates
  - 928 KB (GZip: 326 KB, Brotli: 236 KB, debug: 3601 KB)
So if one of your dependencies calls format! or .to_string on a float number, expect a circa 20 KB larger Wasm file. If you want to use libraries like reqwest, expect more than 300 KB of extra binary size because it uses the url crate.
So I recommend first looking at your dependencies and trying to find popular but large libraries like url, regex and serde. Also some parts of std contribute to the code bloat, especially std::fmt.
Try to find alternatives - e.g. ufmt for std::fmt or serde-lite for serde.
- Zoon hides ufmt and lexical for float number formatting behind a non-default feature flag. I'll probably also add serde-lite and integrate them properly in the future.
Or you can try to use the browser API instead of Rust libs - e.g. js_sys::RegExp instead of regex or web_sys::Url instead of url.
Macros
Macros are basically code generators. Be prepared for larger binaries if you use them. Don't forget that attributes like #[derive(Debug)] could generate a lot of code.
If you need to write your custom macros, try to extract as much code as possible to new functions.
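A hypothetical illustration of that tip: the macro stays thin and delegates to a plain function, so most of the logic is compiled only once instead of being expanded at every call site.

macro_rules! greet {
    ($name:expr) => {
        greet_impl($name)
    };
}

// All the "heavy" logic lives here, in ordinary compiled code.
fn greet_impl(name: &str) {
    println!("Hello, {}!", name);
}

fn main() {
    greet!("Martin");
    greet!("Amber");
}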
Panics / errors
Most panic-related code is fortunately removed by wasm-opt, but not all of it. However, we can help it:
Call expect_throw and unwrap_throw instead of the standard expect and unwrap. See wasm_bindgen::UnwrapThrowExt.
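For illustration (a minimal sketch with an assumed "app" element id): the *_throw variants throw a plain JS error instead of pulling in Rust's full panic formatting machinery, which keeps the Wasm binary smaller.

use wasm_bindgen::UnwrapThrowExt;

fn app_root() -> web_sys::Element {
    // `unwrap_throw` / `expect_throw` come from wasm_bindgen::UnwrapThrowExt.
    let document = web_sys::window().unwrap_throw().document().unwrap_throw();
    document
        .get_element_by_id("app")
        .expect_throw("element with id `app` should exist")
}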
In the Zoon code, there is a panic hook registered:
pub fn start_app ... {
    #[cfg(feature = "panic_hook")]
    #[cfg(debug_assertions)]
    console_error_panic_hook::set_once();
This panic hook is useful for debugging because it shows panic errors in the console log. However, we don't need it in the release build. wasm-opt can't remove it by itself, so we should mark all debug helpers that should be omitted in the release build with #[cfg(debug_assertions)] or a similar compile-time condition.
Note: We need console_error_panic_hook because panics aren't automatically redirected to the console log. There are many std APIs that just do nothing in Wasm. That's why you need to, for instance, add use zoon::{*, println}; if you want to call println in your app.
Allocators
We were already talking about allocators. When you enable Zoon's feature small_alloc (it's enabled by default), the wee_alloc allocator is used.
Related Zoon code:
#[cfg(feature = "small_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
Cargo.toml config
It's very similar to optimization for speed:
[profile.release]
lto = true
codegen-units = 1
opt-level = 's'
[package.metadata.wasm-pack.profile.release]
wasm-opt = ['-Os']
You need to experiment with the values 's' / ['-Os'] and 'z' / ['-Oz']. Sometimes s makes the app smaller than z and even faster than 3. It depends on your app and maybe on the weather. Who knows.
Generics
There is a nice popular word in the Rust world - monomorphization.
An excerpt from the Rust book, section Performance of Code Using Generics:
- "You might be wondering whether there is a runtime cost when you’re using generic type parameters. The good news is that Rust implements generics in such a way that your code doesn’t run any slower using generic types than it would with concrete types."
Well, it's bad news for us. It's explained in the docs of Twiggy, a very useful code size profiler for Wasm:
- "Generic functions with type parameters in Rust and template functions in C++ can lead to code bloat if you aren't careful. Every time you instantiate these generic functions with a concrete set of types, the compiler will monomorphize the function, creating a copy of its body replacing its generic placeholders with the specific operations that apply to the concrete types."
So the excerpt basically says we shouldn't use generics if we want to optimize for size, because of monomorphization. However there are two problems:
- Generics are often used in your dependencies - out of your control. E.g. Twiggy says that most of the generics-related code bloat in Zoon's js-framework-benchmark example is caused by the crate futures-signals.
- When I was trying to optimize size by replacing generics with other constructs, the app was becoming slower and paradoxically also bigger. So I wouldn't recommend focusing too much on this optimization if you aren't sure it'll really reduce the Wasm file size.
Compression
Browsers support multiple kinds of compression, always at least Gzip or Brotli.
MoonZoon CLI (mzoon) automatically compresses wasm and other files with both algorithms during the release build. And then Moon serves them according to the header Accept-Encoding extracted from incoming requests.
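The selection logic can be imagined roughly like this (a hypothetical sketch, not Moon's actual Warp code):

// Pick the precompressed file variant according to the request's
// Accept-Encoding header; fall back to the uncompressed file.
fn choose_file_suffix(accept_encoding: &str) -> &'static str {
    if accept_encoding.contains("br") {
        ".br"   // e.g. frontend_bg.wasm.br
    } else if accept_encoding.contains("gzip") {
        ".gz"   // e.g. frontend_bg.wasm.gz
    } else {
        ""      // uncompressed fallback
    }
}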
You've already seen the examples above, but let's look again:
- A small optimized app: 33 KB - GZip: 16 KB - Brotli: 14 KB
- A large optimized app: 928 KB - GZip: 326 KB - Brotli: 236 KB
This way you can significantly reduce traffic between frontend and backend.
Note: Firefox and probably other browsers support Brotli only on HTTPS. Chrome supports both Gzip and Brotli also on HTTP. It means you can't use only Brotli for all cases.
Dev Note: It's difficult to serve files according to a header from Warp.
--
Why are MoonZoon apps optimized for size by default?
- Storing small files is cheaper. For you and for hostings / CDNs - it means there is also a higher probability the files will be cached longer on such services.
- Sending small files is cheaper. It means you pay less for bandwidth and there will be lower traffic so you'll save money on servers.
- Users using a pay-per-use internet connection are happier.
- Users with slow internet are happier.
- Better SEO thanks to faster page load. (Applies if the bot can run Wasm/JS and prerendering/SSR is disabled.)
- Rust / Wasm is fast enough for almost all cases even when optimized for size.
- My WiFi signal is weak in the kitchen.
And that's all for today!
Thank You for reading and I hope you are looking forward to the next episode.
Martin
P.S.
We are waiting for you on Discord.
Top comments (3)
Awesome!
I really enjoyed this blogpost, especially the detailed explanation of the architectural differences and all the advantages/disadvantages.
I'm looking forward to creating my next web app with MoonZoon! :)
A batch of important notes and ideas for the next dev iteration, extracted from our chat with Pauan:
- Mutable doesn't have to be wrapped in Rc since internally it's Arc. However MutableVec has to be wrapped. (And I agree with him it would be nice if Rust had two different Clone traits - one for reference cloning and one for deep value cloning.)
- let binding and self-invoking closures demonstrated in one of the examples shouldn't work for non-Copy types.
- A good note from Pauan:
--
btw, another issue you didn't mention in your article... events are often really problematic in VDOM
because in React, if you do something like this...
<Foo onclick={(e) => { ... }} />
every time render is called, it will create a fresh new closure
that's already not great, but it gets even worse
because now React has to unbind the old onclick handler and add the new onclick handler
and it has to do this on every single render
which is why it's common practice to do things like
this.onclick = this.onclick.bind(this);
so that way you can do
<Foo onclick={this.onclick} />
so that way the closure doesn't change, so React doesn't have to update it