TL;DR
- I’ve created async versions of JSON `stringify` and `parse`, plus a whole bunch of array functions, including `sort`, that don’t block the main thread
- I’ve recently added support for LZ compress and decompress
- I’ve turned these into a library you can use easily in your own code, and it will work with all frameworks that can use ES6 or transpile it
- Works on IE11
- You can also build your own coroutines that exhibit similar behaviours in your own complex logic or processing
- You can also use high-priority coroutines for animations that use imperatives like `for` and `while` loops
- Available under the MIT licence - see the home page
- I’ve written below about how this all works, and how I figured it out thanks to dev.to inspiration
Demo
This demo shows multiple parallel coroutines on the main thread.
Slow is smooth and smooth is fast
We all know that user-reported performance is massively impacted by our perception of speed. So if a smooth animation suddenly glitches, it matters. If users decide they clicked the wrong button, they’d probably like the opportunity to abort or change their choice without waiting seconds for results they no longer want. These are all about user experience and performance perception. In fact, the speed at which processing happens is less important than the perception of smoothness. We could blow another 500ms doing something, so long as the interface stays slick and responsive, and the user would think the app was faster than one that completed sooner but was as janky as an old jalopy.
We often write code that has to do some data processing on the front end, especially in SPA apps. If we find ourselves sorting or processing lots of data then it’s very easy to cause glitches and significant delays. They can be a nightmare to debug and happen differently depending on the hardware the user has.
Threading
With `Worker` threads we can offload a bunch of processing to another thread without impacting the performance of the main thread. Sounds perfect, except it isn’t. Because of the heavy sandboxing of the JavaScript environment, using another thread only really works well if we have small inputs (small data, URLs, other parameters) and reasonably small outputs, because all of the data going to and coming back from another thread is serialized - blocking the main thread while that happens (unless you are using binary arrays that can be transferred).
If threading works for your application, then this article isn’t going to be much use to you. It describes a technique that shares the main thread, so it isn’t driving home multi-CPU advantages, but it does provide a seamless user experience by utilising every ounce of the main thread without blocking high-priority updates.
How it works
Ok, so let’s dive into how you can process vast amounts of data that takes seconds to execute, without interrupting the main thread’s animations and so on.
It boils down to coroutines.
Coroutines
You are most likely already aware of coroutines in one form or another. A coroutine is basically a thread of programming logic that is working its way to completion at the same time as other things are doing the same.
A thread is a kind of coroutine, but normally we differentiate them. A coroutine is therefore another logical processing state machine in your main thread. We see them all the time as Promise execution chains and `async` functions with `await`.
Anyone familiar with the Unity game engine will know it uses Coroutines in many cases to create fire and forget logic that updates every game frame to control a bullet, an AI or whatever. We will see how we can use it here in a very similar way.
We can have multiple promises awaiting the next step of an operation at any time, and they will resume when their entrance criteria are met - executing in a blocking fashion until they either return or wait for the next step.
Typically those operations are awaiting the result of something on another process, server or thread. You may (like me) have had occasion to try to break up long running processes in an async function with:
await new Promise(resolve=>setTimeout(resolve))
The main loop has a list of things to do, the above line queues the resumption of this routine after the next time the loop executes.
Executing this code will result in your async function resuming the next time the main loop has finished its other available work. Giving the system time to do something else, like animate or resume another async function.
Smarter Coroutines
Ok, so the above is a rough way to allow other processes to run. It’s a blunt instrument: we are giving up any more time in this main loop and starting again next time. Do that in the middle of a tight for loop and your code will take forever to run.
for (let i = 0; i < 1000; i++) {
    await new Promise(resolve => setTimeout(resolve))
}
That takes 16 seconds to run to completion: 1,000 iterations at roughly one 16ms frame each. We can’t use this method easily, and it gets worse:
const myRecords = JSON.parse(someMassiveString)
That might take 1 second to run, and so you’d get a glitch.
If we want this to work we need another way of writing coroutines that:
- Runs processes for a reasonable amount of time and then yields control to other things that might be waiting
- Composes well so we can write it without getting into intricate designs and hard to find bugs
- Can be used to construct simple versions of the common things that we “call” such as JSON functions, sorts etc
Using generators
So we want to do something like an `await`, but we want to carry on right now if we still have enough time before we’d glitch an animation.
There is something like `await` that we can use for this; in fact, before `await`, many of us used it to make Promise-based code easier to read. That’s generator functions.
Most demos of generator functions show you a for-next loop over Fibonacci numbers or something equally useful, but they are very powerful constructs. A generator function is syntactic sugar over the ability to make an iterator. An iterator is an object that has a `next()` function which will run some code and return the next available value. Hence the Fibonacci examples.
So if we write a generator function and call it, it gives us something that we can get the next value from any time we like.
function* myGenerator() {
    for (let i = 1; i <= 1000; i++) {
        yield i;
    }
}

const iterator = myGenerator();
iterator.next(); // -> {value: 1, done: false}
iterator.next(); // -> {value: 2, done: false}
...
iterator.next(); // -> {value: 1000, done: false}
iterator.next(); // -> {value: undefined, done: true}
So now we need to stop worrying about the value being returned, and just use the side effect that the code is run whenever we like. (Although in my implementation, yielding `true` will abandon any more work in the current frame, to give up control when garbage collection might happen.)
We can run the next step of the code, see how much time we’ve used, if not too much then we can run another step. If we’ve used enough, we can defer to the next loop of the main thread.
How much time is left?
Browsers have a call, `requestIdleCallback()`, that will invoke our code when the main thread is idle, passing a parameter that can be used to enquire how much time is left before the next frame. Nice.
We can build a generator, call it repeatedly until there isn’t enough time left for any more, then just request another run next time the main thread is free.
This is polyfilled for unsupported browsers - so it will work all the way back down the chain.
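A minimal fallback along the same lines can be sketched like this (an illustrative shim, not the library’s actual polyfill - it assumes a nominal 16ms frame and approximates the idle deadline with `setTimeout`):

```javascript
// Illustrative requestIdleCallback fallback: if the environment
// doesn't provide one, schedule the callback with setTimeout and
// report how much of a nominal 16ms frame budget remains.
const requestIdleCallbackShim =
    globalThis.requestIdleCallback ||
    function (callback) {
        const start = Date.now()
        return setTimeout(function () {
            callback({
                didTimeout: false,
                timeRemaining: () => Math.max(0, 16 - (Date.now() - start))
            })
        }, 1)
    }
```

The real polyfill also honours the `timeout` option; this sketch omits that for brevity.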
The idle time coroutine runner
export async function run(
    coroutine,
    loopWhileMsRemains = 1,
    timeout = 16 * 10
) {
    const options = {timeout}
    let terminated = false
    let resolver = null
    const result = new Promise(function (resolve, reject) {
        resolver = resolve
        const iterator = coroutine()
        window.requestIdleCallback(run, options)
        function run(api) {
            if (terminated) {
                iterator.return()
                return
            }
            const minTime = Math.max(0.5, loopWhileMsRemains)
            try {
                do {
                    const {value, done} = iterator.next()
                    if (done) {
                        resolve(value)
                        return
                    }
                    if (value === true) {
                        break
                    }
                } while (api.timeRemaining() > minTime)
            } catch (e) {
                reject(e)
                return
            }
            window.requestIdleCallback(run, options)
        }
    })
    result.terminate = function (result) {
        terminated = true
        if (resolver) {
            resolver(result)
        }
    }
    return result
}
That’s it. This version allows yielding `true` to abandon the current frame, and it also provides the returned promise with a `terminate(result)` method that can be used to stop early in case of re-entrancy.
When you call it, it returns a Promise that will resolve with the final return of the generator function. It will run in the idle time of the main thread, and yep, you can run more than one.
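To illustrate running more than one at once, here is a hedged sketch using a simplified stand-in runner - `runBudgeted` below is not the library’s `run`; it uses a fixed 5ms budget and `setTimeout` instead of `requestIdleCallback`, so the same interleaving pattern can be tried anywhere:

```javascript
// Simplified stand-in for the idle-time runner: step the iterator
// until a fixed time budget is spent, then queue another turn.
function runBudgeted(coroutine, budgetMs = 5) {
    return new Promise(function (resolve, reject) {
        const iterator = coroutine()
        function turn() {
            const start = Date.now()
            try {
                do {
                    const {value, done} = iterator.next()
                    if (done) return resolve(value)
                } while (Date.now() - start < budgetMs)
            } catch (e) {
                return reject(e)
            }
            setTimeout(turn, 0)
        }
        setTimeout(turn, 0)
    })
}

// A coroutine that hands back control every 1,000 iterations
function* sumTo(limit) {
    let total = 0
    for (let i = 1; i <= limit; i++) {
        total += i
        if (i % 1000 === 0) yield
    }
    return total
}

// Both coroutines make progress in the same thread's spare time
Promise.all([
    runBudgeted(() => sumTo(100000)),
    runBudgeted(() => sumTo(50000))
]).then(([a, b]) => console.log(a, b)) // → 5000050000 1250025000
```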
JSON et al
Ok, so having built that, we now need versions of the common “heavy” operations that we can use, with a few yields in there.
Douglas Crockford’s JSON stringify is fine, although it does massive work on strings that need splitting up, so that got rewritten as `stringify` and `stringifyAsync` in js-coroutines.
Parsing in Crockford’s code uses `eval()` - that’s not going to help, as we can’t split it up - so I took and optimised someone’s AST parser for JSON and stuck in some generators. It seems performant enough, given we have 60fps animations.
A few obvious array operations are easy to implement with generators:
export function* reduce(array, fn, initial) {
    let result = initial || array[0]
    let index = 0
    for (let item of array) {
        result = yield* fn(result, item, index++, array)
    }
    return result
}
You can see that in here we are using `yield*`, which doesn’t actually yield; it lets the whole state machine hand off to a sub-function, which itself can yield back out to our `.next()`. So yielding in these functions requires that the reduce function does it. To make that easy, I wrote a `yielding(fn)` call that makes a generator which yields every few iterations out of a standard function. If that sounds hard, it isn’t:
export function yielding(fn, frequency = 8) {
    let yieldCount = 0
    return function* (...params) {
        let result = fn(...params)
        if (yieldCount++ > frequency) {
            yieldCount = 0
            yield
        }
        return result
    }
}
The function returns a generator function that passes through its parameters and yields every `frequency` loops.
You can now call a reduce like this:
yield* reduce(
    results,
    yielding((c, a) => c + a),
    0
)
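The same shape works for the other array operations. As an illustration (a sketch in the article’s style, not necessarily the library’s exact implementation), a cooperative `map` looks like this:

```javascript
// A cooperative map in the same style as the reduce above: fn is
// itself a generator (e.g. one produced by yielding()), so it can
// hand control back to the runner part-way through the array.
function* map(array, fn) {
    const result = []
    let index = 0
    for (const item of array) {
        result.push(yield* fn(item, index++, array))
    }
    return result
}
```

Called as `yield* map(items, yielding(x => x * 2))` inside another coroutine, it composes just like the reduce.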
Making it Async
So being able to write your own generators is really nice, but much of the time we probably just want to do a big JSON parse or a sort. Bothering with generator syntax for that - when you aren’t working out how to split up your own deep processing functions - well it’s a bit of a chore.
In comes `wrapAsPromise(generator)`, which takes away the effort, wrapping a generator function in all the necessary boilerplate to initialise it and wait for the result. It returns a function that runs the process.
export function wrapAsPromise(coroutine) {
    return async function (...params) {
        return await run(function* () {
            return yield* coroutine(...params)
        })
    }
}
Which means we can then just define an async JSON function (as I do in the library) like this:
export const parseAsync = wrapAsPromise(parse)
And we get async JSON in any async routine by just calling:
// Yay no lag
let obj = await parseAsync(json)
The Other Kind Of Coroutine
Imperatively controlling animation is nice. We can write a for next loop and just tell something where to go each frame. High priority coroutines can do this with generators just fine:
let multiplier = window.innerWidth / 300
return update(function* () {
    while (true) {
        for (let x = -200; x < 200; x++) {
            logoRef.current.style.marginLeft = `${x * multiplier}px`
            yield
        }
        for (let y = 0; y < 200; y++) {
            logoRef.current.style.marginTop = `${y * multiplier}px`
            yield
        }
    }
})
Here the `update` function uses `requestAnimationFrame()` to run the steps, and `yield` waits for the next frame.
export async function update(coroutine) {
    let terminated = false
    let resolver = null
    const result = new Promise(function (resolve, reject) {
        resolver = resolve
        const iterator = coroutine()
        window.requestAnimationFrame(run)
        function run() {
            if (terminated) {
                iterator.return()
                return
            }
            try {
                const {value, done} = iterator.next()
                if (done) {
                    resolve(value)
                    return
                }
            } catch (e) {
                reject(e)
                return
            }
            window.requestAnimationFrame(run)
        }
    })
    result.terminate = function (result) {
        terminated = true
        if (resolver) {
            resolver(result)
        }
    }
    return result
}
Caveats
We can’t account for GC hitting a frame here or there. You can mitigate that by writing routines that yield `true` to allow more time for it.
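For example, a routine can yield `true` at natural checkpoints (a sketch - `processChunk` here is a hypothetical stand-in for your own per-chunk work):

```javascript
// Hypothetical stand-in for real per-chunk processing
const processChunk = chunk => chunk.reduce((a, b) => a + b, 0)

// Yielding `true` asks the runner to abandon the rest of the current
// idle period after this step, leaving headroom for GC and other work.
function* processAll(chunks) {
    const results = []
    for (const chunk of chunks) {
        results.push(processChunk(chunk))
        yield true
    }
    return results
}
```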
Conclusion
It turns out it’s really not hard to completely split up work across multiple frames and maintain 60fps. I’ve got to thank Paolo and his excellent article, where he messes with React Fiber to enable reparenting of React components, for inspiring me to read some of his references - where suddenly seeing `requestIdleCallback()` gave me a eureka moment.
Frankly after years of struggling I can’t quite believe I can now write:
const records = await parseAsync(json)
await sortAsync(records, a=>a.lastName)
And not risk a massive glitch.
Other great npm packages used include Timsort (for the sorting) and yastjson (as the starting point for a fast JSON parser that works as a coroutine).
The project home page has many more details and examples. The library is available on GitHub and via npm:
npm install js-coroutines
MIT (c) 2020 Mike Talbot et al
Thanks for reading.
MikeT
Top comments (22)
Ok I understand now. It’s all about the `requestIdleCallback`. What got me interested is that I have a repo at github.com/calvintwr/blitz-hermite...
It is a client-side JS image resizer, which is overall performant in context of downsizing huge client images (DSLR whatever just bring it), before transmitting it over to the server for storage. Time difference saved is from uploading the massive image to the server, and sending back a resized version to the client. Almost all online resizers/social media sites do this slower server-side resizing, I really wonder why.
In Blitz, I’m using a worker to do the resampling, but it glitches when the worker sends back the completed work, albeit just for a short while. I might think about how to Coroutines this.. hah.
Right well if you can encode it using a Uint8Array in the worker, there's a version of JSON parse that can handle them and you can use transferrable objects with Uint8Arrays that should be near-instant. Of course, you could do anything else you like with the array using a generator based coroutine.
Ahhh yes that can work. Although the next challenge is to part out your async json parser because I shouldn’t want to bring in the full js-coroutines library just to do that part.
I think you could probably just get away with the coroutine runner in this article and not bother with js-c, that's probably a few lines of code that will deal with your requirements.
Thank mate you are awesome!
I found out that `Uint8Array` is not a transferrable object, is it? developer.mozilla.org/en-US/docs/W... But `ArrayBuffer` is. And ImageData has a property `.data.buffer` that is an ArrayBuffer, so I have used that to transfer between the main and worker threads: github.com/calvintwr/blitz-hermite... ✌️👍
Off-topic, Another approach to handle resource-intensive calculation would be using worker thread instead of the main thread. developer.mozilla.org/en-US/docs/W.... Cool project btw 😄
Completely agree - I do mention it in the write up above. The issue is often the time taken to serialize and deserialize the workload to the other thread. If you are processing binary then it's totally fine and ideal to do it on a worker thread (because you can use transferrable objects). Structured cloning (to threads) used to be very broken, it's now much better but it's still pretty much only the same speed range as a serialize/deserialize to JSON - which can definitely cause frame loss glitches.
I've been wondering idly about adding threading to js-coroutines, but using it to create the packets so that it doesn't glitch on large payloads - thing is, it's pretty specialist to actually need that other thread, so not sure where the benefit cut off would be...
When your objects are so big you have to lz compress your json before sending it to the client then do segmented uncompresses so you don't crash the browser. Have to do it this way because objects have to be available when user loses connection.
Sounds interesting... Seems like you could do with an uncompress that also yields in that case?
@CaseyCole - I've added lz-string compression to the library more details here - not sure it will help in your case, but I'm gonna need it! Thanks for pointing me in this direction...
Wow, really cool!
Thanks :) Really can't believe I didn't think of it before! Seemed so obvious once I'd implemented it.
Hi Mike awesome work! I'm wondering if this can be applied to the backend. What I mean is if you're able to process arrays of a million records could this replace the event loop in node?
In node the event loop is not async, so a long running task can block. Would your generator technique allow more scaling by having http calls overlap and do work during main thread idle time?
You know I think it actually would yes with a modified form to only run for a particular period of time.
At the moment the polyfill does a requestAnimationFrame and then uses setTimeout to measure the amount of time remaining. On node I guess we'd just say "you are allowed 20ms" then skip to the next main loop.
I'll get to doing that.
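Roughly, the Node-side runner could be sketched like this (an assumption of how it might look, not a shipped implementation - a fixed time budget per turn, with `setImmediate` yielding back to the event loop between turns):

```javascript
// Sketch of a Node runner: step the coroutine until the budget for
// this event-loop turn is spent, then setImmediate so I/O callbacks
// and other work can run before the next turn.
function runOnNode(coroutine, budgetMs = 20) {
    return new Promise(function (resolve, reject) {
        const iterator = coroutine()
        function turn() {
            const start = Date.now()
            try {
                do {
                    const {value, done} = iterator.next()
                    if (done) return resolve(value)
                } while (Date.now() - start < budgetMs)
            } catch (e) {
                return reject(e)
            }
            setImmediate(turn)
        }
        setImmediate(turn)
    })
}
```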
Terrific Mike! Imagine you can get node to scale to 500 thousand or more? Currently node scaling is pretty poor. On techempower raw node is only 176190 concurrent requests sec (128th place).
techempower.com/benchmarks/#sectio...
I'm also a js dev so if you need help let me know.
Unjam your server: NodeJS collaborative multitasking
Mike Talbot ⭐ ・ Jul 13 '20
And I'd be delighted to get any help anyone would like to give. I'm sure that there are some other very useful functions that could be added!
I wrote a util almost exactly like this for Unity, to make sure VR games don't drop any frames: github.com/foolmoron/UnityUtils/bl...
Using requestIdleCallback to time the process is genius fit for JS though! Definitely gonna see where I can use this in my json handling.
Really great project, Mike!
Good work and thanks for sharing!
Another DoEvents in a more complex "modern" way.
Yes, with an understanding of how much time is left on the current frame rather than just pumping the message queue.
Thanks for building this, Mike. I was looking for such a library for the last 3-4 months.