Aurélien Delogu

Posted on
Code snippet: A performant async queue

Here's a queue that handles async functions in order with a wait time between each callback. You can call it from anywhere, all tasks will stay in order and won't overlap each other. It can be used for rate limiting, for example.
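The snippet itself didn't survive here, but the idea can be sketched like this (a minimal illustration with a hypothetical `createQueue` API, not the exact code from the post or the `dumb-queue` package): tasks run one at a time, in FIFO order, with a fixed delay between each.

```typescript
// Minimal sketch of an ordered async queue with a wait time between
// tasks. `createQueue` is a hypothetical name, not the post's API.
function createQueue(delayMs: number) {
  const tasks: Array<() => Promise<void>> = [];
  let running = false;

  async function run(): Promise<void> {
    running = true;
    while (tasks.length > 0) {
      const task = tasks.shift()!;
      await task(); // tasks never overlap: each one finishes first
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
    running = false;
  }

  return {
    push(task: () => Promise<void>): void {
      tasks.push(task);
      if (!running) void run(); // start draining lazily
    },
  };
}
```

Pushed from anywhere, callbacks are executed strictly in insertion order, which is what makes this usable as a simple rate limiter.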

There's also an NPM package that has a more complete API: dumb-queue.

Latest comments (7)

peerreynders • Edited

Array.prototype.shift() is useful but rarely "performant" in terms of CPU.

A common tactic is to use two arrays:

  • one for accepting new items (queue)
  • one for consuming items (dequeue)

  • The dequeue operation simply iterates through its current array without ever removing anything.

  • When all the items in the dequeue array have been consumed, it quickly truncates the array with array.length = 0. Then both arrays are swapped: dequeue starts over while queue has an empty array to push new items onto.
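The two-array technique described above can be sketched as follows (illustrative code, not taken from the original library; `SwapQueue` is an assumed name):

```typescript
// Two-array queue: O(1) amortized dequeue, no shift() required.
class SwapQueue<T> {
  private write: T[] = []; // accepts new items
  private read: T[] = [];  // consumed by index, never shifted
  private index = 0;

  enqueue(item: T): void {
    this.write.push(item);
  }

  dequeue(): T | undefined {
    if (this.index >= this.read.length) {
      // Read array exhausted: truncate it and swap roles.
      this.read.length = 0;
      this.index = 0;
      [this.read, this.write] = [this.write, this.read];
      if (this.read.length === 0) return undefined; // queue is empty
    }
    return this.read[this.index++];
  }
}
```

Unlike `shift()`, which moves every remaining element down one slot on each call, this reads by index and only pays for one truncation and swap per batch.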

PS: In your particular use case I doubt it matters but I'm a bit puzzled how other implementations were "performing poorly".

Also, the promise seems to exist just for the sake of the async/await syntax. Simply scheduling a macrotask should work just fine, but maybe I'm missing something.

Aurélien Delogu

Indeed! That's the only thing that was bothering me, since each shift() makes dequeuing O(n). The solution you're proposing would optimize this.

Even though setting the array length to 0 feels a bit hacky to me, resizing an existing array is probably faster than allocating a new one on the heap.

Thanks for the cool advice!

peerreynders

Example codesandbox

Just change the following line to switch implementations:

// in `src/index.ts`
const queue = queue1;
Aurélien Delogu

Thanks for the code! I appreciate it, and it will save me a lot of time, since this would have been the first time I implemented this kind of algorithm. I'll refactor the library and update the snippet soon based on your implementation.

peerreynders

Is it actually faster for your use case though?

That scenario was limited to 350 effects to reliably stay under the 5-second timeout, and if there is a difference, it's likely only around 10 ms.

At this point I don't know how many thousands of queued effects you would need to see a benefit.

Aurélien Delogu • Edited

This was for an old use case, so I need to create benchmarks to see how both solutions behave.
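A rough micro-benchmark sketch for comparing the two approaches (illustrative only; absolute numbers vary wildly by engine, queue size, and warm-up, so treat the output as a hint, not a verdict):

```typescript
// Compare shift()-based draining against index-based reads + truncation.
function benchShift(n: number): number {
  const items = Array.from({ length: n }, (_, i) => i);
  const start = performance.now();
  while (items.length > 0) items.shift(); // O(n) per call
  return performance.now() - start;
}

function benchIndexed(n: number): number {
  const items = Array.from({ length: n }, (_, i) => i);
  const start = performance.now();
  for (let i = 0; i < items.length; i++) items[i]; // O(1) per read
  items.length = 0; // single truncation at the end
  return performance.now() - start;
}

console.log(`shift():  ${benchShift(100_000).toFixed(1)} ms`);
console.log(`indexed:  ${benchIndexed(100_000).toFixed(1)} ms`);
```

As peerreynders notes, the difference only shows up at scale; for a few hundred queued effects, both will be well under a millisecond.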

Aurélien Delogu • Edited

A bit of backstory: when I wrote this code, I needed a performant rate-limiter library to handle tens of thousands of running callbacks. But the existing ones were all bloated, buggy, or slow. So I came up with this tiny cutie.