🦖 Gentle, promise-based HTTP client for Deno and Node.js (part 2)

GitHub deno.land/x npm

✅ DENO RU COMMUNITY

PART 1 (please read the first part before diving into this one <3)

Hello! It's Vsevolod again 😎 In the first part, I outlined the problem: when working with various APIs, we often run into rate limits and restrictions that the standard fetch function cannot handle. After some thought, I arrived at a solution (source code): a queue through which all fetch requests pass at a fixed interval. Yes, this solves the problem described above, but it is highly inefficient...

❤️ And please support the project by starring it on GitHub or subscribing to my Telegram channel IT NIGILIZM! This motivates me to keep developing this topic, writing new articles, and improving this tool!


🫠 Analyzing the mistakes:

So, what's the plan? Let's tackle this! The solution from the first part is inefficient because we make N requests at equal time intervals, waiting for each request to complete before sending the next. Suppose we need to make 3 requests per second, no more. Take a look at the code from the previous article:

// The loop from part 1: one request per interval,
// each awaited before the next one is sent.
while (true) {
  await delay(interval);
  const req = queue.shift();
  if (!req) continue;

  try {
    const response = await fetch(req.url);
    req.resolve(response);
  } catch (error) {
    req.reject(error); // reject, so the caller can catch the error
  }
}

What could go wrong? Let's do the math:

1st request ~300ms +
2nd request ~200ms +
3rd request ~250ms +
interval 1000ms = 1750ms!!!

That means the first three requests take 1750ms in total, and the next three only start after that, even though we wanted to make 3 requests per second. What can be done about this? The most obvious solution is not to send requests sequentially, waiting for each response, but to send them "in parallel".

✨ New Solution!

Alright, let's start by adding a few new fields to HTTPLimiter (hereinafter referred to as "Limiter"):

  1. requestsPerIteration - a counter that keeps track of the number of requests sent during one "iteration".
  2. iterationStartTime - the start time of the current batch of requests, in milliseconds.
  3. loopIsWorking - a flag indicating whether the loop is active (I want to stop the request processing loop once the queue becomes empty).

In IHTTPLimiterOptions, let's also add rps: number (requests per second) for configuring the Limiter's loop. A sketch of all of this follows below.
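Here is a minimal sketch of how these fields and options might look, assuming the names from the list above (the exact shapes of FetchInput and IRequestEntity are my reconstruction from the fetch method shown below, not necessarily what the repository uses):

type FetchInput = string | URL | Request;

interface IHTTPLimiterOptions {
  rps: number; // max requests per second
  interval: number; // pause between loop iterations, in ms
}

interface IRequestEntity {
  input: FetchInput;
  init?: RequestInit;
  resolve: (response: Response) => void;
  reject: (error: unknown) => void;
}

class HTTPLimiter {
  #options: IHTTPLimiterOptions;
  #queue: IRequestEntity[] = [];
  #requestsPerIteration = 0; // requests currently in flight
  #iterationStartTime = 0; // start of the current iteration, in ms
  #loopIsWorking = false; // is the processing loop running?

  constructor(options: IHTTPLimiterOptions) {
    this.#options = options;
  }
}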

Additionally, we'll change the fetch method so that it starts the request processing loop on demand:

  fetch(input: FetchInput, init?: RequestInit): Promise<Response> {
    const promise = new Promise<Response>((resolve, reject) => {
      this.#queue.push({
        input,
        init,
        resolve,
        reject,
      });
    });

    // Now we don't have to run the loop "manually"
    // and leave it running in the background forever
    if (!this.#loopIsWorking) {
      this.#loopIsWorking = true;
      this.#loop();
    }

    return promise;
  }
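To see how this is used, here's a hypothetical example (the option values and URL are made up; the constructor shape follows the fields described above):

const limiter = new HTTPLimiter({ rps: 3, interval: 100 });

// Fire off ten requests at once; the limiter makes sure
// that no more than 3 of them are sent per second.
const responses = await Promise.all(
  Array.from({ length: 10 }, (_, i) =>
    limiter.fetch(`https://example.com/api/items/${i}`)
  ),
);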

Great, now let's break down the new loop. First of all, we need to record the moment the loop started and we began sending requests. This is important because now we'll make an honest effort to accurately count the number of requests per second:

async #loop() {
    this.#iterationStartTime = new Date().getTime();

The first thing to do on each pass of the loop is to check how many requests we have already sent. If we've reached the allowed limit per second, we wait a short interval and check again:

    while (true) {
      if (this.#requestsPerIteration >= this.#options.rps) {
        await delay(this.#options.interval);
        continue;
      }
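By the way, delay here is the helper from Deno's standard library (std/async). If you're following along in Node.js, a minimal stand-in is just a promisified setTimeout:

// Minimal equivalent of std/async delay()
const delay = (ms: number): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, ms));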

Unlike the previous implementation, fetch now runs concurrently with the other fetches from the queue. We don't wait for a response to the previous request before sending the next one; instead, we simply increment the counter when a request goes out and decrement it when its response arrives:

      const entity = this.#queue.shift();

      if (entity) {
        this.#requestsPerIteration++;
        fetch(entity.input, entity.init)
          .then((response) => {
            entity.resolve(response);
          })
          .catch((error) => {
            entity.reject(error);
          })
          .finally(() => {
            this.#requestsPerIteration--;
          });
      }

Next, I establish the termination rule for the loop: if the queue is empty and there are no in-flight requests left to wait for, the loop stops:

      if (!entity && this.#requestsPerIteration === 0) {
        this.#loopIsWorking = false;
        return;
      }

🤒 Here's a small workaround that keeps the request processing loop from hogging the event loop, which would otherwise block the handling of other promises:

      if (!entity && this.#iterationStartTime > 0) {
        await delay(0); // yield to the event loop
      }
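To illustrate why this is needed: a tight while (true) with no await in it monopolizes the event loop, so the .finally() callbacks that decrement the counter would never get a chance to run, and the loop could never see #requestsPerIteration go down. A contrived standalone example (using the same delay helper as above):

let done = false;
setTimeout(() => (done = true), 10);

// Without the delay(0), this loop spins forever:
// the timer callback never gets a chance to run.
while (!done) {
  await delay(0);
}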

And the last part of the loop: controlling the intervals. If we've hit the request limit, we calculate how long to sleep before starting the next iteration. It's not a perfect solution, but it works:

      if (this.#requestsPerIteration === this.#options.rps) {
        const timeOffset = this.#iterationStartTime + 1000 -
          new Date().getTime();
        // Clamp to zero in case the iteration already took longer than a second
        await delay(Math.max(0, timeOffset));
        // !!! there are no guarantees that sent requests will be executed after this delay
        this.#iterationStartTime = new Date().getTime();
      }
    }
  }

Certainly, this is not a perfect solution: it can still lead to problems (for example, when the requests themselves take a very long time) and errors. But the loop now operates much more efficiently than the approach from the first part.
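For reference, here is the whole loop assembled from the pieces above (a sketch; the version in the repository may differ slightly):

  async #loop() {
    this.#iterationStartTime = new Date().getTime();

    while (true) {
      // Over the limit? Wait a bit and re-check.
      if (this.#requestsPerIteration >= this.#options.rps) {
        await delay(this.#options.interval);
        continue;
      }

      const entity = this.#queue.shift();

      if (entity) {
        this.#requestsPerIteration++;
        fetch(entity.input, entity.init)
          .then((response) => entity.resolve(response))
          .catch((error) => entity.reject(error))
          .finally(() => this.#requestsPerIteration--);
      }

      // Queue drained and nothing in flight: stop the loop.
      if (!entity && this.#requestsPerIteration === 0) {
        this.#loopIsWorking = false;
        return;
      }

      // Yield so pending promise callbacks can run.
      if (!entity && this.#iterationStartTime > 0) {
        await delay(0);
      }

      // Hit the rps ceiling: sleep out the rest of the second.
      if (this.#requestsPerIteration === this.#options.rps) {
        const timeOffset = this.#iterationStartTime + 1000 -
          new Date().getTime();
        await delay(Math.max(0, timeOffset));
        this.#iterationStartTime = new Date().getTime();
      }
    }
  }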


🫢 In conclusion and for the future...

Thank you very much for reading the second part of the article! You can find the entire code on GitHub, and I've also created a port for Node.js, which is now available on NPM (all links below). I hope to build a good alternative to great modules like ky or axios, one that takes into account the limits and requirements of various APIs. I'd certainly appreciate any suggestions or pull requests.

GitHub deno.land/x npm

Top comments (4)

Elvis Adomnica

I really enjoyed your articles @sevapp, and they inspired me to write a sort of "reply" article: erranto.com/blog/fetch-rate-limit
I hope you enjoy the read!

Vsevolod

Thank you for your attention and for reading. I didn't think anyone would be interested in this)

Vsevolod

Would you mind contributing your corrections to the repository? I'd welcome a PR with your suggestions.

Elvis Adomnica

My suggestion still has a pretty big design issue, but once I sort that out I will for sure open a PR (even though it will be against older code, it could be a nice source of input).