
Anders Krøyer for IT Minds


External API integrations from a microservice architecture

I recently finished working for a client where I was part of the "Integrations squad". We made it possible for other squads' products to access data from third-party APIs, handling the authentication in both the back- and frontend.

We maintained integrations with a handful of third-party systems, and in this blog post I will highlight one of the challenges we faced.

System setup

A heavily simplified version of our setup looks like this:
[Diagram: simplified setup]

In practice, the setup we used was far more complicated than this, but those details do not add value to this post. What is worth noting, though, is that our backend was deployed as microservices. This meant that our NodeJS APIs ran in multiple instances on multiple machines and were accessed through an API gateway.

Microservices have pros and cons. It's always faster and easier to build a monolith, but microservices can be more resilient and are often easier to scale.

If APIs are built to be stateless (which they should be), deploying them as microservices is no problem: any server can handle any request, because there is no session-related dependency.
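To illustrate what "stateless" means here - this is a generic sketch, not our actual service, and both the Express setup and the ordersRepository stub are made up for the example - any instance behind the gateway could serve a request like this, because nothing is kept in process memory between requests:

import express from "express"; // assumed framework, just for the example

const app = express();

// Hypothetical shared store - in reality a database or cache that every instance can reach.
const ordersRepository = {
    findByUser: async (userId) => [{ id: 1, userId }]
};

// Stateless handler: everything needed to answer the request comes from the
// request itself and the shared store, never from an in-process session.
app.get("/users/:userId/orders", async (req, res) => {
    const orders = await ordersRepository.findByUser(req.params.userId);
    res.json(orders);
});

app.listen(3000);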

This is all well and good until your product owner innocently asks the team during a meeting: "We are adhering to our partners' fair use policies, right?" And the answer was... maybe?

Rate limits

Almost all big public APIs have some kind of fair use policy or rate limit. Facebook, for instance, allows 200 requests per hour per active user. Most APIs have a much simpler model, such as 50,000 requests per day per user or 200 requests per minute per user.
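What happens when you exceed a limit differs from provider to provider, but a common convention is an HTTP 429 response, sometimes with a Retry-After header. A rough sketch of what a caller might see (illustrative only, not specific to any of the APIs we used):

// Illustrative only: many providers signal a broken rate limit with HTTP 429
// and a Retry-After header, but the exact behaviour is provider specific.
const callThirdParty = async (url, accessToken) => {
    const response = await fetch(url, {
        headers: { Authorization: `Bearer ${accessToken}` }
    });

    if (response.status === 429) {
        const retryAfterSeconds = Number(response.headers.get("Retry-After") ?? 1);
        throw new Error(`Rate limited - retry after ${retryAfterSeconds} seconds`);
    }

    return response.json();
};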

While the development team knew that we did not have the traffic required to hit the third-party APIs' rate limits, we had not implemented any controls to ensure the limits were never exceeded. So we all agreed to prioritize this during our next sprint.

We mapped out the rate limits for all the APIs we integrated with and decided to start with the simplest one for a proof of concept, which could then be reused for the other APIs.

The API we picked for our test had a limit of 4 requests per second per user. This meant that if a service made 5 requests for the same user's data at the same time, one of those requests could fail.
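To make that concrete, here is an illustrative snippet - five parallel calls for the same (made-up) user, assuming processRequest is the function that sends the actual request to the third party. With nothing throttling the calls on our side, the fifth one can be rejected upstream:

// Five simultaneous requests for the same user. With a limit of 4 requests
// per second per user, one of them may be rejected by the third party.
const req = { UserId: "user-123" }; // made-up user id

const results = await Promise.allSettled(
    Array.from({ length: 5 }, () => processRequest(req))
);

const rejected = results.filter((r) => r.status === "rejected").length;
console.log(`${rejected} of 5 requests failed, likely because of the rate limit`);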

Minimal homemade solution

If all our API instances had access to shared memory, you could do something like the simplified code below:

// Track how many requests are currently in flight per user.
const userRequests = {};

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const handleRequest = async (req) => {
    const currentActiveRequests = userRequests[req.UserId] || 0;
    if (currentActiveRequests >= 4) {
        // Already at the limit for this user: wait a second and try again.
        await sleep(1000);
        return handleRequest(req);
    }
    userRequests[req.UserId] = currentActiveRequests + 1;
    try {
        return await processRequest(req);
    } finally {
        // Free the slot whether the request succeeded or failed.
        userRequests[req.UserId] -= 1;
    }
};

The above code is obviously not something you would actually use, but you get the idea of what is needed.

Using a library

In general, you should avoid reinventing the wheel, so we looked for libraries which could solve our problem. We found BottleneckJS and decided to try to use it for our initial prototype.

To support the prototype, we took the part of our infrastructure that sent the actual third-party requests and scaled it down to a single instance, so we could run Bottleneck in its in-memory mode and verify that it solved our problem.

import Bottleneck from "bottleneck";

// Local (in-memory) limiter: at most 4 concurrent jobs, and at least
// 250 ms between job starts (roughly 4 requests per second).
const limiter = new Bottleneck({
    maxConcurrent: 4,
    minTime: 250,
    datastore: "local"
});

const handleRequest = async (req) => {
    // schedule() queues the job and only runs it when the limiter allows it.
    return limiter.schedule(() => processRequest(req));
};

The code above ran without issues and worked correctly, but the obvious disadvantage is that it only works if we have a single instance of our application running.

Using Redis instead of in-memory store

In order to avoid having a single point of failure and to allow our service to scale horizontally by adding more instances, we decided the next logical step would be to add a Redis cluster. Bottleneck has built-in support for Redis, so we had to make very few changes to our code to start using Redis and run our service in multiple instances.

import Bottleneck from "bottleneck";

// Redis-backed limiter: the counters live in Redis, so every instance
// of the service shares the same limits.
const limiter = new Bottleneck({
    maxConcurrent: 4,
    minTime: 250,
    datastore: "redis",
    clearDatastore: false,
    clientOptions: {
        host: "127.0.0.1",
        port: 1337
    }
});

const handleRequest = async (req) => {
    return limiter.schedule(() => processRequest(req));
};

And that's a brief account of how we enforced rate limits in our microservices. If you have any questions or feel that another solution might have been better - write a comment!

Top comments (1)

ImTheDeveloper

Just found this as we have a similar setup we are working through. To make matters more complex this is also multi-tenanted and those third party APIs are called based on the api keys provided by customers. This means each rate limit is per key so there's a few more tenant id type limits we need to follow.