
Rate Limits and Fallbacks in Eden AI API Calls

Accessing external APIs is common practice today, but these APIs often come with rate limits that can interrupt the flow of data through your application.

This article explains how to effectively manage rate limits when calling Eden AI's API and configure fallbacks to keep your application working even when the provider's rate limits are reached.

What are Rate Limits?

Rate limits restrict the number of API requests a user can make during a set period. They exist to guarantee fair use of resources, prevent abuse, and maintain quality of service for all users. The API owner defines the policies that limit access for individual users and automated programs.

Eden AI Default Rate Limits
Eden AI sets API usage limits that vary depending on your subscription. These limits are designed to prevent excessive API consumption. The service offers three subscription plans, each with its own restrictions:

  1. Starter Plan: Allows you to make up to 60 API calls per minute.
  2. Personal Plan: Raises the limit to a maximum of 300 API calls per minute.
  3. Professional Plan: Provides the highest level of access, allowing you to make up to 1000 API calls per minute.


If your application exceeds the rate limits set by Eden AI, you will receive an HTTP 429 Too Many Requests response.
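
For illustration, here is a minimal sketch of how you might detect this on the client side with Python's requests library, using the same chat endpoint shown later in this article:

import requests

headers = {"Authorization": "Bearer 🔑 Your_API_Key"}
url = "https://api.edenai.run/v2/text/chat"
payload = {"providers": "openai", "text": "Hello, I need your help!"}

response = requests.post(url, json=payload, headers=headers)

# Eden AI signals an exceeded rate limit with an HTTP 429 status code
if response.status_code == 429:
    print("Rate limit reached - slow down or retry later")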

How to Handle Provider Rate Limit Errors?

Because Eden AI aggregates many providers' APIs, individual providers may enforce their own, stricter rate limits, particularly for very popular APIs such as OpenAI's ChatGPT or Google Vertex AI, where surges in demand can trigger rate limit errors.

Example of an OpenAI rate limit error:

ProviderLimitationError : That model is currently overloaded with other requests.

To ensure the uninterrupted operation of your application, even when rate limits are exceeded, you can take advantage of Eden AI's fallback parameters.

What is a Fallback Mechanism?

A fallback mechanism is a strategy your application employs when a rate limit is exceeded. Instead of letting the request fail, it handles the rate limit constraint and keeps the application running.

There are several ways to implement a fallback: you can retry requests after a delay, or queue them in a first-in-first-out manner when they exceed the rate limit.
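
For instance, the delay-based approach could look roughly like the sketch below (a minimal example; the retry count and wait times are arbitrary choices, not values recommended by Eden AI):

import time
import requests

def call_with_retry(url, payload, headers, max_retries=3):
    """Retry a request with an increasing delay whenever HTTP 429 is returned."""
    for attempt in range(max_retries + 1):
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code != 429:
            return response
        # Wait longer after each rate-limited attempt: 1s, 2s, 4s, ...
        time.sleep(2 ** attempt)
    return response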

Fortunately, you don't need to implement this yourself: Eden AI provides a fallback_providers parameter to manage fallbacks for you.

How do Fallback Parameters work?

Using Eden AI, you can easily configure the fallback_providers parameter to handle situations when a provider encounters rate limits. Here's how it works:

  1. When making an API call, you specify the primary provider you want to use.
  2. Within the fallback_providers parameter, list up to five potential substitute providers for Eden AI to refer to if the main provider, defined within the "providers" parameter, runs into rate limit problems.

It's important to note that you should have only one provider in the providers parameter.

These alternative providers will be attempted in the order they are listed until a responsive provider is found. Once one responds successfully, Eden AI stops trying further fallbacks.

Here's an example of how you would specify these parameters when calling the chat endpoint:

import json
import requests

# Authenticate with your Eden AI API key
headers = {"Authorization": "Bearer 🔑 Your_API_Key"}
url = "https://api.edenai.run/v2/text/chat"

payload = {
    "providers": "openai",  # primary provider
    "text": "Hello, I need your help!",
    "fallback_providers": "google, replicate",  # tried in order if openai fails
}

response = requests.post(url, json=payload, headers=headers)
result = json.loads(response.text)
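
To see which provider actually handled the request, you can inspect the result. Below is a rough sketch that assumes the response is a dictionary keyed by provider name, with each entry carrying a status field, as in Eden AI's documentation examples; check the API reference for the exact schema:

# The provider that answered (primary or a fallback) reports a "success" status.
# Field names here are assumptions -- verify them against Eden AI's API reference.
for provider, data in result.items():
    if isinstance(data, dict) and data.get("status") == "success":
        print(f"Answered by {provider}: {data.get('generated_text')}")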

This approach ensures that your application can gracefully handle rate limit issues and seamlessly switch to backup providers when needed. It can also be adapted to handle rate limits for a single provider by specifying that provider multiple times in the fallback_providers parameter, as shown in the following example:

payload = {
    "providers": "openai",
    "text": "Hello, I need your help!",
    # Retry the same provider up to five more times if the first call fails
    "fallback_providers": "openai, openai, openai, openai, openai",
}

With this configuration, Eden AI will retry the OpenAI provider, checking whether each attempt failed before trying again.

Note that the fallback_providers parameter is not available for asynchronous endpoints.

Conclusion

Effectively managing rate limits in API calls is vital for application reliability. Eden AI has a fallback mechanism in place, enabling smooth transitions to alternative providers when rate limits are reached. This guarantees uninterrupted service during times of high demand, ensuring a consistent user experience!

Create your Account on Eden AI
