DEV Community

mobius-crypt


How to Create a High-Performance API Gateway Using Python

In today's world, APIs have become an essential part of modern applications. They allow different services to communicate and exchange data with each other, enabling the creation of complex systems. However, managing API requests can be challenging, especially when dealing with multiple API servers. In this article, we will explore how to build a gateway server that can handle API requests and distribute them to different servers.

Introduction

A gateway server is a centralized server that receives requests from clients and distributes them to different API servers. The gateway server acts as an intermediary between the clients and the API servers, handling authentication, rate limiting, and caching of responses. The advantage of using a gateway server is that it simplifies the management of API requests and provides a single point of entry for clients.

In this article, we will focus on building a gateway server using Python and FastAPI. We will use asyncio to handle asynchronous requests and Redis for caching.

Building the Gateway Server

To build our gateway server, we will use FastAPI, a modern web framework for building APIs with Python. We will define an endpoint that receives API requests and distributes them to different API servers. We will also implement rate limiting, authentication, and caching of responses.

Our endpoint is defined as follows:


@app.get("/api/v1/{path:path}", include_in_schema=False)
@auth_and_rate_limit
async def v1_gateway(request: Request, path: str):
    """
    Master router for the gateway.

    NOTE: for the gateway server to work properly it needs at least 2 GB of RAM.

    :param request: the incoming client request
    :param path: the path of the upstream API endpoint being requested
    :return: a JSONResponse, either from the cache or from one of the API servers
    """

This endpoint receives requests with a path parameter, which is the URL of the API endpoint that the client wants to access. We use the auth_and_rate_limit decorator to enforce rate limiting and authentication.
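The auth_and_rate_limit decorator is specific to the article's codebase and is not shown here. As a rough sketch of the rate-limiting half, a sliding-window limiter for async handlers might look like the following (the names rate_limit and handler are illustrative, not part of the article's code, and the API-key check is omitted):

```python
import asyncio
import time
from functools import wraps


def rate_limit(max_calls: int, period: float):
    """Illustrative sliding-window rate limiter for async handlers.

    The article's auth_and_rate_limit decorator presumably also verifies
    the API key; that half is omitted in this sketch.
    """
    timestamps: list[float] = []

    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            now = time.monotonic()
            # discard calls that have slid out of the window
            while timestamps and now - timestamps[0] > period:
                timestamps.pop(0)
            if len(timestamps) >= max_calls:
                raise RuntimeError("rate limit exceeded")
            timestamps.append(now)
            return await func(*args, **kwargs)
        return wrapper
    return decorator


@rate_limit(max_calls=2, period=1.0)
async def handler(path: str) -> str:
    return f"ok:{path}"
```

In a real FastAPI gateway the limiter would raise an HTTPException with status 429 rather than a plain RuntimeError, and the window state would live in Redis so it is shared across workers.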

Next, we extract the API key from the request's query parameters and pass it to the create_take_credit_args function, which handles authentication of the API key.

api_key: str = request.query_params.get('api_key')
_path = f"/api/v1/{path}"
await create_take_credit_args(api_key=api_key, path=_path)


Once authentication is complete, we create a list of API servers that we want to distribute the request to.

api_urls = [f'{api_server_url}/api/v1/{path}' for api_server_url in api_server_urls]


We then check if the response for the request is cached in Redis. If it is, we return the cached response. Otherwise, we send the request to all API servers and wait for their responses.

tasks = [redis_cache.get(key=api_url, timeout=60*5) for api_url in api_urls]
cached_responses = await asyncio.gather(*tasks)

for i, response in enumerate(cached_responses):
    if response is not None:
        app_logger.info(msg=f"Found cached response from {api_urls[i]}")
        return JSONResponse(content=response, status_code=200, headers={"Content-Type": "application/json"})

try:

    # 5 minutes timeout on resource fetching from backend - some resources may take very long
    tasks = [requester(api_url=api_url, timeout=300) for api_url in api_urls]
    responses = await asyncio.gather(*tasks)

except asyncio.CancelledError:
    responses = []
except httpx.HTTPError as http_err:
    responses = []
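The requester helper itself is not shown in the article. The key property of the fan-out above is that asyncio.gather preserves task order, so responses[i] always lines up with api_urls[i]. The sketch below illustrates that pattern with a simulated backend call standing in for the real httpx request (requester_sketch and fan_out are hypothetical names):

```python
import asyncio


async def requester_sketch(api_url: str, timeout: float, delay: float = 0.0):
    """Hypothetical stand-in for the article's httpx-based requester:
    simulates a backend call and enforces a per-request timeout."""
    async def fake_fetch():
        await asyncio.sleep(delay)
        return {"status": True, "url": api_url}
    try:
        return await asyncio.wait_for(fake_fetch(), timeout=timeout)
    except asyncio.TimeoutError:
        # a backend that times out is treated as "no response"
        return None


async def fan_out(api_urls: list[str]):
    # gather keeps results aligned with api_urls, which the gateway
    # relies on when it pairs responses[i] with api_urls[i]
    tasks = [
        requester_sketch(url, timeout=0.05, delay=0.2 if "slow" in url else 0.0)
        for url in api_urls
    ]
    return await asyncio.gather(*tasks)


responses = asyncio.run(
    fan_out(["https://fast.example/api", "https://slow.example/api"])
)
```

Returning None for a failed or timed-out backend, instead of letting the exception propagate, is what lets the loop that follows fall through to the next server.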

If we receive a response from any of the API servers, we cache it in Redis and return it to the client.


    app_logger.info(msg=f"Request Responses returned : {len(responses)}")
    for i, response in enumerate(responses):
        if response and response.get("status", False):
            api_url = api_urls[i]
            # NOTE, Cache is being set to a ttl of one hour here
            await redis_cache.set(key=api_url, value=response, ttl=60 * 60)
            app_logger.info(msg=f"Server Responded for this Resource {api_url}")
            return JSONResponse(content=response, status_code=200, headers={"Content-Type": "application/json"})
        else:
            # Sometimes a backend (cron) server is busy; log the failure and
            # keep looping so that a response is always returned to the client.
            app_logger.warning(msg=f"""
            Server Failed To Respond - Or Data Not Found
                Original Request URL : {api_urls[i]}
                Actual Response : {response}          
            """)

    mess = "All API Servers failed to respond - Or there is no Data for the requested resource and parameters"
    app_logger.warning(msg=mess)
    # TODO - send notifications to developers that the API servers are down, or that requests are coming up empty-handed
    _time = datetime.datetime.now().isoformat(sep="-")

    # TODO - create Dev Message Types - Like Fatal Errors, and etc also create Priority Levels
    _args = dict(message_type="resource_not_found", request=request, api_key=api_key)
    await email_process.send_message_to_devs(**_args)
    return JSONResponse(content={"status": False, "message": mess}, status_code=404,
                        headers={"Content-Type": "application/json"})



If no responses are received from any of the backend servers, the function logs a warning that all API servers failed to respond or that there is no data for the requested resource and parameters. It then notifies the developers via email_process.send_message_to_devs and returns a JSON response with a 404 status code and a Content-Type header of application/json, whose body has a status field set to False and a message field carrying the error message.

if response and response.get("status", False):


If a backend server responds and its status field is set to True, the response is cached via redis_cache.set and returned immediately with a 200 status code and a Content-Type header of application/json. Otherwise, the function logs a warning that the server failed to respond or that the requested data was not found.
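The redis_cache object belongs to the article's own codebase. A minimal in-process stand-in with the same get/set/ttl semantics (JSON-serialized values that expire after ttl seconds) could look like this; TTLCache is a hypothetical name, and a real deployment would use Redis so the cache is shared across gateway workers:

```python
import json
import time


class TTLCache:
    """Minimal in-process stand-in for the article's redis_cache:
    values are JSON-serialized and expire after ttl seconds."""

    def __init__(self):
        self._store: dict[str, tuple[float, str]] = {}

    async def set(self, key: str, value, ttl: int) -> None:
        # store the expiry time alongside the serialized payload
        self._store[key] = (time.monotonic() + ttl, json.dumps(value))

    async def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, payload = entry
        if time.monotonic() > expires_at:
            # lazily evict expired entries on read
            del self._store[key]
            return None
        return json.loads(payload)
```

Serializing to JSON mirrors what a Redis-backed cache does, since Redis stores strings and bytes rather than Python objects.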

Conclusion

In conclusion, the v1 gateway API endpoint provides a robust and reliable way to retrieve data from multiple API servers.

It utilizes caching to reduce the load on the API servers and improve the response time for frequently requested resources.

It also implements rate limiting and authentication to ensure the security and stability of the system.

In the event that all API servers fail to respond or return no data, the system sends notifications to the developers and returns an appropriate error message to the client.

Overall, this API endpoint is a great example of how to design and implement a scalable and resilient API gateway for modern web applications.

The gateway above was implemented as a gateway to the EOD Stock API, which can be found here.

The API offers scalable solutions for your websites and applications if you are looking for:

  1. Exchange Information
  2. Stock Tickers Data
  3. End of Day (EOD) Stock Data
  4. Fundamental Data
  5. Stock Options And Splits Data
  6. Financial News API
  7. Social Media Trend Data For Stocks
  8. Sentiment Analysis for News & Social Media
