Async Axiom logging

The Axiom documentation for its Python client does not provide any examples of how to log asynchronously. Both the Axiom handler and Python's default logging utilities are blocking. And while streaming to stdout is not a big issue, blocking on network requests might be (axiom-py utilizes raw urllib under the hood). Here's how we addressed this at Katalist.

We have a file called logging.py from which we import the logging handlers we're going to use in our logging config.

# main.py
import logging

from fastapi import FastAPI

# ... other imports
from katalist.logging import handlers

logging.basicConfig(
    handlers=handlers,
    force=True,
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",  # Define log message format.
    datefmt="%Y-%m-%d %H:%M:%S",  # Define the date format to include in log messages.
)

app = FastAPI()

# all the views...

One important thing to note here: without setting force to True, the handler configuration was being ignored. That's because basicConfig does nothing if the root logger already has handlers; force=True removes the existing handlers first. Other than that, there's nothing special about this setup.
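
To see why force matters, here's a minimal, standalone illustration (not from our app) of basicConfig being a no-op when a handler is already attached to the root logger:

import logging
from logging import StreamHandler

# Simulate a framework or server that has already configured the root logger.
logging.getLogger().addHandler(StreamHandler())

# Silently ignored: the root logger already has a handler.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# With force=True the existing handlers are removed and the new config applies.
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s %(message)s",
    force=True,
)

logging.info("configured")  # now prints "INFO configured"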

So how do we include the AxiomHandler from axiom-py, which is synchronous by nature, among our handlers? The answer lies in the QueueHandler utility that Python's logging module provides out of the box. It lets us enqueue log records that are then consumed on another thread, started by a QueueListener. Python's queue.Queue, a thread-safe queue implementation, takes care of the communication between the threads.

# logging.py
import queue
from logging import Handler, StreamHandler
from logging.handlers import QueueHandler, QueueListener

from axiom.logging import AxiomHandler, Client

from katalystai.conf import SETTINGS


# Always log to stdout; only ship logs to Axiom in production.
handlers: list[Handler] = [StreamHandler()]

if SETTINGS.mode == "production":
    # Records are enqueued here by the QueueHandler...
    log_queue = queue.Queue()
    queue_handler = QueueHandler(log_queue)

    axiom_client = Client(token=SETTINGS.axiom_token)

    axiom_handler = AxiomHandler(
        client=axiom_client,
        dataset="katalist-backend",
    )

    # ...and handed to the AxiomHandler on a background thread.
    QueueListener(log_queue, axiom_handler).start()
    handlers.append(queue_handler)

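A side note: the snippet above starts the QueueListener but never stops it. If you want any queued records to be flushed on shutdown, you can keep a reference to the listener and stop it when the process exits. A minimal sketch (not part of our original setup):

import atexit

listener = QueueListener(log_queue, axiom_handler)
listener.start()

# QueueListener.stop() enqueues a sentinel and joins the worker thread,
# so records already sitting in the queue are still delivered first.
atexit.register(listener.stop)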

This is already nice, but one issue remains. QueueHandler's prepare step strips off everything but the pre-formatted message, to prevent pickling errors when records are handed off through a queue. But it's 2024 and everyone knows structured logging is the name of the game, and Axiom is good for little if not for processing structured logs. Hence we need to modify how the QueueHandler prepares our records.
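
For reference, this is roughly what the stock QueueHandler.prepare does in CPython (paraphrased from the standard library; details vary slightly between Python versions):

import copy
from logging import LogRecord
from logging.handlers import QueueHandler


class StockLikeQueueHandler(QueueHandler):
    def prepare(self, record: LogRecord) -> LogRecord:
        # Merge args into the message up front...
        msg = self.format(record)
        # ...work on a copy so other handlers still see the original record...
        record = copy.copy(record)
        record.message = msg
        record.msg = msg
        # ...and drop everything that might not pickle, including args.
        record.args = None
        record.exc_info = None
        record.exc_text = None
        return record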

To achieve this we will subclass QueueHandler and keep the record's args attribute intact.

Here's the final code:


# logging.py
import queue
from logging import INFO, Handler, LogRecord, StreamHandler
from logging.handlers import QueueHandler, QueueListener

from axiom.logging import AxiomHandler, Client

from katalystai.conf import SETTINGS

handlers: list[Handler] = [StreamHandler()]


class KatalistQueueHandler(QueueHandler):
    """A QueueHandler that keeps record.args so structured data reaches Axiom."""

    def prepare(self, record: LogRecord) -> LogRecord:
        # The stock prepare() merges args into the message and clears them
        # to keep the record pickleable. Our records only cross a thread
        # boundary, so we can safely hang on to args for Axiom.
        args = record.args
        super().prepare(record)
        record.args = args

        return record


if SETTINGS.mode == "production":
    log_queue = queue.Queue()
    queue_handler = KatalistQueueHandler(log_queue)

    axiom_client = Client(token=SETTINGS.axiom_token)

    axiom_handler = AxiomHandler(
        client=axiom_client,
        dataset="katalist-backend",
        level=INFO,
    )

    QueueListener(log_queue, axiom_handler).start()
    handlers.append(queue_handler)

Now we can log structured data anywhere in our application without worrying about blocking network calls.

from logging import getLogger

logger = getLogger(__name__)

logger.info("I am logged", {"myname": "robby bobby"})


While this workaround works nicely, you now need to make sure to only log data types that can be pickled. In all honesty, I don't see why you'd log anything but dicts, strings and numbers, but who knows. You've been warned.
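
As a rule of thumb (our own convention, not something axiom-py enforces), stick to JSON-friendly values in the structured payload. The db_session below is purely illustrative:

# Fine: plain dicts of strings and numbers.
logger.info("order processed", {"order_id": 42, "total": 19.99, "currency": "EUR"})

# Risky: complex objects such as DB sessions or open file handles are
# unlikely to serialize cleanly when the record is shipped to Axiom.
# logger.info("order processed", {"session": db_session})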
