When thinking about serverless applications, one thing that comes to mind immediately is efficiency. Running code that gets the job done as swiftly and efficiently as possible means you spend less money, which means good coding practices suddenly have a direct impact on your bottom line. How does logging play into this, though? Every logging action your application takes falls within the scope of that same performance evaluation, and logging can be optimized just like any other process your code spins up. In this series of posts, let's dive into how you can think about logging in a serverless world.
Part 1: Performance, Episode 1
I know that one of the benefits of serverless architectures is not managing your own hardware, but you still need to consider the hardware your code requires when thinking about managing cost. Most serverless providers charge by some combination of the amount of memory and the amount of time your function or application needs. In a significantly simplified pricing model covering two offerings from each provider, just for argument's sake: AWS charges per request plus GB-seconds of memory (Lambda), or per vCPU-second plus GB-seconds of memory (Fargate), whereas GCP charges per vCPU-second (Cloud Run) or per invocation (Cloud Functions). There's a lot more involved in pricing structures for these as-a-Service systems, including the cost of network ingress and egress, storage, data stores, and analytics. For this deep dive on logging, though, we're only going to examine the actual compute power.
Knowing how much compute power your code will need drives the decision between tiers, so performance translates directly into money. Your choices around how you log data can affect those performance metrics. Let's start by examining the first time your code spins up.
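To make the memory-times-time relationship concrete, here's a minimal sketch of the arithmetic. The rates and numbers below are illustrative placeholders only, not any provider's real pricing:

```python
# Hypothetical cost sketch: memory x duration drives the compute bill.
# These rates are illustrative placeholders; check your provider's pricing page.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002


def estimated_cost(memory_gb: float, duration_seconds: float, requests: int) -> float:
    """Rough compute-cost estimate for a function sized at memory_gb GB."""
    compute = memory_gb * duration_seconds * requests * PRICE_PER_GB_SECOND
    return compute + requests * PRICE_PER_REQUEST


# Shaving 50 ms of overhead off one million 512 MB invocations:
baseline = estimated_cost(0.5, 0.250, 1_000_000)
trimmed = estimated_cost(0.5, 0.200, 1_000_000)
print(f"baseline: ${baseline:.2f}, trimmed: ${trimmed:.2f}, saved: ${baseline - trimmed:.2f}")
```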
Cold Start Considerations
One of the biggest concerns with serverless performance and efficiency is the cold start, or the first spin-up of an instance when there's no warm (recent) instance available. At the start of a serverless run, the service brings a container online, and then the container starts running your specific code. The moment the container begins running your unique instance, the meter starts. The first thing the container does on a cold start is a bootstrapping process where it installs any dependencies you expect to need. If you've ever watched a Docker container image build, you know that this process can take on the order of minutes. Then the container's environment is set, and your code begins to run. On a warm start, on the other hand, the environment comes preset, and your code begins running immediately.
As you might guess, the bootstrapping process in a cold start can be expensive. This need for efficient setup is where dependency management becomes a not-so-secret weapon for lowering the cost of a serverless system, and your logging library of choice is no exception. Since setting up dependencies takes compute time, you have to factor that time into your performance calculations. As a result, the ideal state for logging in a serverless architecture is to use built-in methods wherever possible. That means using Python's built-in logging library, for example, to reduce calls to PyPI. If you can't use a built-in library, such as when you're running a Node.js function and need more than just `console.log()`, the best logging library for you is the one that balances the features you want against the fewest dependencies needed to make those features happen.
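As a minimal sketch of what that looks like with Python's built-in library (the handler signature and logger name here are hypothetical and not tied to any particular provider):

```python
import logging

# Module-scope setup runs once per cold start; warm invocations reuse it.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("my-function")  # hypothetical logger name


def handler(event, context):
    # Per-invocation work only; no logging setup cost in here.
    logger.info("received event with %d keys", len(event))
    return {"statusCode": 200}
```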
Construction
You should also examine how you construct logging messages and objects. If there's upfront processing happening when you build a message or an object, consider whether you can defer or offload that work to reduce compute usage. To go back to Python as an example, consider this simple benchmark set¹, run on Python 3.7.4:
| Method | With 1 String (microsec) | With String and Integer (microsec) | With Multiple Inputs (microsec) |
|---|---|---|---|
| %-format | 19.376829 | 19.629766 | 21.335113 |
| %-format, direct call | 20.347689 | 19.808525 | 20.626333 |
| str.format() | 20.324141 | 20.329610 | 21.905827 |
| f-string | 19.461603 | 19.875766 | 20.073513 |
| concatenation | 18.930094 | 19.837633 | 24.505294 |
| direct log, as variable | 21.637833 | -- | -- |
| direct log, direct insertion | 21.816895 | -- | -- |
| direct log, as list | -- | 22.872546 | 23.445436 |
These `timeit` results came from using different string-generation methods and passing the result to a logger to drop a human-readable message into the logs, adding complexity as we went (a sketch of a comparable harness appears after the list). The methods in the benchmark are:

- the original %-formatting method generating a string and then passing that to the logger,
- a direct call passing a %-format string and its variables to the logger,
- the `str.format()` method generating a string and then passing that to the logger,
- the f-string formatting method generating a string and then passing that to the logger,
- a generic string concatenation method generating a string and then passing that to the logger,
- a direct pass of a variable pointing to a string to the logger to generate the message internally,
- a direct pass of a string to the logger to generate the message internally, and
- a direct pass of a list to the logger to add to the message at the end.
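Here's a rough illustration of how timings like these can be gathered. This is a minimal sketch, not the exact harness from the repo linked in the footnote; the logger name, sample data, and iteration count are arbitrary:

```python
import logging
import timeit

# Send log records nowhere so we time message construction, not I/O.
logger = logging.getLogger("bench")
logger.addHandler(logging.NullHandler())
logger.setLevel(logging.INFO)
logger.propagate = False

name, count = "serverless", 42

benchmarks = {
    "%-format": lambda: logger.info("user %s logged %d events" % (name, count)),
    "%-format, direct call": lambda: logger.info("user %s logged %d events", name, count),
    "str.format()": lambda: logger.info("user {} logged {} events".format(name, count)),
    "f-string": lambda: logger.info(f"user {name} logged {count} events"),
    "concatenation": lambda: logger.info("user " + name + " logged " + str(count) + " events"),
}

for label, fn in benchmarks.items():
    # timeit returns total seconds for `number` runs; report microseconds per call.
    total = timeit.timeit(fn, number=100_000)
    print(f"{label:>22}: {total / 100_000 * 1e6:.3f} microsec per call")
```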
As we add more integers and strings, the complexity of each call rises, and we start to encounter the different costs associated with each string-conversion call. The concatenation method, for example, requires a call to `str()` to convert the integer into a string before it can be concatenated. You'll notice that, depending on need, some methods are more efficient than others. Under the hood, Python's logging library uses %-formatting, the oldest method, to ensure backwards compatibility.
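One practical consequence of that %-formatting heritage is worth a quick sketch: when you pass the format string and its arguments to the logger directly, interpolation is deferred until a handler actually emits the record, so messages below the configured level never pay the string-construction cost. The logger name and payload here are just examples:

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("example")

payload = {"user": "nimbus", "events": 42}  # hypothetical data

# Eager: the f-string is built even though INFO is below the WARNING threshold.
logger.info(f"processed payload {payload}")

# Deferred: the %-style template and args are stored on the record and only
# interpolated if the record is actually emitted, so the string work is skipped.
logger.info("processed payload %s", payload)
```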
We'll spend more time in a different series digging into Python's logging library and better benchmarks, but this example should give you a fairly good sense of why you need to consider how you construct your logging messages if you're going text-based. These numbers may not seem like much, but in a real application you're probably making hundreds of calls to your logger, with far larger messages than these. The difference between the fastest and slowest methods with multiple inputs is on the order of 4.5 microseconds, and this message-generation example is extremely simple compared to what you typically need for logging in a serverless application. When milliseconds count, this kind of consideration is a priority.
Next time
Once you've considered how to improve the performance of startup and of basic text-based messages in your serverless logging setup, you need to start thinking about crafting logging objects that travel over the network in the smallest possible package, as fast as possible. In the next post, we'll dive into logging objects versus strings, and in subsequent posts we'll start thinking about memory allocation, concurrency and state, and security.
---
¹ If you want to test these benchmarks yourself on your own system, you can find the code at https://github.com/nimbinatus/benchmarking-logs. The repo is a work in progress, so the code may have changed by the time you read this.