Introduction
AWS Lambda functions with large dependencies can suffer from significant cold start times due to the time it takes to import these dependencies. In this post, we'll explore a simple yet effective way to reduce Python cold start times without changing a line of code by precompiling dependencies into bytecode. We'll cover the following topics:
- Understanding Python bytecode and `*.pyc` files
- Precompiling dependencies to reduce init duration
- Using Python optimization levels for further improvements
- Reducing memory overhead with `PYTHONNODEBUGRANGES=1`
Understanding Python Bytecode and *.pyc Files
When Python loads source code for the first time, it compiles it to bytecode and saves it to `*.pyc` files in `__pycache__` directories. On subsequent loads, Python uses these precompiled `*.pyc` files instead of recompiling the source code, saving time.
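You can see this caching step directly with the standard-library `py_compile` module (the module name `greet.py` below is just an illustration):

```python
import pathlib
import py_compile
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
src = workdir / "greet.py"  # hypothetical module, for illustration
src.write_text("def hello():\n    return 'hi'\n")

# Compiling writes the bytecode under __pycache__/ by default,
# tagged with the interpreter version (e.g. greet.cpython-312.pyc).
pyc_path = py_compile.compile(str(src))
print(pyc_path)
```

This is exactly what the interpreter does implicitly on first import; precompiling just moves the work to build time.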
By precompiling dependencies and removing the original `*.py` files, we can skip the bytecode compilation step during function initialization, which can significantly reduce init duration. For example, a simple handler with `import numpy as np` can see an init duration reduction of approximately 20%.
Removing `*.py` files affects the detail in tracebacks when exceptions occur. With `*.py` files, tracebacks include the relevant source code lines. Without `*.py` files, tracebacks only display line numbers, requiring you to refer to your version-controlled source code for debugging. For custom code not in version control, consider keeping the relevant `*.py` files to aid in debugging. For third-party packages, removing `*.py` files can improve cold start times at the cost of slightly less detailed tracebacks.
Benefits
- Faster imports during runtime
- Reduced package size from removing `*.py` files
- The same `*.pyc` files work on any OS (bytecode is tied to the Python version, not the platform)
How to precompile dependencies and remove `*.py` files:

```
$ python -m compileall -b .
$ find . -name "*.py" -delete
```

The `-b` flag writes each `.pyc` next to its `.py` file instead of into `__pycache__/`, which is required for imports to keep working once the `.py` files are deleted.
Caution
Always test your code after precompilation, as some packages rely on the presence of `*.py` files.
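A quick sketch of both sides of this trade-off: a sourceless `.pyc` still imports fine, but anything that reads source, such as `inspect.getsource`, stops working. The module name here is made up for the demo:

```python
import inspect
import pathlib
import subprocess
import sys
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
(workdir / "mymod.py").write_text("def f():\n    return 1\n")

# Compile to a legacy-location .pyc (-b), then delete the source.
subprocess.run(
    [sys.executable, "-m", "compileall", "-b", "-q", str(workdir)],
    check=True,
)
(workdir / "mymod.py").unlink()

sys.path.insert(0, str(workdir))
import mymod

print(mymod.f())  # the sourceless import still works
try:
    inspect.getsource(mymod.f)
except OSError:
    print("no source available for tracebacks or introspection")
```

Packages that introspect their own source at runtime will hit the same `OSError`, which is why testing after precompilation matters.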
Using Python Optimization Levels
Python offers optimization levels that can further improve init and runtime duration by removing debug statements and docstrings.
Benefits
| Optimization Level | Effect |
|---|---|
| `-O` | Removes `assert` statements and code blocks that rely on `__debug__` |
| `-OO` | Removes `assert` statements, `__debug__` code blocks, and docstrings |
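The table above can be checked with a small subprocess experiment (a sketch; the sample function is made up):

```python
import subprocess
import sys

prog = (
    "def f():\n"
    '    """docstring"""\n'
    "    assert False, 'stripped under -O and -OO'\n"
    "    return f.__doc__\n"
    "print(f())\n"
)

# Default mode: the assert fires and the process exits with an error.
plain = subprocess.run([sys.executable, "-c", prog],
                       capture_output=True, text=True)

# -OO: asserts are removed, and the docstring is stripped too.
oo = subprocess.run([sys.executable, "-OO", "-c", prog],
                    capture_output=True, text=True)

print(plain.returncode)   # non-zero: AssertionError
print(oo.stdout.strip())  # "None": docstring removed by -OO
```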
How to precompile with optimization level 2:

```
$ python -m compileall -o 2 -b .
$ find . -name "*.py" -delete
```
Caution
Test your code thoroughly, as optimization levels may introduce subtle bugs if your business logic relies on assert statements or docstrings.
Reducing Memory Overhead with PYTHONNODEBUGRANGES=1
In Python 3.11+, you can use the `PYTHONNODEBUGRANGES=1` environment variable to disable the inclusion of column numbers in tracebacks. This reduces memory overhead but sacrifices the ability to pinpoint the exact location of exceptions on a given line of code.
Example traceback with debug ranges:

```
Traceback (most recent call last):
  File "hello.py", line 1, in <module>
    print(f"Hello world! {1/0}")
                          ~^~
ZeroDivisionError: division by zero
```
Example traceback without debug ranges:

```
Traceback (most recent call last):
  File "hello.py", line 1, in <module>
    print(f"Hello world! {1/0}")
ZeroDivisionError: division by zero
```
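One way to see the size effect is to compare the length of a marshalled code object compiled with and without the variable. This is a sketch requiring Python 3.11+ (on older interpreters the variable is ignored and both sizes match); the sample source is arbitrary:

```python
import os
import subprocess
import sys

# Child program: compile some source and report the marshalled size,
# which includes the code object's position (debug-range) table.
SNIPPET = (
    "import marshal\n"
    "src = 'def add(a, b):\\n    return a + b\\n' * 50\n"
    "code = compile(src, 'demo.py', 'exec')\n"
    "print(len(marshal.dumps(code)))\n"
)

def compiled_size(extra_env):
    env = dict(os.environ, **extra_env)
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         env=env, capture_output=True, text=True, check=True)
    return int(out.stdout)

with_ranges = compiled_size({})
without_ranges = compiled_size({"PYTHONNODEBUGRANGES": "1"})
print(with_ranges, without_ranges)  # the second should be smaller on 3.11+
```

The same shrinkage shows up in `.pyc` files compiled under the variable, which is where the package-size benefit below comes from.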
Benefits
- Reduced memory overhead
- Reduced package sizes
How to precompile with optimization level 2 and no debug ranges:

```
$ PYTHONNODEBUGRANGES=1 python -m compileall -o 2 -b .
$ find . -name "*.py" -delete
```
Summary
By precompiling dependencies, using optimization levels, and disabling debug ranges, you can significantly reduce cold start times in your AWS Lambda Python functions. These techniques can lead to over 20% faster startup times, allowing your functions to respond more quickly to events. Try these optimizations in your own functions and see the performance improvements for yourself!