Something I really like about FastAPI and Typer, both from the same author, Sebastian Ramirez, AKA Tiangolo, is the super-convenient dependency injection. In a recent project, I wasn't able to use FastAPI, so I decided to roll my own. In this post, I'll describe how. But first, let me show you how nice Tiangolo's libraries are to use:
Typer
Typer is for building Python command-line tools. Often you want to be able to call Python scripts from the command line with extra arguments to do all manner of automation tasks. Python has a low-level way of fetching the values passed from the command line: sys.argv, which contains a list of the arguments:
import sys
print(f"Your number was {sys.argv[1]}")

Running this as python sys_test.py 123 would print "Your number was 123".
But this gets a bit complex when you have more than one argument, particularly when some of them are optional. It also doesn't do anything to help you produce errors when expected arguments aren't provided.
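To see why this gets messy, here's a rough sketch of hand-rolling a required argument plus an optional --bar flag on top of sys.argv (the script name and flag here are made up for illustration):

```python
import sys

# Hand-parse one required positional argument and one optional --bar flag.
# This is roughly the boilerplate that argparse saves you from writing.
def parse_args(argv):
    if len(argv) < 2:
        raise SystemExit("usage: script.py FOO [--bar BAR]")
    foo = argv[1]
    bar = 1  # default value for the optional flag
    if "--bar" in argv:
        idx = argv.index("--bar")
        try:
            bar = int(argv[idx + 1])
        except (IndexError, ValueError):
            raise SystemExit("--bar expects an integer")
    return foo, bar

foo, bar = parse_args(sys.argv if len(sys.argv) > 1 else ["script.py", "hello", "--bar", "5"])
print(foo, bar)
```

And that's with only two arguments; every extra flag multiplies the bookkeeping.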
Using argparse
Python does have a solution to this, though: it comes with a higher-level library for building a command-line interface (allowing for optional arguments, help text, etc.), the argparse module:
from argparse import ArgumentParser

def main(foo: str, bar: int):
    print(foo)
    print(bar)

if __name__ == "__main__":
    parser = ArgumentParser(description="Function that does things and stuff")
    parser.add_argument("foo", type=str, help="Required Foo")
    parser.add_argument("--bar", type=int, help="Optional Bar", default=1)
    args = parser.parse_args()
    main(args.foo, args.bar)
As you can see, argparse works by having you write some code to declare all of the arguments up front, which also gives you an opportunity to define some documentation for them. Running the script with the python argparser_example.py --help flag looks like this:
usage: argparser_example.py [-h] [--bar BAR] foo

Function that does things and stuff

positional arguments:
  foo         Required Foo

optional arguments:
  -h, --help  show this help message and exit
  --bar BAR   Optional Bar
But there are some drawbacks to this. Firstly, it's quite a lot of code just to accept a couple of arguments; I can never remember how to write these with argparse, particularly how to deal with optional and default arguments. Secondly, you need to define all your arguments before calling the business function (main() in this case) and pass them on. While this is fine for some use cases, in others it's a bit annoying to maintain.
Fortunately, there are several other libraries in Python's package ecosystem, some of which are more convenient to use or more powerful. One such library I previously used is Google's Fire; another, which I use now, is Tiangolo's Typer.
Using Typer
The Typer equivalent of the above code is substantially shorter:
import typer

def main(
    foo: str = typer.Argument(..., help="Required Foo"),
    bar: int = typer.Argument(1, help="Optional Bar"),
):
    """Function that does things and stuff"""
    print(foo)
    print(bar)

if __name__ == "__main__":
    typer.run(main)
Now, instead of the argument parsing living by itself, where it's easy to forget something and end up with mismatched arguments, types, or help text, the CLI arguments are introspected from the function's parameters, including their type hints, and injected by Typer! This is the kind of injection we want to replicate later. The benefit is that the CLI arguments and documentation live alongside the function's own arguments, making them much easier and neater to maintain.
Running python typer_example.py --help now generates the following:
Usage: typer_example.py [OPTIONS] FOO [BAR]

  Function that does things and stuff

Arguments:
  FOO    Required Foo  [required]
  [BAR]  Optional Bar  [default: 1]

Options:
  --install-completion  Install completion for the current shell.
  --show-completion     Show completion for the current shell, to copy it or
                        customize the installation.
  --help                Show this message and exit.
It's useful to note that you can run --install-completion to install shell autocompletion, one of the many things Typer provides in addition to basic CLI argument handling.
It's worth noting at this point that Google's Fire library works in a similar way.
FastAPI
Another example of this kind of dependency injection at work is FastAPI, a high-level HTTP server framework, which leverages a package called Pydantic to automatically decode an API request's parameters, payload, and path variables, saving you from a lot of boilerplate.
To compare, we can look at Flask, another popular HTTP server framework that is often the starting point for new Python students.
Using Flask
A simple Flask app to return a simple payload might look like this:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/foo/<foo>", methods=["POST"])
def handle_foo(foo):
    payload = request.json
    bar = payload.get("bar", 1)
    return jsonify(foo=foo, bar=bar)
There are a few things going on here. First, foo is taken from the path, so if I were to hit the server at /foo/hello, then the foo argument would be hello. This is nice, as Flask handles it for us. Next, it expects a JSON payload, and it will try to read the "bar" key from it, defaulting to the value 1. So basically, I want the payload to look like:
{
    "bar": 123
}
But there's not much validation here. If I provide a string instead of an int, there wouldn't be an error. If I provided extra values, there'd be no error either. While this is a trivial case to check, imagine you had tens of payload values (potentially even nested structures!) to validate, and you didn't want to write that out as a big chain of if statements.
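To make that concrete, here's a sketch of the kind of hand-written validation chain that this approach forces on you (the extra-fields check and error wording are just illustrative):

```python
# A hand-rolled validation sketch -- the boilerplate that Pydantic replaces.
# `payload` stands in for request.json.
def validate_payload(payload):
    errors = []
    if not isinstance(payload, dict):
        return ["payload must be a JSON object"]
    bar = payload.get("bar", 1)
    if not isinstance(bar, int):
        errors.append("'bar' must be an integer")
    extra = set(payload) - {"bar"}
    if extra:
        errors.append(f"unexpected fields: {sorted(extra)}")
    return errors

print(validate_payload({"bar": "oops"}))     # ["'bar' must be an integer"]
print(validate_payload({"bar": 2, "x": 1}))  # ["unexpected fields: ['x']"]
```

One field and it's already a dozen lines; with nested structures this grows fast, and every error message has to be written by hand.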
Using FastAPI
FastAPI's approach is to define the data models up front with Pydantic and use these models in the arguments of the function. Not only does this allow FastAPI to automatically decode the values and provide them to you, but it also automatically generates documentation!
from pydantic import BaseModel, Field
from fastapi import FastAPI

app = FastAPI()

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

class RetModel(BaseModel):
    foo: str = Field(..., title="Result Foo")
    bar: int = Field(..., title="Result Bar")

@app.post("/foo/{foo}", response_model=RetModel)
async def handle_foo(foo: str, payload: Payload):
    return RetModel(foo=foo, bar=payload.bar)
Here there's more work pre-defining a bunch of models, but this has several benefits:
- Pydantic will be used to parse the data. If the incoming data is the wrong format, an error message will be generated that explains exactly what is wrong with it, saving you from having to write a big chunk of data-validation code.
- It's automatically injected, so you don't have to handle it.
- Documentation can be automatically generated for you!
Even this tiny bit of code generates fully-fledged API documentation, available when you run the server, complete with descriptions and the correct variable types.
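As a quick illustration of the first point, here's a sketch of the error handling you get for free (this uses the Pydantic v1-style parse_raw API; the exact message wording depends on your Pydantic version):

```python
from pydantic import BaseModel, Field, ValidationError

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

# Bad input: a string where an int is expected.
try:
    Payload.parse_raw('{"bar": "not a number"}')
except ValidationError as e:
    print(e)  # points at "bar" and explains it isn't a valid integer

# Good input parses straight into a typed object.
print(Payload.parse_raw('{"bar": 2}').bar)
```

None of that error reporting had to be written by hand; the model definition is the validation.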
Rolling your own
So we've seen Tiangolo's excellent work with dependency injection, which makes use of Python's introspection capabilities: the code looks at your function's arguments at run time, figures out what the function wants, and injects it.
As you can see from not only the FastAPI and Typer examples, but also Flask, the function is wrapped with a decorator, which allows metaprogramming, something Python is fairly strong at. This decorator is what allows us to do the injection, and it's where we'll put the introspection code.
In this example, we're going to retrofit Flask with FastAPI-style injection of payload models. This isn't actually necessary in real life, because there are many libraries out there that already do this or something similar, but it's here for illustrative purposes.
To do this, we first start with our previous Flask example:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/foo/<foo>", methods=["POST"])
def handle_foo(foo):
    payload = request.json
    bar = payload.get("bar", 1)
    return jsonify(foo=foo, bar=bar)
Next, we'll start creating a decorator that will do our injection job. A blank decorator that doesn't do anything yet looks like this:
from functools import wraps

from flask import Flask, request, jsonify

app = Flask(__name__)

def inject_pydantic_parse(func):
    @wraps(func)  # preserve func's name so Flask's endpoint naming stays happy
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@app.route("/foo/<foo>", methods=["POST"])
@inject_pydantic_parse
def handle_foo(foo):
    payload = request.json
    bar = payload.get("bar", 1)
    return jsonify(foo=foo, bar=bar)
For now, this does nothing at all; it passes straight through. But now that the function is being decorated by wrapper, we can start doing stuff to it. Our first job is to detect any Pydantic models in the function's arguments. We can do this using get_type_hints() from the typing module, which lets us introspect our function (passed in as func):
from functools import wraps
from typing import get_type_hints

from flask import Flask, request, jsonify
from pydantic import BaseModel, Field

app = Flask(__name__)

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

def inject_pydantic_parse(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        for arg_name, arg_type in get_type_hints(func).items():
            parse_raw = getattr(arg_type, "parse_raw", None)
            if callable(parse_raw):
                kwargs[arg_name] = parse_raw(request.data)
        return func(*args, **kwargs)
    return wrapper

@app.route("/foo/<foo>", methods=["POST"])
@inject_pydantic_parse
def handle_foo(foo, payload: Payload):
    return jsonify(foo=foo, bar=payload.bar)
Here, the loop goes over all the type hints in the function, and for each of them it tests whether there is a callable parse_raw attribute; if so, it calls it with request.data and inserts the result into the keyword arguments that will be used to call the function later. This is the "injection"!
for arg_name, arg_type in get_type_hints(func).items():
    parse_raw = getattr(arg_type, "parse_raw", None)
    if callable(parse_raw):
        kwargs[arg_name] = parse_raw(request.data)
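If you want to see what get_type_hints() actually hands that loop, here's a small self-contained sketch. FakeModel is hypothetical; it just mimics the parse_raw interface the loop relies on (duck typing, rather than checking for BaseModel explicitly):

```python
import json
from typing import get_type_hints

# A stand-in model exposing the same parse_raw classmethod the loop looks for.
class FakeModel:
    def __init__(self, bar):
        self.bar = bar

    @classmethod
    def parse_raw(cls, data):
        return cls(**json.loads(data))

def handle_foo(foo, payload: FakeModel):
    ...

# Only annotated parameters appear, which is why plain `foo` is left alone:
print(get_type_hints(handle_foo))  # {'payload': <class '__main__.FakeModel'>}
print(callable(getattr(FakeModel, "parse_raw", None)))  # True
```

This duck-typed check is also why the decorator works with any class that happens to provide parse_raw, not just Pydantic models.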
Finally, since this wrapper can handle the return value as well, we can deal with that in a similar way and collect the return value as a Pydantic model. To do this, we need an extra layer of wrapping, due to the way Python handles decorator arguments. There are actually neater ways to do this with some built-in libraries, but I'll show it here without:
from functools import wraps
from typing import get_type_hints

from flask import Flask, request
from pydantic import BaseModel, Field

app = Flask(__name__)

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

class RetModel(BaseModel):
    foo: str = Field(..., title="Result Foo")
    bar: int = Field(..., title="Result Bar")

def inject_pydantic_parse(response_model):
    def wrap(func):
        @wraps(func)
        def wrapped(*args, **kwargs):
            for arg_name, arg_type in get_type_hints(func).items():
                parse_raw = getattr(arg_type, "parse_raw", None)
                if callable(parse_raw):
                    kwargs[arg_name] = parse_raw(request.data)
            retval = func(*args, **kwargs)
            if isinstance(retval, response_model):
                return retval.dict()
            return retval
        return wrapped
    return wrap

@app.route("/foo/<foo>", methods=["POST"])
@inject_pydantic_parse(response_model=RetModel)
def handle_foo(foo, payload: Payload):
    return RetModel(foo=foo, bar=payload.bar)
Now, by decorating the handle_foo() function with @inject_pydantic_parse(response_model=RetModel), the wrapper will check whether the return value from handle_foo was indeed the desired response_model, and if so decode it (in this case using the dict() method that Pydantic models have; the rest of Flask handles turning the dict into JSON for us). Checking for the nominated response model is safe, as it doesn't interfere with other possible return values from the function.
While this seems like a lot of extra code, the inject_pydantic_parse() function can be tidied away in a utility module, and now you can decorate all of your Flask endpoints FastAPI-style...
...though, not really. This is a super-simple example and doesn't handle a lot of edge cases. To use this for real, you'd need to add more error handling, and perhaps combine both decorators so you only need one. You might as well just switch to FastAPI if you have the option! (I did not, as I was using Google Cloud Functions, so this approach helped me a great deal.)
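To check that the decorator actually behaves, the final example can be driven in-process with Flask's built-in test client; this is just the listing from above (with functools.wraps added to keep Flask's endpoint naming happy) plus a couple of lines at the bottom:

```python
from functools import wraps
from typing import get_type_hints

from flask import Flask, request
from pydantic import BaseModel, Field

app = Flask(__name__)

class Payload(BaseModel):
    bar: int = Field(1, title="Optional Bar")

class RetModel(BaseModel):
    foo: str = Field(..., title="Result Foo")
    bar: int = Field(..., title="Result Bar")

def inject_pydantic_parse(response_model):
    def wrap(func):
        @wraps(func)
        def wrapped(*args, **kwargs):
            # Inject any argument whose type hint offers parse_raw
            for arg_name, arg_type in get_type_hints(func).items():
                parse_raw = getattr(arg_type, "parse_raw", None)
                if callable(parse_raw):
                    kwargs[arg_name] = parse_raw(request.data)
            retval = func(*args, **kwargs)
            # Decode the nominated response model back to a plain dict
            if isinstance(retval, response_model):
                return retval.dict()
            return retval
        return wrapped
    return wrap

@app.route("/foo/<foo>", methods=["POST"])
@inject_pydantic_parse(response_model=RetModel)
def handle_foo(foo, payload: Payload):
    return RetModel(foo=foo, bar=payload.bar)

# Exercise the endpoint without running a real server:
with app.test_client() as client:
    resp = client.post("/foo/hello", json={"bar": 123})
    print(resp.get_json())  # the decoded RetModel: foo='hello', bar=123
```

Returning a dict from a Flask view (which is what retval.dict() produces) works on Flask 1.1+, where it's serialised to JSON automatically.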