Getting started with Flask and Cerberus - Building a Chess Analysis App (Part 2)

In our last post, we learned about chess analysis. We then created a Python function, analyze_position, that takes in a chess position and outputs a detailed analysis.
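
As a refresher, here's roughly the interface we'll be calling in this post. This is only a sketch of the signature and return shape; the real implementation lives in the previous post.

# Sketch only: see the previous post for the actual implementation.
# analyze_position searches a position (given as a FEN string) and returns
# a list of dicts like {"centipawn_score": ..., "mate_score": ..., "pv": [...]}
def analyze_position(fen, num_moves_to_return, depth_limit, time_limit):
    ...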

In this post, we'll create an API around that function, so our users can submit positions for us to analyze. We'll use Flask as our web framework and Cerberus to validate the input.

Setting up Flask

Recall that last time we set up our project like this:

$ mkdir chess-api
$ python3 -m venv venv
$ source venv/bin/activate
(venv) $ pip install python-chess

We're using Flask as our web framework because it's very lightweight. Let's install it next.

(venv) $ pip install Flask

The quickstart guide shows how easy it is to set up a route.

# app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    return "<p>Hello, World!</p>"

We can run it and test it with curl:

# flask run automatically looks for app.py
(venv) $ flask run

# in a different terminal; Flask's default port is 5000
$ curl localhost:5000
<p>Hello, World!</p>

You can also return a dictionary, which Flask serializes to JSON.

# app.py

@app.route("/json")
def hello_world_json():
    return {"hello": "world"}

$ curl -v localhost:5000/json
...
< HTTP/1.0 200 OK
< Content-Type: application/json
< Content-Length: 18
...
{"hello":"world"}
...

Validating the JSON body in Flask with Cerberus

We want to make sure that only valid requests are allowed. Cerberus is a lightweight validation library for Python. We can define our expected schema and then make sure requests conform to it.

import chess

# A FEN is a standard way of representing a chess board.
# Make sure that both the FEN string is valid
#   and the resulting board is valid.
def is_valid_fen(field, value, error):
    try:
        board = chess.Board(fen=value)
        if not board.is_valid():
            error(field, "Invalid FEN")
    except ValueError:
        error(field, "Invalid FEN")


SCHEMA = {
    "fen": {"type": "string", "required": True, "check_with": is_valid_fen},
    "num_moves_to_return": {"type": "integer", "min": 1, "max": 10, "default": 1},
    "time_limit": {"type": "number", "min": 0.1, "max": 3600, "default": 60},
    "depth_limit": {"type": "integer", "min": 5, "max": 50}
}

The schema itself is pretty straightforward. We have 4 fields, 3 of which are optional integers/numbers. For the fen field, we want to make sure it's both a string and a valid FEN, so we need to define a custom validation function.
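
For example, a request body that conforms to this schema might look like this (the values are just illustrative):

# An illustrative request body that passes validation against SCHEMA
example_request = {
    "fen": "8/8/6P1/4R3/8/6k1/2r5/6K1 b - - 0 1",  # required, checked with is_valid_fen
    "num_moves_to_return": 3,  # optional, defaults to 1
    "time_limit": 30,          # optional, in seconds, defaults to 60
    "depth_limit": 20,         # optional, no default
}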

When we are ready, we can check that a dictionary is valid like this:

v = Validator(SCHEMA)
is_valid = v.validate(some_json_request)

If it isn't valid, v.errors contains exactly what's wrong with it. v.normalized(some_json_request) returns a normalized copy of its input, which we'll use to fill in defaults for the optional fields.
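
Here's a quick sketch of both in action (run in a Python shell, assuming the SCHEMA and is_valid_fen from above are defined):

from cerberus import Validator

v = Validator(SCHEMA)

# Missing the required "fen" field: validation fails and v.errors explains why
print(v.validate({"num_moves_to_return": 3}))  # False
print(v.errors)                                # {'fen': ['required field']}

# A valid document gets its optional defaults filled in by normalized()
document = {"fen": "8/8/6P1/4R3/8/6k1/2r5/6K1 b - - 0 1"}
print(v.normalized(document))
# {'fen': '8/8/6P1/4R3/8/6k1/2r5/6K1 b - - 0 1', 'num_moves_to_return': 1, 'time_limit': 60}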

Let's put this all together in a new file, parser.py:

from cerberus import Validator
from flask import abort, jsonify, make_response

# ... schema from above

def parse_request(json_body):
    v = Validator(SCHEMA)

    # If invalid, exit out with a 400
    if not v.validate(json_body):
        # abort exits early with an HTTP response
        abort(make_response(jsonify(v.errors), 400))

    # Fill in defaults
    return v.normalized(json_body)

We can hook this up to a new route in our app.py. For now, let's return the parsed request.

from flask import request
from parser import parse_request

# ...

@app.route("/analyze", methods=["POST"])
def analyze():
    return parse_request(request.get_json())

After running our new app, we can test it with curl. Here's an example where we only specify the FEN and the other fields get their defaults filled in.

$ curl -X POST \
       -H "Content-Type: application/json" \
       -d '{"fen": "8/8/6P1/4R3/8/6k1/2r5/6K1 b - - 0 1"}' localhost:5000/analyze
{"fen":"8/8/6P1/4R3/8/6k1/2r5/6K1 b - - 0 1","num_moves_to_return":1,"time_limit":60}

And here's what happens with an invalid FEN.

$ curl -v -X POST \
       -H "Content-Type: application/json" \
       -d '{"fen": "nonsense"}' localhost:5000/analyze
...
< HTTP/1.0 400 BAD REQUEST
...
{"fen": ["Invalid FEN"]}

Analyzing each request synchronously

We already have our analyze_position from the last post (just import it into app.py from wherever you defined it)... what if we use it directly in our route?

@app.route("/analyze", methods=["POST"])
def analyze():
    parsed_request = parse_request(request.get_json())
    analysis = analyze_position(fen=parsed_request.get("fen"),
                                num_moves_to_return=parsed_request.get("num_moves_to_return"),
                                depth_limit=parsed_request.get("depth_limit"),
                                time_limit=parsed_request.get("time_limit"))
    return {"analysis": analysis}

Let's test it with our mate in 2 example from before:

$ curl -X POST \
       -H "Content-Type: application/json" \
       -d '{"fen": "8/8/6P1/4R3/8/6k1/2r5/6K1 b - - 0 1"}' localhost:5000/analyze
{"analysis":[{"centipawn_score":null,"mate_score":-2,"pv":["c2c1","e5e1","c1e1"]}]}

It works! It's not a terrible v0, but it has an obvious flaw: analyzing a position can be a very CPU-intensive task. Someone could request an hour-long, depth-50 search, and our web server will spend a lot of time working on it. Not to mention that the request will likely time out, meaning all the work we did is wasted. If a bunch of people request similar analyses, we'll quickly get overloaded.

In our next post, we'll convert this API into an asynchronous API to solve these issues. See you then!
