Carlos Armando Marcano Vargas

Originally published at carlosmv.hashnode.dev

Dockerizing a Robyn App with Postgres as Database | Compose

In this article, we are going to learn how to build a Robyn app that performs CRUD operations with Postgres as its database, Dockerize it, and run everything with Compose.

Robyn

Robyn is a fast async Python web framework coupled with a web server written in Rust.

Psycopg2

Psycopg is the most popular PostgreSQL database adapter for the Python programming language. Its main features are the complete implementation of the Python DB API 2.0 specification and the thread safety (several threads can share the same connection). It was designed for heavily multi-threaded applications that create and destroy lots of cursors and make a large number of concurrent INSERTs or UPDATEs.
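
As a quick illustration (not part of the project code), a minimal psycopg2 session looks like the sketch below; the credentials are placeholders and a locally running Postgres instance is assumed:

import psycopg2

# Connect to a local Postgres instance (placeholder credentials)
conn = psycopg2.connect(dbname="postgres", user="postgres", password="secret")
cur = conn.cursor()
cur.execute("SELECT version();")  # any simple query works as a connectivity check
print(cur.fetchone()[0])
cur.close()
conn.close()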

What is Compose?

According to the Docker documentation:

Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we can create a YAML file to define the services and with a single command, can spin everything up or tear it all down.

Compose is the tool that will allow our server to communicate with the Postgres instance. It will create a separate container for the web app and for the database instance, and the web app will be able to access the database.

Requirements

  • Python knowledge

  • Basic SQL knowledge

  • Docker installed

  • Python installed

Project Structure

robyn-app/
    app.py
    controllers.py
    helpers.py
    init_db.py
    requirements.txt
    dockerfile
    docker-compose.yml


Building the app

To build the app, we need to create a virtual environment and install Robyn.

We create a folder for our project and, inside it, run the following commands in the console to create and activate a virtual environment:

#Windows users
py -m venv venv
cd venv/Scripts
./activate

#Linux
python3 -m venv venv
source venv/bin/activate


Installation

pip install robyn psycopg2-binary python-dotenv


After Robyn and the other packages are installed, we run the pip freeze > requirements.txt command to create a requirements.txt file.

requirements.txt

The requirements.txt file should look like this:

dill==0.3.6
multiprocess==0.70.14
nestd==0.3.1
psycopg2-binary==2.9.6
python-dotenv==1.0.0
robyn==0.33.0
watchdog==2.2.1


Creating a table

First, we have to create a database. Connected to Postgres with psql (or another client), we run the following command:

CREATE DATABASE robyn_db;


init_db.py

import os
import psycopg2
from dotenv import load_dotenv

load_dotenv()
USER = os.getenv('USER')
PASSWORD = os.getenv('PASSWORD')

def get_db_connection():
    conn = psycopg2.connect(
        dbname = "robyn_db",
        user = "postgres",
        password = PASSWORD
    )
    return conn

conn = get_db_connection()
cur = conn.cursor()

cur.execute('DROP TABLE IF EXISTS books;')
cur.execute('CREATE TABLE books (id serial PRIMARY KEY,'
                                 'title varchar (150) NOT NULL,'
                                 'author varchar (50) NOT NULL,'
                                 'date_added date DEFAULT CURRENT_TIMESTAMP);'
                                 )


Inside the init_db.py file, we load our environment variables to get access to Postgres, initialize a cursor to perform database operations, and create a table named books.
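
The .env file itself is not shown in the article; a hypothetical version (placeholder values) and a quick check that python-dotenv picks it up could look like this:

# Hypothetical .env file next to init_db.py (placeholder values, not from the article):
#   USER=postgres
#   PASSWORD=your_postgres_password
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current directory
print("PASSWORD loaded:", os.getenv("PASSWORD") is not None)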

Inserting data into the table

init_db.py

cur.execute('INSERT INTO books (title, author)'
            'VALUES (%s, %s)',
            ('A Tale of Two Cities',
             'Charles Dickens')
            )

cur.execute('INSERT INTO books (title, author)'
            'VALUES (%s, %s)',
            ('Anna Karenina',
             'Leo Tolstoy')
            )

conn.commit()

cur.close()
conn.close()


The code above runs every time init_db.py is executed, so these rows are inserted into a freshly recreated table every time we start the server.
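
If we preferred not to wipe the table on every start, a minimal alternative sketch (not used in this article, reusing the conn and cur objects from init_db.py) could create the table only if it is missing and seed it only when it is empty:

cur.execute('CREATE TABLE IF NOT EXISTS books (id serial PRIMARY KEY,'
            'title varchar (150) NOT NULL,'
            'author varchar (50) NOT NULL,'
            'date_added date DEFAULT CURRENT_TIMESTAMP);')
cur.execute('SELECT COUNT(*) FROM books;')
if cur.fetchone()[0] == 0:  # only seed an empty table
    cur.execute('INSERT INTO books (title, author) VALUES (%s, %s)',
                ('A Tale of Two Cities', 'Charles Dickens'))
conn.commit()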

controllers.py

from init_db import get_db_connection

def all_books():
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('SELECT * FROM books;')
    books = cur.fetchall()
    cur.close()
    conn.close()

    return books


This function retrieves all the rows in the database.

app.py

We create a new file app.py to write our endpoints. We will start writing an endpoint to retrieve all the rows in the database.

from robyn import Robyn
from controllers import all_books

app = Robyn( __file__ )

@app.get("/books")
async def books():
    books = all_books()
    return {"status_code":200, "body": books, "type": "json"}

app.start(port=8000, url="0.0.0.0")


The all_books() function retrieves all the rows in the database, but it returns them as a list of tuples. We need the function to return a list of JSON objects instead.

[(1, 'A Tale of Two Cities', 'Charles Dickens', datetime.date(2023, 2, 22)), (2, 'Anna Karenina', 'Leo Tolstoy', datetime.date(2023, 2, 22))]


We have to create a file with helper functions that transform the tuples into dictionaries, so the endpoints can return the data as JSON.

helpers.py

import collections

def to_dict(psycopg_tuple:tuple):
    book_dict = collections.OrderedDict()
    book_dict['id'] = psycopg_tuple[0]
    book_dict['title'] = psycopg_tuple[1]
    book_dict['author'] = psycopg_tuple[2]
    book_dict['datetime'] = psycopg_tuple[3].strftime("%m/%d/%Y")
    return book_dict

def list_dict(rows:list):

    row_list = []
    for row in rows:
        book_dict = to_dict(row)
        row_list.append(book_dict)

    return row_list


The to_dict() function takes a tuple as a parameter and transforms it into an ordered dictionary, so the position of the key-value pairs will not change.

The list_dict() function takes a list as a parameter. We use it to convert a list of tuples into a list of dictionaries.
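
As a quick sanity check, we can feed the helpers the same tuples we saw earlier from a Python shell; the sample data below mirrors the rows inserted by init_db.py:

import datetime
from helpers import to_dict, list_dict

rows = [
    (1, 'A Tale of Two Cities', 'Charles Dickens', datetime.date(2023, 2, 22)),
    (2, 'Anna Karenina', 'Leo Tolstoy', datetime.date(2023, 2, 22)),
]
print(to_dict(rows[0]))  # OrderedDict([('id', 1), ('title', 'A Tale of Two Cities'), ...])
print(list_dict(rows))   # a list with two ordered dictionaries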

Controllers

In controllers.py we are going to write all the functions that perform the CRUD operations. At the top of the file, we import the database connection, the helpers, and json.

All the records

from init_db import get_db_connection
from helpers import to_dict, list_dict
import json

def all_books():
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('SELECT * FROM books;')
    books = list_dict(cur.fetchall())
    cur.close()
    conn.close()

    return books


The all_books() function retrieves all the records in the database.

Creating a record

def new_book(title:str, author:str):
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('INSERT INTO books (title, author)'
                    'VALUES (%s, %s) RETURNING *;',
                    (title, author))
    book = cur.fetchone()[:]
    book_dict = to_dict(book)
    conn.commit()
    cur.close()
    conn.close()

    return json.dumps(book_dict)


The new_book() function takes title and author as parameters and inserts the values into the database. It then retrieves the row that was just added, converts it to a dictionary, and returns it as JSON.
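
For example, called from a Python shell (assuming the database is reachable and the books table exists), the returned JSON string can be parsed right back:

import json
from controllers import new_book

created = new_book('Moby-Dick', 'Herman Melville')  # inserts a new row
print(json.loads(created)['title'])  # 'Moby-Dick'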

Retrieving by ID

def book_by_id(id:int):
    conn = get_db_connection()
    cur = conn.cursor()

    try:
        cur.execute('SELECT * FROM books WHERE id=%s', (id,))
        book = cur.fetchone()
        book_dict = to_dict(book)

        cur.close()
        conn.close()
        return json.dumps(book_dict)
    except:
        return None


The book_by_id() function takes an id as a parameter. With this function, we retrieve a row by its ID and return it as JSON. If there is no row with the given ID, the function returns None.
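
A quick usage sketch (assuming row 1 exists and a very large ID does not):

from controllers import book_by_id

print(book_by_id(1))     # JSON string for the first book
print(book_by_id(9999))  # None, since no row has this ID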

Updating a record

def update_book(title:str, author:str, id:int):
    conn = get_db_connection()
    cur = conn.cursor()
    cur.execute('UPDATE books SET title = %s, author=%s WHERE id = %s RETURNING *;', (title, author, id))
    book = cur.fetchone()[:]
    book_dict = to_dict(book)

    conn.commit()
    cur.close()
    conn.close()

    return json.dumps(book_dict)


We use the update_book() controller to update the values of a row. The function returns the updated row as JSON.

Deleting a record

def delete_book(id:int):
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute("DELETE FROM books WHERE id = %s", (id))
    conn.commit()
    conn.close()

    return "Book deleted"


We pass a row's ID to delete_book() to delete that row.
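
A short usage sketch for both controllers (it assumes a row with ID 2 exists):

from controllers import update_book, delete_book

print(update_book('Anna Karenina (2nd ed.)', 'Leo Tolstoy', 2))  # updated row as JSON
print(delete_book(2))                                            # 'Book deleted'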

Endpoints

In the app.py file, we import all the functions from controllers.py and write all the endpoints.

POST handler

from robyn import Robyn, status_codes
from controllers import all_books, new_book, book_by_id, delete_book, update_book
import json
from robyn.robyn import Response

app.post("/book")
async def create_book(request):

    body = request.body

    json_body = json.loads(body)

    try:
        book = new_book(json_body['title'], json_body['author'])
        return Response(status_code = status_codes.HTTP_200_OK, headers = {}, body = book)
    except:
        return Response(status_code = status_codes.HTTP_500_INTERNAL_SERVER_ERROR, headers = {}, body = "Internal Server Error")


GET handlers

@app.get("/book")
async def books(request):

    books = all_books()

    return Response(status_code = status_codes.HTTP_200_OK, headers= {}, body = books)


@app.get("/book/:id")
async def get_book(request):
    id = request.path_params["id"]
    book = book_by_id(id)

    try:
        if book == None:
            return Response(status_code = status_codes.HTTP_404_NOT_FOUND, headers = {}, body= "Book not Found")
        else:
            return Response(status_code = status_codes.HTTP_200_OK, headers = {}, body = book)

    except:
        return Response(status_code = status_codes.HTTP_500_INTERNAL_SERVER_ERROR, headers = {}, body = "Internal Server Error")


PUT handler

@app.put("/book/:id")
async def update(request):
    id = request.path_params["id"]

    body = request.body
    json_body = json.loads(body)

    title = json_body['title']
    author = json_body['author']

    book_id = book_by_id(id)

    if book_id == None:
        return Response(status_code = status_codes.HTTP_404_NOT_FOUND, headers = {}, body = "Book not Found")
    else:
        try: 
            book = update_book(title, author, id)
            return Response(status_code = status_codes.HTTP_200_OK, headers = {}, body = book)
        except:
            return Response(status_code = status_codes.HTTP_500_INTERNAL_SERVER_ERROR, headers = {}, body = "Internal Server Error")


Delete handler

@app.delete("/book/:id")
async def delete(request):
    id = request.path_params["id"]

    book_id = book_by_id(id)

    if book_id == None:
        return Response(status_code = status_codes.HTTP_404_NOT_FOUND, headers = {}, body = "Book not Found")
    else:
        try: 
            delete_book(id)
            return Response(status_code = status_codes.HTTP_200_OK, headers = {}, body = "Book deleted")
        except:
            return Response(status_code = status_codes.HTTP_500_INTERNAL_SERVER_ERROR, headers = {}, body = "Internal Server Error")


Complete app.py file.

from robyn import Robyn, StatusCodes
from controllers import all_books, new_book, book_by_id, delete_book, update_book
import json
from robyn.robyn import Response

app = Robyn( __file__ )

@app.post("/book")
async def create_book(request):

    body = request.body

    json_body = json.loads(body)

    try:
        book = new_book(json_body['title'], json_body['author'])
        return Response(status_code = StatusCodes.HTTP_200_OK.value, headers = {}, body = book)
    except:
        return Response(status_code = StatusCodes.HTTP_500_INTERNAL_SERVER_ERROR.value, headers = {}, body = "Internal Server Error")

@app.get("/book")
async def books(request):

    books = all_books()

    return Response(status_code = StatusCodes.HTTP_200_OK.value, headers= {}, body = books)

@app.get("/book/:id")
async def get_book(request):
    id = request.path_params["id"]
    book = book_by_id(id)

    try:
        if book == None:
            return Response(status_code = StatusCodes.HTTP_404_NOT_FOUND.value, headers = {}, body= "Book not Found")
        else:
            return Response(status_code = StatusCodes.HTTP_200_OK.value, headers = {}, body = book)

    except:
        return Response(status_code = StatusCodes.HTTP_500_INTERNAL_SERVER_ERROR.value, headers = {}, body = "Internal Server Error")

@app.put("/book/:id")
async def update(request):
    id = request.path_params["id"]

    body = request.body
    json_body = json.loads(body)

    title = json_body['title']
    author = json_body['author']

    book_id = book_by_id(id)

    if book_id == None:
        return Response(status_code = StatusCodes.HTTP_404_NOT_FOUND.value, headers = {}, body = "Book not Found")
    else:
        try: 
            book = update_book(title, author, id)
            return Response(status_code = StatusCodes.HTTP_200_OK.value, headers = {}, body = book)
        except:
            return Response(status_code = StatusCodes.HTTP_500_INTERNAL_SERVER_ERROR.value, headers = {}, body = "Internal Server Error")

@app.delete("/book/:id")
async def delete(request):
    id = request.path_params["id"]

    book_id = book_by_id(id)

    if book_id == None:
        return Response(status_code = StatusCodes.HTTP_404_NOT_FOUND.value, headers = {}, body = "Book not Found")
    else:
        try: 
            delete_book(id)
            return Response(status_code = StatusCodes.HTTP_200_OK.value, headers = {}, body = "Book deleted")
        except:
            return Response(status_code = StatusCodes.HTTP_500_INTERNAL_SERVER_ERROR.value, headers = {}, body = "Internal Server Error")    

app.start(port=8000, url="0.0.0.0")


Dockerfile for the Robyn app.

FROM python:3.11

RUN mkdir /code
WORKDIR /code
RUN pip install --upgrade pip
COPY requirements.txt /code/

RUN pip install -r requirements.txt
COPY . /code/

EXPOSE 8000

CMD ["python", "app.py", "0.0.0.0:8000"]


The FROM python:3.11 line tells Docker to use the official Python 3.11 image as the base for the new image.

The RUN mkdir /code line creates a directory called /code in the new image.

The WORKDIR /code line sets the working directory of the new image to /code.

The RUN pip install --upgrade pip line updates the pip package to the latest version.

The COPY requirements.txt /code/ line copies the requirements.txt file into the /code directory of the new image.

The RUN pip install -r requirements.txt line installs the Python packages listed in the requirements.txt file.

The COPY . /code/ line copies the current directory into the /code directory of the new image.

The EXPOSE 8000 line tells Docker that the new image exposes port 8000.

The CMD ["python", "app.py", "0.0.0.0:8000"] line tells Docker to run the app.py file when a container is started from the image. The server binds to 0.0.0.0 (as set in app.start()), meaning it listens on all interfaces and can be reached from any machine on the network.

Docker Compose file

According to the documentation, the Docker Compose file is a YAML file that defines the services, networks, and volumes of a Docker application, and it allows us to define a platform-agnostic, container-based application.

The computing components of an application are defined as Services. A Service is an abstract concept implemented on platforms by running the same container image (and configuration) one or more times. Services store and share persistent data in Volumes.

We are going to use a volume later in this tutorial to store our Postgres data.

For more information about the Docker Compose file, visit its documentation.

docker-compose.yml

version: "3.8"
services:
  web:
    build: .
    command: python app.py 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      # The official postgres image requires a password; this assumes it matches
      # the PASSWORD variable used by init_db.py (for example via the .env file).
      - POSTGRES_PASSWORD=${PASSWORD}
    ports:
      - "5432:5432"
    restart: always

volumes:
  postgres_data:


Here we declare two services, a web service and a db service. This means we will have a container for the Robyn application and another for the Postgres database.

We also have to declare that our web service depends on the db service to run; the depends_on key specifies this relationship.

While containers can create, update, and delete files, those changes are lost when the container is removed, because all changes are isolated to that container. With volumes, we can change all of this.

This is what the Docker documentation says about Volumes:

Volumes provide the ability to connect specific filesystem paths of the container back to the host machine. If a directory in the container is mounted, changes in that directory are also seen on the host machine. If we mount that same directory across container restarts, we'd see the same files.

As Will Vincent explains in this article, we need to create a volume called postgres_data in our docker-compose.yml and then bind it to a dedicated directory within the container at the location /var/lib/postgresql/data/.

init_db.py

Since the database now runs in the db container, we update get_db_connection() so it connects to the host db (the name of the Compose service) and to the default postgres database created by the Postgres image:

import os
import psycopg2
from dotenv import load_dotenv

load_dotenv()
USER = os.getenv('USER')
PASSWORD = os.getenv('PASSWORD')

def get_db_connection():
    conn = psycopg2.connect(
        dbname = "postgres",
        user = "postgres",
        host = "db",
        password = PASSWORD
    )
    return conn

conn = get_db_connection()
cur = conn.cursor()

....


Now, from the directory where the docker-compose.yml file is located, we execute this command to build and run the containers:

docker-compose up -d --build


When the containers are running, we can open a web browser and navigate to http://localhost:8000/book. If we get the list of books back, it worked. We can also try the other CRUD operations.
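
To exercise the API without leaving Python, a small smoke test using only the standard library could look like this (the payload values are just examples):

import json
import urllib.request

base = "http://localhost:8000"

# Create a book
payload = json.dumps({"title": "Dracula", "author": "Bram Stoker"}).encode()
request = urllib.request.Request(f"{base}/book", data=payload, method="POST")
print(urllib.request.urlopen(request).read().decode())

# List all books
print(urllib.request.urlopen(f"{base}/book").read().decode())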

Conclusion

We have successfully Dockerized a Robyn application and connected it to a Postgres database. Docker allows us to package our application and its dependencies in a standardized unit for software development. This makes the application easy to deploy and run in different environments. Docker Compose allows us to define and run multiple Docker containers that make up our application. In this case, we ran two services - the Robyn app container and the Postgres database container.

The complete code is here
