Felipe Arcaro

Dockerizing a Flask application

TL;DR

Here's the docker-compose.yml file to dockerize a Flask application, a Mongo database and a schedule service:

version: '3.8'

services:
  app:
    container_name: my_app_${ENV}
    build:
      context: . 
    ports:
      - 8001:8001
    depends_on:
      - mongo_db
    networks:
      - my_network
    command: ["flask", "run", "--host", "0.0.0.0", "--port", "8001"]
    env_file: .env

  schedule_service:
    container_name: schedule_service_${ENV}
    build:
      context: ./schedule
    volumes:
      - ./common:/app/common
      - ./db:/app/db  
    networks:
      - my_network
    command: ["python", "-u", "scheduler.py"] 
    env_file: .env

  mongo_db:
    container_name: mongo_db_${ENV}
    image: mongo:4.4.18
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./DB-STORAGE/mongo-db-$ENV:/data/db
      - ./db/init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh
    networks:
      - my_network
    env_file: .env

networks:
  my_network:  
    driver: bridge

To fire it all up, run docker-compose up --build -d from your project's root directory.


What is Docker?

Docker is a tool that helps us package up our apps so they can run smoothly no matter where they're deployed – it's like putting our app in a virtual box that contains everything it needs to work - code, settings, and libraries. These boxes are called "containers," and they're lightweight and easy to move around.

With Docker, there's no such thing as "but it works on my machine..."

Getting started with Dockerfile and Docker Compose

To spin up a container, we can run Docker commands directly from the terminal, or we can create a file called Dockerfile, which works like a recipe for our app's container. In the Dockerfile, we specify what our app needs to run, like the programming language and dependencies.
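
For example, a bare-bones Dockerfile for a Python app could look like the sketch below (file names are placeholders; it mirrors the structure of the real Dockerfile shown later in this article):

FROM python:3.8-slim-buster

# Create app directory
WORKDIR /app

# Install dependencies first so the layer is cached
COPY requirements.txt .
RUN pip3 install -r requirements.txt

# Copy the rest of the project
COPY . .

CMD ["python", "app.py"]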

If we want to spin up more containers at once, we can use Docker Compose. Docker Compose reads a configuration file – usually called docker-compose.yml – that describes all the containers our app needs to run and how they should interact.

Why should we dockerize an application?

Dockerizing an application offers several benefits:

  • It enhances collaboration and streamlines development workflows – we can work in isolated environments without conflicts, speeding up development and making it easier to onboard new team members
  • It makes our app more portable – we can easily move it between different environments without worrying about compatibility issues, which simplifies deployment and ensures consistency across different platforms
  • It improves scalability and resource management - we can easily start/stop container instances to accommodate fluctuations in traffic (see the sketch after this list)
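
For instance, Docker Compose can run several instances of the same service. A quick sketch (note that this only works if we drop container_name and the fixed host port mapping from the service, since both must be unique per container):

# Start three instances of the Flask service in the background
docker-compose up -d --scale app=3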

Dockerizing a Flask application

Let's say we've got a Flask application with four main components:

  • network: custom Docker network
  • app: our Flask application
  • mongo_db: a Mongo database
  • schedule_service: an email schedule service

Let's break the Docker Compose code down to understand these components at the parameter level.

network

networks:
  my_network:
    driver: bridge
  • networks is used to define custom Docker networks in Docker Compose. It lets us create separate networks for our services and facilitates communication between the containers running on those networks
  • my_network is the name of the custom network being defined
  • driver specifies the network driver to be used for the custom network

A few notes on Docker networks:
- The bridge driver is the default network driver in Docker and is suitable for most use cases. It enables communication between containers on the same host and provides automatic DNS resolution for service discovery. Each custom network with the "bridge" driver is isolated from the host's default bridge network and other custom bridge networks
- When using the bridge driver, containers on the same network can reach each other using their service names as hostnames (e.g., mongo_db), as shown in the example after these notes
- Using the host network instead of bridge allows a container to share the network namespace with the host system. This means the container shares the host's network stack, and network ports used by the container are directly mapped to the host
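
As a concrete illustration, here's a minimal sketch of how the Flask app could reach the database over the custom network, assuming pymongo is installed and that MONGO_USER and MONGO_PASSWORD come from the .env file (the same names used in init-mongo.sh later in this article):

import os

from pymongo import MongoClient

# "mongo_db" resolves through Docker's embedded DNS because both
# containers are attached to the same custom bridge network
client = MongoClient(
    host="mongo_db",
    port=27017,
    username=os.environ["MONGO_USER"],
    password=os.environ["MONGO_PASSWORD"],
    authSource="my_db",
)

print(client.my_db.list_collection_names())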

app

version: '3.8'

services:
  app:
    container_name: my_app_${ENV}
    build:
      context: . 
    ports:
      - 8001:8001
    depends_on:
      - mongo_db
    networks:
      - my_network
    command: ["flask", "run", "--host", "0.0.0.0", "--port", "8001"]
    env_file: .env

  • version: '3.8' specifies the version of the Docker Compose file syntax being used - in this case, it's version 3.8
  • services is the top-level key that defines the services/containers that will be managed by Docker Compose
  • app is the name of the service/container being defined
  • container_name specifies the custom name for the container that will be created based on this service. The variable ${ENV} dynamically sets the suffix based on an environment variable. For example, if the value of ${ENV} is "production", the container name will be my_app_production.
    • The .env file should be placed at the root of the project directory, next to our docker-compose.yml (see the sample after this list)
  • build indicates that the service will be built using a Dockerfile located in the current directory (denoted by .). The context parameter defines the build context, which means the current directory and its subdirectories will be used to find the necessary files for the build
  • ports exposes ports from the container to the host - it will map port 8001 from the host to port 8001 in the container, that means any traffic coming to port 8001 on the host will be forwarded to port 8001 of the container
  • depends_on specifies that this service depends on another service called mongo_db. Docker Compose will start mongo_db before this service, but note that it only waits for the container to start, not for the database inside it to be ready to accept connections
  • networks attaches the service to the custom Docker network defined above. This allows the app service and the other services connected to the same network to communicate with each other
  • command overrides the default command in the Dockerfile that would be executed when the container starts. The app will run with the following parameters: flask run --host 0.0.0.0 --port 8001, meaning Flask will listen on all available network interfaces (0.0.0.0) and port 8001
    • When we define the command parameter in the Docker Compose file for a service, it takes precedence over the default CMD command specified in the Dockerfile
  • env_file specifies the file from which environment variables should be read and passed to the container
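
Since several services read ${ENV} and the database credentials from the same .env file, here's a hypothetical sketch of what it could contain (all values are placeholders; the MONGO_* names match the init script shown later in this article):

# .env - placeholder values
ENV=dev

# Used by the official Mongo image and by init-mongo.sh
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=change-me
MONGO_USER=my_db_user
MONGO_PASSWORD=change-me-too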

For reference, this is the service's Dockerfile:

FROM python:3.8-slim-buster as base 

# Create app directory
WORKDIR /app

# Install Python requirements first so it's cached
COPY ./requirements.txt .
RUN pip3 install -r requirements.txt

# Copy Flask project to container
COPY . .

# Set Flask configurations
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0

##############

FROM base as DEV

RUN pip3 install debugpy

# Define Flask environment (production is default)
ENV FLASK_ENV=development 

CMD ["python", "-m", "debugpy",\
    "--listen", "0.0.0.0:5678",\
    "--wait-for-client",\
    "-m", "flask", "run", "--host", "0.0.0.0", "--port", "8000"]

##############

FROM base as PROD

CMD ["flask", "run"]


We won't get into how to deploy it to production in this article, but I wanted to quickly mention that using Gunicorn – a popular WSGI (Web Server Gateway Interface) HTTP server for Python web applications – is probably a good idea:

# Run the app with Gunicorn 
CMD ["gunicorn", "--bind", "0.0.0.0:8001", "our_app:app"]

mongo_db

  mongo_db:
    container_name: mongo_db_${ENV}
    image: mongo:4.4.18
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./DB-STORAGE/mongo-db-$ENV:/data/db
      - ./db/init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh 
    networks:
      - my_network
    env_file: .env
  • container_name well, we already know :)
  • image specifies the Docker image to be used for the container. In this case, it's using the official MongoDB image with version 4.4.18 from Docker Hub
  • restart indicates the restart policy for the container - always means the container will be automatically restarted if it exits, regardless of the exit status
  • ports maps port 27017 from the host to port 27017 in the container (default MongoDB port)
    • Usually, it's best to keep our database private and only let other services (containers) access it. But for development purposes, we can expose it and use tools like MongoDB Compass (see the connection example after the init script below)
  • volumes mounts directories or files from the host into the container. This is used to persist data and configuration files

    • ./DB-STORAGE/mongo-db-$ENV:/data/db will mount a host directory named ./DB-STORAGE/mongo-db-$ENV into the /data/db directory inside the container. This allows the MongoDB data to be stored persistently on the host filesystem
    • ./db/init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh mounts the init-mongo.sh file from the host into /docker-entrypoint-initdb.d/ in the container, the directory the official MongoDB image scans for initialization scripts the first time the database is created
  • networks attaches the service to a pre-existing Docker network called "my_network". This allows the MongoDB service and other services connected to "my_network" to communicate with each other.

  • env_file specifies the .env file whose variables will be used in the init script (init-mongo.sh) to configure MongoDB settings during startup

This is the init-mongo.sh file:

mongo -u "$MONGO_INITDB_ROOT_USERNAME" -p "$MONGO_INITDB_ROOT_PASSWORD" --authenticationDatabase admin <<EOF

use my_db;

db.createUser({
    user: "$MONGO_USER",
    pwd: "$MONGO_PASSWORD",
    roles: 
        [
            { 
                role: "readWrite", 
                db: "my_db" 
            },
            { 
                role: "dbAdmin", 
                db: "my_db" 
            }     
        ] 
});
EOF

If we want our MongoDB user to have both read/write access to an existing database and the ability to create collections and documents, we should grant it both the readWrite role and the dbAdmin role.
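
As a quick sanity check during development, we can connect to the published port from the host with the mongo shell (a sketch, assuming the credentials from our .env file and the my_db database created above):

# Connect from the host through the published port 27017
mongo "mongodb://localhost:27017/my_db" -u "$MONGO_USER" -p "$MONGO_PASSWORD" --authenticationDatabase my_db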

schedule

  schedule_service:
    container_name: schedule_service_${ENV}
    build:
      context: ./schedule
    volumes:
      - ./common:/app/common
      - ./db:/app/db  
    networks:
      - my_network
    command: ["python", "-u", "schedule.py"] 
    env_file: .env
  • The context parameter defines the build context, which means the ./schedule directory and its subdirectories will be used to find the necessary files for the build

When we're using docker build directly, we can specify both the Dockerfile's location and the context (root directory). With Docker Compose we can also point to a custom Dockerfile (via the dockerfile key under build), but in both cases the build cannot reference files above the context directory

  • docker build -f $RelativePathToDockerfile -t $AppName . - here, -f $RelativePathToDockerfile defines where our Dockerfile is, while the trailing . defines the context (root directory)
  • In this case, we still want to use some common packages from the application, so we can map them using volumes
  • We already know that command overrides the default command that would be executed when the container starts. In this case, the script will be executed with the Python interpreter in unbuffered mode (-u flag) to ensure that the output is immediately displayed in the container logs

In summary, the schedule_service service in the Docker Compose file builds a container from the ./schedule directory, mounts specific directories from the host into the container, and runs the Python script schedule.py as the main command for the container. Additionally, it connects the container to the my_network for communication with other services on the same network and reads environment variables from the .env file.
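
For context, here's a minimal and entirely hypothetical sketch of what schedule.py could look like: a simple loop that periodically checks for emails to send (the send_pending_emails helper and the interval are placeholders):

import time


def send_pending_emails():
    # Placeholder: look up due emails in the database and send them
    print("Checking for emails to send...", flush=True)


if __name__ == "__main__":
    while True:
        send_pending_emails()
        # The -u flag in the service's command (or flush=True) keeps this
        # output visible in the container logs without buffering delays
        time.sleep(60)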

Firing it all up

Now that we have everything set up, it's time to fire it all up.

Running docker-compose build --no-cache followed by docker-compose up is the best way to ensure that our Docker containers are built from scratch without using any cached layers from previous builds.

Alternatively, docker-compose up --build rebuilds the images for all services before starting them. Unlike --no-cache, it still reuses cached layers where it can, but it makes sure any changes to our code or Dockerfiles are picked up.
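
Putting the two workflows side by side (both commands appear above; -d just runs the containers in the background):

# Rebuild everything from scratch, then start in the background
docker-compose build --no-cache
docker-compose up -d

# Or rebuild (reusing cached layers) and start in one step
docker-compose up --build -d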

The docker-compose down command both stops and removes containers associated with our Docker Compose project, but the exact behavior depends on the options used with the command.

By default, docker-compose down does the following:

  • Stops all the containers defined in our docker-compose.yml file that are currently running as part of the project. The containers are gracefully stopped, and their resources are released
  • After stopping the containers, docker-compose down also removes the containers

To clean it all up, including the images the project built or pulled, we can use docker-compose down --rmi all.
