DEV Community

Hana Belay for Documatic


Start a Production-Ready Dockerized Django Project

Making a Django app production-ready inside Docker is quite useful for developers: it minimizes the hassle of setup and deployment, freeing them to focus on what matters, namely development and business logic.

Table of Contents

  1. Prerequisite
  2. Introduction
  3. Project Configuration
  4. Split Settings for Different Environments
  5. Environment Variables
  6. Postgres Configuration
    1. Ensure Postgres is healthy before Django is started
  7. Celery and Redis Configuration
  8. Tweak Docker Compose for Production
  9. Conclusion

Prerequisite

This guide assumes that you are familiar with the following technologies:

  • Intermediate Django
  • Beginner to Intermediate Docker
  • Familiarity with Postgres, Celery, Redis, Nginx

Introduction

This guide aims to help you start and organize your Django project to work in different environments, mainly development and production. You can then take this template, modify it to fit your specific requirements, and finally deploy it on the cloud service provider of your choice, such as AWS, Azure, or DigitalOcean, to name a few.

Note: If you encounter any issues throughout the tutorial, you can check out the code in the GitHub repository.

Project Configuration

First, create a repo on GitHub. Initialize the repository with a README file and .gitignore template for Python.

(Screenshot: the newly created GitHub repository)

Now, on your machine, open up a terminal and run the following commands to set up and open your project.

mkdir django-docker-template
cd django-docker-template
git clone <link-to-repo> .
code .

In the root directory of your project, create a file named requirements.txt

touch requirements.txt

and add the following dependencies:

celery==5.2.7
Django==4.1.2
gunicorn==20.1.0
psycopg2-binary==2.9.5
python-decouple==3.6
redis==4.3.4

Then, create the Dockerfile

touch Dockerfile

and add the following snippet:

FROM python:3.10.2-slim-bullseye

ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

WORKDIR /code

COPY ./requirements.txt .
RUN pip install -r requirements.txt

COPY . .

Then, create a docker-compose.yml file:

touch docker-compose.yml

and add a web service inside it:

version: "3.9"

services:
  web:
    build: .
    volumes:
      - .:/code
    ports:
      - 8000:8000

Finally, create a .dockerignore file so that Docker ignores certain files, which speeds up the build process of your image.

touch .dockerignore

Add the following inside it:

.venv
.git
.gitignore

Great, build your image by running the following command:

docker-compose build

This will take some time. You can now use this image to create the Django project.

docker-compose run --rm web django-admin startproject config .

Split Settings for Different Environments

It is important to take into account the different environments your project will run in: usually these are development and production, but you can apply similar logic to any other environment you need to include.

You can split your settings according to the environment your project is running in, similar to the layout presented below:

config
│
└───settings
│   │   __init__.py
│   │   base.py
│   │   development.py
│   │   production.py

base.py will contain the common settings used regardless of the environment. Copy all the content of the settings.py that Django created by default into settings/base.py, then delete settings.py as it is no longer needed.

Then, import base.py in both environment files. Environment-specific settings will be added later.

# import this in development.py, production.py
from .base import *

Environment Variables

Using environment variables allows you to describe each environment. python-decouple is one of the most commonly used packages for strictly separating settings from source code. It was added to requirements.txt earlier, so just create a .env file in the root directory of your project:

touch .env

And add the following variables:

SECRET_KEY=

ALLOWED_HOSTS=.localhost, .herokuapp.com, 0.0.0.0
DEBUG=True

DJANGO_SETTINGS_MODULE=config.settings.development

Update your settings accordingly:

# base.py

from decouple import config, Csv

SECRET_KEY = config("SECRET_KEY")

DEBUG = config("DEBUG", default=False, cast=bool)

ALLOWED_HOSTS = config("ALLOWED_HOSTS", cast=Csv())
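If you are curious what `Csv()` does with a value like the one in `.env`, here is roughly equivalent plain Python (a sketch of the behavior, not decouple's actual implementation):

```python
def parse_csv(value):
    # Split a comma-separated env value and strip the whitespace around
    # each item, mirroring what python-decouple's Csv() cast does by default.
    return [item.strip() for item in value.split(",") if item.strip()]

print(parse_csv(".localhost, .herokuapp.com, 0.0.0.0"))
# ['.localhost', '.herokuapp.com', '0.0.0.0']
```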

DJANGO_SETTINGS_MODULE tells Django which settings module to use. By providing its value through an environment variable, manage.py can automatically pick the appropriate settings for each environment. Therefore, update manage.py as follows:

# manage.py

from decouple import config

os.environ.setdefault("DJANGO_SETTINGS_MODULE", config("DJANGO_SETTINGS_MODULE"))
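Note that `os.environ.setdefault` only fills the variable in when it is not already set, so a value exported in the shell (or by Docker Compose) takes precedence over the one read from `.env`:

```python
import os

os.environ.pop("DEMO_SETTINGS", None)  # start from a clean slate
os.environ.setdefault("DEMO_SETTINGS", "config.settings.development")
print(os.environ["DEMO_SETTINGS"])     # config.settings.development

# An already-exported value is left untouched by setdefault:
os.environ["DEMO_SETTINGS"] = "config.settings.production"
os.environ.setdefault("DEMO_SETTINGS", "config.settings.development")
print(os.environ["DEMO_SETTINGS"])     # config.settings.production
```

(`DEMO_SETTINGS` is a throwaway name for illustration; manage.py uses the real `DJANGO_SETTINGS_MODULE`.)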

Also update the web service in docker-compose.yml file to pull in environment variables from the .env file:

version: "3.9"

services:
  web:
    build: .
    volumes:
      - .:/code
    env_file:
      - ./.env
    ports:
      - 8000:8000

Now, what are some potential environment-specific settings? Here are a few:

1) Email

You can use the console backend in development mode to write emails to the standard output.

# development.py

from .base import *

EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
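The console backend simply writes each message to standard output instead of talking to an SMTP server. Conceptually it behaves like this sketch (illustrative only, not Django's actual code; the addresses are made up):

```python
def console_send(subject, body, from_addr, to_addrs):
    # Print the message in an RFC-822-ish form, the way Django's console
    # email backend does, and report one "message sent".
    print(f"From: {from_addr}")
    print(f"To: {', '.join(to_addrs)}")
    print(f"Subject: {subject}")
    print()
    print(body)
    return 1

console_send("Welcome", "Thanks for signing up!",
             "noreply@example.com", ["user@example.com"])
```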

In production, use the SMTP backend with a provider like SendGrid or Mailgun:

# production.py

from .base import *

EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "smtp.mailgun.org"
EMAIL_PORT = 587
EMAIL_HOST_USER = config("EMAIL_USER")
EMAIL_HOST_PASSWORD = config("EMAIL_PASSWORD")
EMAIL_USE_TLS = True

2) Media and Static files

In production mode, you may want to use services like AWS S3 to serve your static and media files. Having multiple settings comes in handy in such scenarios.

# development.py

import os

MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "..", "mediafiles")

STATIC_URL = "static/"
STATIC_ROOT = os.path.join(BASE_DIR, "..", "staticfiles")

And then you can add AWS-related configs in production.py

3) Caching

Ideally, you don’t need caching in development, so you can add a cache server like Redis in the production.py file only:

# production.py

# Redis Cache
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": config("REDIS_BACKEND"),
    },
}
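What the cache buys you is simple: values stored under a key with a timeout, returned until they expire. A toy in-memory version of the `get`/`set` shape Django's low-level cache API exposes (a sketch for intuition, not Django's or Redis's implementation):

```python
import time

class SimpleCache:
    """Minimal get/set-with-timeout cache, mimicking the shape of
    Django's low-level cache API (not its implementation)."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, timeout=300):
        # Remember the value together with its expiry time.
        self._store[key] = (value, time.monotonic() + timeout)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]  # lazily evict expired entries
            return default
        return value

cache = SimpleCache()
cache.set("greeting", "hello", timeout=60)
print(cache.get("greeting"))  # hello
```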

In addition, you can add apps, middleware, etc. separately for your environments.

Postgres Configuration

To configure Postgres, you first need to add a new service to the docker-compose.yml file:

version: "3.9"

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    env_file:
      - ./.env
    ports:
      - 8000:8000
    depends_on:
      - db
  db:
    image: postgres:13
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}

volumes:
  postgres_data:

Next, update the .env file to include database-related variables:

# Database
DB_NAME=
DB_USERNAME=
DB_PASSWORD=
DB_HOSTNAME=db
DB_PORT=5432

Finally, update the settings to use Postgres RDBMS instead of the SQLite engine Django uses by default.

# base.py

# Remove the sqlite engine and add this
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": config("DB_NAME"),
        "USER": config("DB_USERNAME"),
        "PASSWORD": config("DB_PASSWORD"),
        "HOST": config("DB_HOSTNAME"),
        "PORT": config("DB_PORT", cast=int),
    }
}

Note: You may want to use different databases for development and production. If that’s the case, remove the DATABASES setting from base.py and define a database per environment instead.

Great! Now, rebuild the image to ensure what you have so far is working.

docker-compose build

Also, ensure that the migrations are applied:

docker-compose run --rm web python manage.py migrate

Ensure Postgres is healthy before Django is started

Usually, when working with Postgres and Django in Docker, the web service (Django) tries to connect to the db service before it is ready to accept connections. To solve this, you can create a short shell script to run as the Docker ENTRYPOINT.

In the root directory of your project, create a file named entrypoint.sh

touch entrypoint.sh

Add the following script, which probes the Postgres port until the database is ready to accept connections, then applies migrations and collects static files:

#!/bin/sh

echo 'Waiting for postgres...'

while ! nc -z $DB_HOSTNAME $DB_PORT; do
    sleep 0.1
done

echo 'PostgreSQL started'

echo 'Running migrations...'
python manage.py migrate

echo 'Collecting static files...'
python manage.py collectstatic --no-input

exec "$@"
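If you would rather not install Netcat, the same probe can be written in a few lines of Python using only the standard library (a sketch; the function name, intervals, and timeout are choices you can adjust):

```python
import socket
import time

def wait_for_port(host, port, interval=0.1, timeout=30.0):
    # Retry TCP connections until the port accepts one, the same idea as
    # running `nc -z` in a loop. Returns False if the deadline passes.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

You could call this from a small `wait_for_db.py` invoked by the entrypoint in place of the `while ! nc -z ...` loop.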

Update the file permissions locally:

chmod +x entrypoint.sh

Now, to use this script, you need Netcat installed in your image. Therefore, update the Dockerfile to install this networking utility and use the shell script as the Docker ENTRYPOINT:

FROM python:3.10.2-slim-bullseye

ENV PIP_DISABLE_PIP_VERSION_CHECK 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

WORKDIR /code

COPY ./requirements.txt .

RUN apt-get update -y && \
    apt-get install -y netcat && \
    pip install --upgrade pip && \
    pip install -r requirements.txt

COPY ./entrypoint.sh .
RUN chmod +x /code/entrypoint.sh

COPY . .

ENTRYPOINT ["/code/entrypoint.sh"]

Rebuild the image and spin up the containers.

docker-compose up --build

Go to http://localhost:8000/ to confirm the app is running.

Celery and Redis Configuration

Celery runs time-intensive tasks asynchronously in the background so that your web app can continue to respond quickly to users’ requests. Redis pairs well with Celery since it can serve as both the message broker and the result backend at the same time.

Add Redis and Celery services to docker-compose.yml

  redis:
    image: redis:7

  celery:
    build: .
    command: celery -A config worker -l info
    volumes:
      - .:/code
    env_file:
      - ./.env
    depends_on:
      - db
      - redis
      - web

While at it, update the web service as well:

    depends_on:
      - redis
      - db

Once that is set up, navigate to the config folder and create a file named celery.py

cd config
touch celery.py

Then, add the following snippet inside it:

# config/celery.py

import os

from decouple import config
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", config("DJANGO_SETTINGS_MODULE"))
app = Celery("config")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

Next, head over to base.py and add the following configuration at the bottom:

# settings/base.py

# Celery
CELERY_BROKER_URL = config("CELERY_BROKER_URL")
CELERY_RESULT_BACKEND = config("REDIS_BACKEND")

Update .env to include the above environment variables:

# Celery
CELERY_BROKER_URL=redis://redis:6379/0

# Redis
REDIS_BACKEND=redis://redis:6379/0
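Both values are standard `redis://host:port/db` URLs; the hostname `redis` resolves to the Redis service on the Compose network. You can sanity-check such a URL with the standard library:

```python
from urllib.parse import urlparse

url = urlparse("redis://redis:6379/0")
print(url.scheme)              # redis
print(url.hostname, url.port)  # redis 6379
print(url.path.lstrip("/"))    # 0  (the Redis database number)
```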

The final update goes into the __init__.py file of the config folder:

# config/__init__.py

from .celery import app as celery_app

__all__ = ('celery_app',)

Test it out again:

docker-compose up --build

Tweak Docker Compose for Production

Django’s built-in server is not suitable for production, so you should use a production-grade WSGI server like Gunicorn in a production environment.

In addition, you should also consider adding Nginx to act as a reverse proxy for Gunicorn and serve static files.

Therefore, create a file named docker-compose.prod.yml at the root of your project and add/update the following services:

version: "3.9"

services:
  web:
    build: .
    restart: always
    command: gunicorn config.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env
    expose:
      - 8000
    volumes:
      - static_volume:/code/staticfiles
      - media_volume:/code/mediafiles
    depends_on:
      - redis
      - db
  db:
    image: postgres:13
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=${DB_NAME}

  redis:
    image: redis:7

  celery:
    build: .
    restart: always
    command: celery -A config worker -l info
    volumes:
      - .:/code
    env_file:
      - ./.env
    depends_on:
      - db
      - redis
      - web

  nginx:
    build: ./nginx
    restart: always
    ports:
      - ${NGINX_PORT}:80
    volumes:
      - static_volume:/code/staticfiles
      - media_volume:/code/mediafiles
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:

There are a couple of things worth noting from the above file:

  • The use of expose instead of ports. This allows the web service to be exposed to other services inside Docker but not to the host machine.
  • Static and media volumes to persist data generated by and used by web and nginx services.

Don’t forget to update .env to include the NGINX_PORT environment variable:

# NGINX
NGINX_PORT=80

Then, in your project root directory, create the following folder and files:

mkdir nginx
cd nginx
touch Dockerfile
touch nginx.conf

Update the respective files:

# nginx/Dockerfile

FROM nginx:stable-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

EXPOSE 80
The above file pulls the base Nginx image, removes the default configuration, and copies in the one you created, i.e. nginx.conf, with the following content:
# nginx/nginx.conf

upstream web_app {
    server web:8000;
}

server {

    listen 80;

    location /static/ {
        alias /code/staticfiles/;
    }

    location /media/ {
        alias /code/mediafiles/;
    }

    location / {
        proxy_pass http://web_app;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

}
Worth noting in the above configuration: requests for static and media files are served from the staticfiles and mediafiles folders respectively, while all other requests are proxied to Gunicorn.
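The routing nginx applies here can be sketched in plain Python: match the request path against the location prefixes, falling back to the proxy (illustrative only; the strings merely describe what each location block does):

```python
def route(path):
    # Mirror the three location blocks in nginx.conf: static and media
    # prefixes are served from disk, everything else goes to Gunicorn.
    prefixes = {
        "/static/": "file from /code/staticfiles/",
        "/media/": "file from /code/mediafiles/",
    }
    for prefix, action in prefixes.items():
        if path.startswith(prefix):
            return action
    return "proxy to http://web:8000"

print(route("/static/css/main.css"))  # file from /code/staticfiles/
print(route("/admin/login/"))         # proxy to http://web:8000
```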

Test your production setup locally:

docker-compose -f docker-compose.prod.yml up --build

Go to http://localhost/. The static files should be loaded correctly as well.

Conclusion

This tutorial has walked you through containerizing your Django application both for local development and production. In addition to the ease of containerized deployment, working inside Docker locally is also time-saving because it minimizes the setup you need to configure on your machine.

If you got lost somewhere throughout the guide, check out the project on GitHub.

Happy coding! 🖤
