Michael Jones

Django to Phoenix - Part 1: Docker for Dev

Background

I have a website built with Django and I would like to start writing parts of it in Phoenix.

Starting Point

I deploy the website using Docker but when it comes to development, I run Django's local server directly. I have been advised that the best strategy for introducing a new language & framework into a website is to use nginx to split the incoming requests based on URLs.

This means we could run & configure nginx directly on the local machine, or, as we are already using Docker for deployment, we could start doing development in a Docker container as well and then add the nginx layer inside that. The latter feels a lot more contained & less messy.

Setting up Django in Docker

In order to take that step, we first need to get the current Django setup working in Docker.

I think that ideally I'd be able to use the same Dockerfile for both development & production in order to have the environments match as closely as possible. However, when I look at my production Dockerfile, I'm not sure how to make that happen.

So we're going to start with a new development Dockerfile which we're going to call Dockerfile.dev.

from ubuntu:14.04

run apt-get update -y

# Install dependencies for builds
run apt-get install -y git python python-pip make g++ wget curl libxml2-dev \
        libxslt1-dev libjpeg-dev libenchant-dev libffi-dev supervisor \
        libpq-dev python-dev realpath apt-transport-https

# Install yarn from website
run curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
run echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list

run apt-get update -y
run apt-get install -y yarn

# Install node-js via the nave.sh helper script so that we 
# can build our client side applications 
add https://raw.githubusercontent.com/isaacs/nave/master/nave.sh ./nave.sh

run bash nave.sh usemain 6.9.1

# Install python packages
workdir /src
copy ./requirements.txt .
run pip install --upgrade pip
run pip install -r requirements.txt

# Create source code directory
run mkdir dancetimetable

# Copy over various config files & scripts
copy ./dev/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

run mkdir /src/dev

copy ./dev/run-* /src/dev/
copy ./meta.json /src/meta.json

expose 8000

# Run supervisord on launch
cmd /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf

This is a fairly standard Dockerfile to my understanding. For those unfamiliar with Docker, the file starts with from ubuntu:14.04, which says 'give me a blank ubuntu 14.04 image to begin with', and the rest of the file then changes that image. The run command runs a particular command within that image, saving any effects on the system. The copy command allows you to copy files from your working directory on your machine into the Docker image.

Interestingly, whilst we copy in dependencies & configuration files, we don't copy in any of the website source code here. This is because when we run the Docker container we 'mount' folders from our local machine into the container. This is important so that when we edit & save a file on our machine, the processes inside the Docker container see it straight away and can use it. Without this we would have to rebuild the Docker image each time we make a change.

Update 28-10-2017: Someone noted that I'm still using Ubuntu 14.04 which is quite old at this point. I'm doing this as my production environment still uses 14.04. I do plan to update but for the moment that version is still fully maintained and I don't need anything more from it.
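
Since the file isn't called Dockerfile, docker build needs to be pointed at it with -f. A minimal build command might look something like this, assuming it is run from the project root where requirements.txt, dev/ and meta.json live, and using the same image tag that the run script below refers to:

#!/bin/bash

# Build the development image from Dockerfile.dev and tag it so that the
# run script below can refer to it by name
docker build -f Dockerfile.dev -t michaeljones/dancetimetable:dev .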

So how do we mount these directories when running the Docker container? With a command like this:

#!/bin/bash

docker run \
    -p 127.0.0.1:8000:8000 \
    -v `pwd`/logs/gunicorn:/var/log/gunicorn:rw \
    -v `pwd`/logs/nginx:/var/log/nginx:rw \
    -v `pwd`/logs/cron:/var/log/cron:rw \
    -v `pwd`/dancetimetable:/src/dancetimetable:rw \
    -e DANCETIMETABLE_SETTINGS=development \
    --rm \
    --interactive \
    --tty \
    --name dance \
    michaeljones/dancetimetable:dev

The important line is -v `pwd`/dancetimetable:/src/dancetimetable:rw which is where we say 'please make the /src/dancetimetable directory in the running container reflect the contents of the local dancetimetable directory, in a read & write manner.'

The other important line is -p 127.0.0.1:8000:8000 which says 'please expose port 8000 from inside the container as the port 8000 on our machine.' This means that we can visit http://localhost:8000 and view our app. If we don't tell Docker to do this then we can't access our website.
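
As a quick sanity check, and assuming curl is available on the host, we can confirm the mapping once the container is up:

# From the host machine, ask the Django development server for the
# front page headers
curl -I http://localhost:8000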

You might notice a couple of details are missing here. For example, how are we running the Django development server inside the container? The answer to that lies in the last line of the Dockerfile. This specifies the command we want to run when launching the Docker container. In our case it is:

cmd /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf

This runs the supervisord program with our specific configuration. Supervisord is a program that can be instructed to launch other programs and monitor them, making sure that they are restarted if they fail. This seems to be the recommended way to run multiple programs in a Docker container as the Dockerfile only allows you to specify one cmd entry at the end.

Whilst we don't strictly need supervisord at the moment as we've only got one command to run, I'm already familiar with supervisord from my production setup and we're going to need to run another command soon anyway.

So what does the supervisord config look like? Something like this:

[supervisord]
nodaemon=true

[program:django]
command=/src/dev/run-django
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

The first block under supervisord is for general configuration and the nodaemon option tells supervisord to run in the foreground so that the Docker container doesn't lose track of it and stop prematurely.

The second block indicates how we want to run the Django development server. The command entry points to a script and the other settings tell supervisord to redirect all standard out & standard error from Django to the container's standard out & standard error so that we get to see it in the console.

So what's in the run-django script? That looks like this:

#!/bin/bash

cd /src/dancetimetable

export DOCKER_HOST=`netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'`

python manage.py runserver 0.0.0.0:8000

This changes directory to where the Django manage.py file is and runs the regular runserver sub-command to get our development server running.

The two extra details are, firstly, that we need to specify 0.0.0.0 as the address rather than localhost or 127.0.0.1, so that the server listens on all interfaces and can be reached through the port that Docker publishes to the host. Secondly, we find the address of our host machine from the perspective of the Docker container and make it available as DOCKER_HOST. We pick this up in our Django config files when specifying the HOST in the database settings. This is because I'm not running my Postgres database inside the Docker container; it is running on my local machine outside of the container, so Django needs to know how to connect to it. This is the only way I've found to do it.
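
If the database connection misbehaves, one way to debug it is to open a shell in the running container and check what address run-django derives and whether Postgres answers there. This is only a sketch: it assumes the nc tool is available in the image (the Dockerfile above doesn't install it) and that Postgres on the host is configured to accept connections from the Docker bridge network.

# Open a shell inside the running container ('dance' is the name given
# in the run script above)
docker exec -it dance bash

# Then, inside that shell, print the gateway address that run-django
# exports as DOCKER_HOST
netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'

# And check that Postgres on the host answers on its default port 5432
# at that address (172.17.0.1 is only a typical value; use the one
# printed above)
nc -zv 172.17.0.1 5432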

Conclusion

With all this in place, we can launch our Docker container and visit http://localhost:8000 to see our website. Now that we have Django running inside a Docker container, we're in a good place to add nginx and then start introducing Phoenix.

If you have any questions or advice, please comment below.

Update 28-10-2017: I have edited the Dockerfile code block above to remove nginx elements that will be introduced and discussed in the next post.

Top comments (2)

Jens Genberg

Really interesting! I'm currently familiarizing myself with Docker and reading about a real-life use case is helpful.

Michael Jones

Thanks for saying that :)