I have successfully deployed my Australian postcodes API project to https://railway.app. I did run into a problem during deployment, and I describe how I addressed it. In the process, we also cover the following: ⓵ running Railway's own Nixpacks Docker build tool locally on Ubuntu 22.10; ⓶ overriding the Nixpacks-built Docker image's
CMD: we look at three (3) ways to run the Flask CLI command
venv/bin/flask update-postcode, and, similarly, at how to override the start command
gunicorn wsgi:app --preload specified in the Nixpacks-required Procfile.
These are the two (2) API endpoints hosted on Railway:
Swagger UI Documentation: https://web-production-ed7a.up.railway.app/api/v0/ui .
For example, to search for localities which contain
spring: https://web-production-ed7a.up.railway.app/api/v0/aust-postcode/spring .
🚀 Full source code and documentation: https://github.com/behai-nguyen/bh_aust_postcode. This repo is now Railway-deployment ready: it includes the files Railway requires, such as the Procfile and runtime.txt.
Related posts on this Australian postcodes API project. Please note that, apart from the change to support PostgreSQL, there were no changes to functionality:
- Python: A simple web API to search for Australian postcodes based on locality aka suburb.
- Ubuntu 22.10: hosting a Python Flask web API with Gunicorn and Nginx.
- jQuery plugin: bhAustPostcode to work with the search Australian postcodes web API.
❶ Railway deployment problem.
On a side note, I stumbled upon the Railway website. I was able to set up a PostgreSQL database fairly quickly, and I could connect to it using
pgAdmin 4 version
6.18 on Windows 10. The documentation is easy to understand, and I like it. I played around with it for a while on the free plan. The next day, however, they stated that they could not verify my identity via my GitHub account, so I signed up for the Hobby Plan. That is only fair; we should pay for the services we use.
Before deployment, I was actually thinking that I already had it in the bag 😂, since the database was my biggest concern, and I was pretty sure I would have no problem with it. But before it can talk to the database, the project needs to be deployed successfully.
Railway reported an error; these are the last lines of my second deployment log:
... File "/app/wsgi.py", line 1, in <module>
    from app import app
  File "/app/app.py", line 3, in <module>
    from bh_aust_postcode import create_app
ModuleNotFoundError: No module named 'bh_aust_postcode'
[2023-07-05 10:36:48 +0000] [INFO] Worker exiting (pid: 9)
[2023-07-05 10:36:48 +0000] [INFO] Shutting down: Master
[2023-07-05 10:36:48 +0000] [INFO] Reason: Worker failed to boot.
To recap, the directory structure of the project is as follows:
/home/behai/webwork/bh_aust_postcode
├── app.py
├── Hosting.md
├── instance
├── LICENSE
├── omphalos-logging.yml
├── Procfile
├── pyproject.toml
├── pytest.ini
├── README.md
├── requirements.txt
├── runtime.txt
├── src
│   └── bh_aust_postcode
│       ├── api
│       │   ├── bro.py
│       │   ├── __init__.py
│       │   ├── postcode_pool.py
│       │   └── routes.py
│       ├── commands
│       │   ├── schema.sql
│       │   └── update_postcode.py
│       ├── config.py
│       ├── __init__.py
│       └── utils
│           └── __init__.py
├── tests
│   ├── conftest.py
│   ├── __init__.py
│   ├── test_api_endpoints.py
│   ├── test_bro.py
│   └── test_postcode_pool.py
└── wsgi.py
Seeing the error, I verified that the name of the project root directory
bh_aust_postcode is not important; it can be anything. Looking at Railway's build log, I understood that the root
/app directory is the value of the Docker image's
WORKDIR instruction -- and that should not be a problem!
What I did next was to install Railway's own build tool, Nixpacks, on my Ubuntu 22.10 machine and do my own build. I did not supply my own
Dockerfile; I wanted to use the default to closely match Railway's image.
-- The image I built with Nixpacks produced the same error! Which is, somehow... a good thing!
I started to fully qualify all
imports across the entire project; i.e., changing from:
from bh_aust_postcode.config import get_database_connection
to:
from src.bh_aust_postcode.config import get_database_connection
And it finally deployed!
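The effect of that change can be illustrated with a small, self-contained simulation of the src layout. Everything here is stubbed for illustration -- the temporary directory and the NAME attribute are my inventions, not part of the project:

```python
import importlib
import os
import sys
import tempfile

# Simulate the repo layout: <root>/src/bh_aust_postcode/__init__.py.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "src", "bh_aust_postcode")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("NAME = 'bh_aust_postcode'\n")

# With only the project root on sys.path -- as in the image, where the
# root is WORKDIR /app -- the bare package name is not importable:
sys.path.insert(0, root)
try:
    importlib.import_module("bh_aust_postcode")
except ModuleNotFoundError as exc:
    print(exc)  # No module named 'bh_aust_postcode'

# The fully qualified name, anchored at src, resolves fine:
mod = importlib.import_module("src.bh_aust_postcode")
print(mod.NAME)
```

This matches the ModuleNotFoundError in the deployment log: with /app on sys.path, only the src.-prefixed name resolves.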
-- Only then did I remember addressing this very issue in Python: Docker image build — install required packages via requirements.txt vs editable install! And that was nearly one (1) year ago! But it was not all wasted: I learned about Nixpacks and a bit more about Docker.
I did not want to use absolute imports, but I did in this case. Perhaps if I supplied my own
Dockerfile, I could use an editable install and would not have to use absolute imports?
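For what it's worth, here is a hypothetical, untested sketch of such a Dockerfile. The base image, the /opt/venv location, and the port are my assumptions; this is not what Railway/Nixpacks actually generates:

```dockerfile
# Hypothetical sketch only -- untested, and not the Nixpacks-generated Dockerfile.
FROM python:3.10-slim

WORKDIR /app
COPY . .

# Create the virtual environment and install the pinned dependencies.
RUN python -m venv /opt/venv \
    && /opt/venv/bin/pip install --upgrade pip \
    && /opt/venv/bin/pip install -r requirements.txt

# Editable install via pyproject.toml: registers bh_aust_postcode as an
# importable package, so the src. prefix would no longer be needed.
RUN /opt/venv/bin/pip install -e .

CMD ["/opt/venv/bin/gunicorn", "wsgi:app", "--preload", "--bind", "0.0.0.0:5000"]
```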
❷ Installing and running Railway’s own Nixpacks Docker build tool locally on Ubuntu 22.10.
For my HP Pavilion 15 Notebook PC, with a Born On Date of
04/October/2014, the appropriate package is
nixpacks-v1.9.2-amd64.deb, and I copied it to
/home/behai/Public/. Then run the following command to install it:
$ sudo dpkg -i /home/behai/Public/nixpacks-v1.9.2-amd64.deb
We need to set the environment variable values in the
.env file appropriately for the Docker image.
Content of /home/behai/webwork/bh_aust_postcode/.env:
SECRET_KEY=">s3g;?uV^K=`!(3.#ms_cdfy<c4ty%"
FLASK_APP=app.py
FLASK_DEBUG=True
SOURCE_POSTCODE_URL="http://192.168.0.17/australian_postcodes.json"
KEEP_DOWNLOADED_POSTCODES=False
DB_CREATE_SCRIPT="schema.sql"
SCHEMA_NAME='bh_aust_postcode'
POSTCODE_TABLE_NAME='postcode'
PGHOST=192.168.0.17
PGDATABASE=ompdev
PGUSER=postgres
PGPASSWORD=pcb.2176310315865259
PGPORT=5432
Just to save a bit of data usage, I don't want to download the postcodes from https://www.matthewproctor.com/Content/postcodes/australian_postcodes.json every time I do a test run, so I store a copy in the default Nginx site; it can be accessed as http://192.168.0.17/australian_postcodes.json (the value of SOURCE_POSTCODE_URL above).
192.168.0.17 is the IP address of the Ubuntu 22.10 machine.
The PostgreSQL database server used is the official Docker image running on Ubuntu 22.10; please see this post for how to set it up. The environment variables
PGHOST, PGDATABASE, PGUSER, PGPASSWORD and PGPORT specify the database connection information.
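As a side note, libpq-based clients (psql, and psycopg2 when called with no explicit arguments) pick these PG* variables up from the environment automatically. The sketch below only illustrates how they map onto a connection string; pg_dsn is a hypothetical helper, not a function from the project:

```python
def pg_dsn(env) -> str:
    """Assemble a libpq-style DSN from PG* environment variables."""
    return (
        f"host={env.get('PGHOST', 'localhost')} "
        f"port={env.get('PGPORT', '5432')} "
        f"dbname={env.get('PGDATABASE', '')} "
        f"user={env.get('PGUSER', '')}"
    )

# The non-secret values from the .env file above.
env = {"PGHOST": "192.168.0.17", "PGPORT": "5432",
       "PGDATABASE": "ompdev", "PGUSER": "postgres"}
print(pg_dsn(env))  # host=192.168.0.17 port=5432 dbname=ompdev user=postgres
```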
To build, run the below command. Please note: ⓵ the present working directory is
/home/behai/; ⓶ the name of the resultant Docker image is bh-aust-postcode:
$ sudo nixpacks build webwork/bh_aust_postcode --name bh-aust-postcode
The partial build log is shown in the first two (2) screenshots; the resultant Docker image listing is in the last one:
❸ Override the Nixpacks-built Docker image's CMD.
⓵ Three (3) ways to run the Flask CLI command
venv/bin/flask update-postcode for the Docker image.
As noted in the README.md file, we need to run the following command to download the postcodes and populate the database:
$ venv/bin/flask update-postcode
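For context, a Flask CLI command such as update-postcode is typically registered with the @app.cli.command decorator. The project's real implementation lives in src/bh_aust_postcode/commands/update_postcode.py; the snippet below is only a stubbed sketch of the registration mechanism, not the project's code:

```python
import click
from flask import Flask

app = Flask(__name__)

# Stubbed registration -- the real command downloads the postcodes
# and populates the database.
@app.cli.command("update-postcode")
def update_postcode():
    """Download postcodes and populate the database."""
    click.echo("Updating postcodes...")
```

With this in place, flask update-postcode (with FLASK_APP pointing at the module) invokes the decorated function.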
The Railway equivalent, using its own CLI, is:
$ railway run flask update-postcode
I can verify that it works, because I have successfully run it to populate the database hosted by Railway. I had never thought about this before, until now: how do we run commands such as this one from a Docker image?
The obvious answer is to run the target Docker image in
bash mode, then run the application's commands inside it. The command to get to
bash interactive mode is:
$ sudo docker run -it --rm bh-aust-postcode bash
The screenshot below illustrates this. Note: it does not show the Python virtual environment directory
venv; I supplied a local
.gitignore file, and Nixpacks uses it for the build.
And running the
venv/bin/flask update-postcode equivalent command:
root@598d49469064:/app# flask update-postcode
The second approach is to override the Docker CMD:
$ sudo docker run -it bh-aust-postcode "flask update-postcode"
The third method is to override
ENTRYPOINT. I found the command a little nonintuitive:
$ sudo docker run -it --entrypoint /opt/venv/bin/flask bh-aust-postcode update-postcode
Note that the fully qualified path /opt/venv/bin/flask is required; with just flask it would not work -- the error is about the executable not being found. Its output is identical to that of the previous two (2):
⓶ Override the start command
gunicorn wsgi:app --preload specified in the Nixpacks-required Procfile.
In a similar manner to the previous section, we can run the
bh-aust-postcode image with a specific port, binding to the
0.0.0.0 (all network interfaces) address:
$ sudo docker run -it bh-aust-postcode "gunicorn --bind 0.0.0.0:5000 wsgi:app"
$ sudo docker run -it --entrypoint /opt/venv/bin/gunicorn bh-aust-postcode --bind 0.0.0.0:5000 wsgi:app
While the container is running, both of the following API endpoints will work locally:
$ curl http://172.17.0.3:5000/api/v0/ui
$ curl http://172.17.0.3:5000/api/v0/aust-postcode/spring
172.17.0.3 is the
IPv4Address of the container. We ran the image without specifying a container name; to find the container's IP address, we first need to know its name.
The following command lists all available containers, showing whether they have stopped or are still running:
$ sudo docker ps -a
Since we're using the default bridge network, we can inspect it with:
$ sudo docker network inspect bridge
Look for the container name in the output; the value of
IPv4Address is the IP address we should use.
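The JSON that docker network inspect bridge prints can also be filtered programmatically. Below is a small sketch against a trimmed, made-up sample of that output -- the container ID and the name vigilant_euler are invented for illustration:

```python
import json

# A trimmed, hypothetical sample of `sudo docker network inspect bridge`
# output; the real output has many more fields.
sample = """
[
  {
    "Name": "bridge",
    "Containers": {
      "598d49469064": {
        "Name": "vigilant_euler",
        "IPv4Address": "172.17.0.3/16"
      }
    }
  }
]
"""

def container_ip(inspect_json, container_name):
    """Return the IP (without the subnet suffix) of the named container."""
    for network in json.loads(inspect_json):
        for info in network.get("Containers", {}).values():
            if info["Name"] == container_name:
                return info["IPv4Address"].split("/")[0]
    return None

print(container_ip(sample, "vigilant_euler"))  # → 172.17.0.3
```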
This has been an interesting exercise for me. The deployment process itself is not that complicated. Since a database is involved, we need to carry out a few extra steps, but that is to be expected. I would like to write about it in a later post. I hope you find the information in this post useful. Thank you for reading and stay safe as always.
Feature image sources: