Waiting for a database server to be ready before starting our own application, such as a middle-tier server, is a familiar issue, and Docker Compose is no exception: our application container must wait for its database server container to be ready to accept requests before sending any over. I've tried two “wait for” tools which are officially recommended by Docker. I'm discussing my attempts in this post, and describing some of the issues I still have pending.
Table of contents
- Reference Documents, Tutorials and Posts
- Other Docker Posts Which I've Written
- Windows 10 Pro -- version 10.0.19045 Build 19045.
- Windows “docker” CLI ( Docker Engine ) -- version 20.10.17, build de40ad0.
- Windows “docker-compose” CLI -- version 1.29.2, build 5becea4c.
- Windows Docker Desktop -- version 4.11.0.
- mysql:8.0.30-debian -- this is a MySQL Docker Official Image, version 8.0.30. It is running on the Windows 10 machine.
- python:3.10.5-slim-buster -- this WAS a Python Docker Official Image, which I downloaded a few months back. I checked just now and it is no longer listed there, but I'm guessing a close version would still do for this post.
I have used official Docker images in my development environment. I've also attempted to build images for my own understanding. Before starting with Compose, I checked out the official documentation and some tutorials.
I've worked on a multi-tier application before. Sitting between the web front end and the database server is our own application data server, written as a Windows service: if a server machine must restart, our application data server must wait for the target database server to be ready before starting itself.
At the outset, none of the Compose tutorials I came across address the waiting issue, even though an application container and a database container are both present; the official Docker documents, on the other hand, sporadically mention that this is an issue, but they don't immediately point to the actual document that addresses it!
- https://docs.docker.com/compose/ -- an overview of Compose.
- This is the tutorial which jump-started me on Compose: How to use MySQL with Docker and Docker compose a beginners guide. It does not address the waiting issue. I wrote a simple Python script which runs only a query and prints out the rows. I expected it not to work consistently every time, and it did not: running it repeatedly, there were times when the MySQL server container did not start in time.
- Googling further, I found this Docker document, Control startup and shutdown order in Compose, in which several tools are recommended to implement the waiting, among them wait-for-it and Wait4X.
- I don't remember exactly how, but I found this Stack Overflow post: Docker-compose check if mysql connection is ready.
- Not directly related to this post, but the first Docker tutorial I took was Learn to build and deploy your distributed applications easily to the cloud with Docker; it's an excellent tutorial and also covers Compose.
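All of these “wait for” tools boil down to the same idea: repeatedly attempt a TCP connection to the database host and port until one succeeds or a timeout expires. Below is a minimal Python sketch of that idea -- an illustration only, not the actual wait-for-it or Wait4X implementation; the host name and port in the example are assumptions matching this post's setup.

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 15.0) -> bool:
    """Poll host:port until a TCP connection succeeds or timeout expires.

    A rough sketch of what "wait for" tools do; not the real tools.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # the server is accepting connections
        except OSError:
            time.sleep(0.25)  # not ready yet; retry shortly
    return False


if __name__ == "__main__":
    # In the Compose setup below, this would be wait_for_port("mysql_db", 3306, timeout=40).
    print(wait_for_port("127.0.0.1", 3306, timeout=1.0))
```

Note that “the port accepts a TCP connection” is a weaker condition than “MySQL is ready to serve queries”, which is why some of the recommended tools also offer protocol-level checks.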
On the mysql:8.0.30-debian Docker image, I've also written two related posts:
- Docker on Windows 10: running mysql:8.0.30-debian with a custom config file -- the --mount configurations are re-used in this post:
- --mount type=bind,source=//e/mysql-config,target=/etc/mysql/conf.d
- --mount source=mysqlvol,target=/var/lib/mysql
- Docker on Windows 10: mysql:8.0.30-debian log files.
python:3.10.5-slim-buster is based on Debian GNU/Linux 10 (buster). Following the Debian package link given by wait-for-it, we eventually find https://packages.debian.org/source/oldoldstable/wait-for-it. I downloaded the wait-for-it_0.0~git20160501.orig.tar.gz file and extracted wait-for-it.sh to the project root directory, where setup.py, app.py, .dockerignore, Dockerfile and docker-compose.yml are.
I will not list the content of .dockerignore as it is application specific. The Dockerfile is somewhat irrelevant in the context of this discussion, except for the Python environment file .env-docker.
```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10.5-slim-buster

WORKDIR /book_keeping

COPY . .

EXPOSE 8000

RUN /usr/local/bin/python -m pip install --upgrade pip \
    && pip3 install -e . \
    && pip3 install bh_utils-1.0.0-py3-none-any.whl \
    && pip3 install bh_validator-1.0.0-py3-none-any.whl

RUN chmod +x wait-for-it.sh

RUN rm bh_utils-1.0.0-py3-none-any.whl \
    && rm bh_validator-1.0.0-py3-none-any.whl \
    && mv .env-docker .env

CMD [ "python", "-m", "flask", "run", "--host=0.0.0.0" ]
```
RUN chmod +x wait-for-it.sh
We are going to run wait-for-it.sh later in Compose, so I'm giving it execute permission in readiness.
The Python environment file .env-docker is the same as my local development one, except:
SQLALCHEMY_DATABASE_URI = mysql+mysqlconnector://behai1:password@mysql_db/ompdev1
where the database host is mysql_db -- the service name of the MySQL container in the docker-compose.yml file. And according to Networking in Compose:
Each container can now look up the hostname web or db and get back the appropriate container’s IP address. For example, web’s application code could connect to the URL postgres://db:5432 and start using the Postgres database.
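To make the networking point concrete, the host component of the SQLALCHEMY_DATABASE_URI above is simply the Compose service name, which Docker's internal DNS resolves to the MySQL container's IP address on the Compose network. A quick standard-library sketch (illustration only, not part of the actual application):

```python
from urllib.parse import urlsplit

# The URI from .env-docker: the host is the Compose service name
# "mysql_db", not localhost -- Docker's internal DNS resolves it to
# the MySQL container's IP address.
uri = "mysql+mysqlconnector://behai1:password@mysql_db/ompdev1"
parts = urlsplit(uri)

print(parts.hostname)  # mysql_db
print(parts.username)  # behai1
print(parts.path)      # /ompdev1
```

If the host were left as localhost, the application container would try to connect to itself, not to the mysql_db container.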
```yaml
version: "3.9"

services:
  mysql_db:
    image: mysql:8.0.30-debian
    cap_add:
      - SYS_NICE
    restart: always
    environment:
      - MYSQL_DATABASE=ompdev1
      - MYSQL_ROOT_PASSWORD=pcb.2176310315865259
    ports:
      - '3306:3306'
    volumes:
      - type: bind
        source: //e/mysql-config
        target: /etc/mysql/conf.d
      - type: volume
        source: mysqlvol
        target: /var/lib/mysql

  app:
    container_name: book-keeping
    restart: always
    build: .
    image: book-keeping
    depends_on:
      - mysql_db
    ports:
      - '8000:8000'
    command: ./wait-for-it.sh -t 40 mysql_db:3306 -- python ./app.py
    #command: python -m flask run --host=0.0.0.0:8000
    #command: python3 -m flask run
    command: flask run -h 0.0.0.0 -p 8000

volumes:
  mysqlvol:
    external: true
```
In the above Compose file, wait-for-it.sh is called in much the same manner as it is documented. mysql_db is the MySQL database server address, as discussed previously; 3306 is the default port:
command: ./wait-for-it.sh -t 40 mysql_db:3306 -- python ./app.py
We will go through some of the configuration items which are not so apparent; for others, such as restart, depends_on, etc., please look them up yourself.
- services:mysql_db:volumes: -- recall the post I mentioned earlier, Docker on Windows 10: running mysql:8.0.30-debian with a custom config file. The same bind --mount is used in Compose, with the syntax translated according to the Docker document Use a bind mount with compose. This enables my Compose setup to use the existing database volume and the existing MySQL custom configuration file.
- volumes:mysqlvol:external:true -- recall --mount source=mysqlvol,target=/var/lib/mysql? mysqlvol is the named volume on the host machine (Windows 10 Pro using WSL2, in my case) where the Docker container data is stored; I've written about this in Docker volumes on disk. Setting external to true signifies that this volume was created outside of Compose; please see the official document Compose reference on volumes | external.
- app:ports:'8000:8000' (i.e. host port:container port) -- this enables accessing the Dockerised site at http://localhost:8000; without it, the next command would not work:
- app:command:flask run -h 0.0.0.0 -p 8000 -- placing this command here is my own guesswork; I've not yet found any documentation on it, and I'm not sure it will always work. I'm facing two problems at this point:
- If I don't have this command here, the application container will not start properly: it just sits on the last command -- ./wait-for-it.sh -t 40 mysql_db:3306 -- python ./app.py -- and keeps restarting endlessly.
- Before this command, I tried several others, as seen in the commented-out ones; none of them allowed connecting to the application container at http://localhost:8000, even though the container was running. This seems to be a popular “problem”, and I have yet to come across a concrete answer for it; different solutions seem to work for different situations...
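One part of this I can explain with some confidence: a server process inside a container must bind to 0.0.0.0 (all interfaces), not 127.0.0.1, for Docker's published port to reach it -- which is what the -h 0.0.0.0 part does. A plain-socket sketch of the binding (not Flask; the port number is picked by the OS for the example):

```python
import socket

# Bind to all interfaces -- what `flask run -h 0.0.0.0` does inside the
# container. A server bound only to 127.0.0.1 inside the container
# would not be reachable through the published port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# A client connecting via a specific interface address still reaches it:
client = socket.create_connection(("127.0.0.1", port), timeout=2.0)
print(client.getpeername()[1] == port)  # True
client.close()
server.close()
```

Why some of the commented-out flask invocations failed I still can't say for certain, but any of them that left Flask on its default host of 127.0.0.1 would show exactly the symptom described above.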
And please note again, the “wait for” implementation in this section is not mine -- I am merely reproducing the implementation quoted in the Reference Documents, Tutorials and Posts above.
I did pull the atkrad/wait4x Docker image manually before running docker-compose, but I don't think that is necessary:
docker pull atkrad/wait4x
```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.10.5-slim-buster

WORKDIR /book_keeping

COPY . .

EXPOSE 8000

RUN /usr/local/bin/python -m pip install --upgrade pip \
    && pip3 install -e . \
    && pip3 install bh_utils-1.0.0-py3-none-any.whl \
    && pip3 install bh_validator-1.0.0-py3-none-any.whl

RUN rm bh_utils-1.0.0-py3-none-any.whl \
    && rm bh_validator-1.0.0-py3-none-any.whl \
    && mv .env-docker .env

CMD [ "python", "-m", "flask", "run", "--host=0.0.0.0" ]
```
```yaml
version: "3.9"

services:
  mysql_db:
    image: mysql:8.0.30-debian
    cap_add:
      - SYS_NICE
    restart: always
    environment:
      - MYSQL_DATABASE=ompdev1
      - MYSQL_ROOT_PASSWORD=pcb.2176310315865259
    ports:
      - '3306:3306'
    volumes:
      - type: bind
        source: //e/mysql-config
        target: /etc/mysql/conf.d
      - type: volume
        source: mysqlvol
        target: /var/lib/mysql

  app:
    container_name: book-keeping
    restart: always
    build: .
    image: book-keeping
    depends_on:
      wait-for-db:
        condition: service_completed_successfully
    ports:
      - '8000:8000'
    command: flask run -h 0.0.0.0 -p 8000

  wait-for-db:
    image: atkrad/wait4x
    depends_on:
      - mysql_db
    command: tcp mysql_db:3306 -t 30s -i 250ms

volumes:
  mysqlvol:
    external: true
```
The “wait for” command is:
command: tcp mysql_db:3306 -t 30s -i 250ms
I would prefer this method to the other one: the tool seems to be actively maintained, with more than half a million downloads. And most importantly, I don't have to carry around an additional script. I don't think it adds much to the final image size either.
- Synology DS218: unsupported Docker installation and usage... -- Synology does not have Docker support for AArch64 NAS models. DS218 is an AArch64 NAS model. In this post, we're looking at how to install Docker for unsupported Synology DS218, and we're also conducting tests to prove that the installation works.
- Python: Docker image build -- install required packages via requirements.txt vs editable install. -- Install via requirements.txt means using the image build step command “RUN pip3 install -r requirements.txt”. Editable install means using the “RUN pip3 install -e .” command. I've found that installing via requirements.txt resulted in images that do not run, whereas editable install resulted in images that work as expected. I'm presenting my findings in this post.
- Python: Docker image build -- “the Werkzeug” problem 🤖! -- I've experienced the Docker image build installing a different version of the Werkzeug dependency package than the development editable install process did, which caused the Python project in the Docker image to fail to run. Development editable install means running the “pip3 install -e .” command within an active virtual environment. I'm describing the problem and how to address it in this post.
- Python: Docker image build -- save to and load from *.tar files. -- We can save Docker images to local *.tar files, and later load and run those Docker images from local *.tar files. I'm documenting my learning experimentations in this post.
- Python: Docker volumes -- where is my SQLite database file? -- The Python application in a Docker image writes some data to a SQLite database. Stop the container and run it again, and the data is no longer there! A volume must be specified when running an image to persist the data. But where is the SQLite database file, on both Windows 10 and Linux? We're discussing volumes and where volumes live on disk for both operating systems.
- Docker on Windows 10: running mysql:8.0.30-debian with a custom config file. -- Steps required to run the official mysql:8.0.30-debian image on Windows 10 with custom config file E:\mysql-config\mysql-docker.cnf.
- Docker on Windows 10: mysql:8.0.30-debian log files -- Running the Docker Official Image mysql:8.0.30-debian on my Windows 10 Pro host machine, I want to log all queries, slow queries and errors to files on the host machine. In this article, we're discussing how to go about achieving this.
- pgloader Docker: migrating from Docker & localhost MySQL to localhost PostgreSQL. -- Using the latest dimitri/pgloader Docker image build, I've migrated a Docker MySQL server 8.0.30 database, and a locally installed MySQL server 5.5 database, to locally installed PostgreSQL server 14.3 databases. I am discussing how I did it in this post.
Thank you for reading... And I hope you found this post useful. Stay safe as always.