Rene Hernandez

Posted on • Originally published at bitsofknowledge.net

Faster deployments of mysql databases in k8s

Since I started at my new job, I have been immersed in learning a whole lot of new things, including Kubernetes and the cloud. My first task, very challenging at the time, was to optimize a certain part of our CI pipeline.

What exactly? Read below to find out.

The problem #

As part of our CI pipeline, we provide our team with the ability to generate review environments with a one-click deployment step. These review environments provide a production-like environment for developers to test their changes in the main application. The main difference between the review environments and the production environment is that all the dependent systems are configured using containers running in the cluster and hold comparatively smaller datasets than production.

The problem we were facing at the time was that review environments were taking a long time to start, almost reaching 10 minutes from when the developer started the deployment up until the application was running and ready to use. The culprit? The mysql container, which was spending the bulk of that time loading a database dump on startup, leaving the application hanging until the database was ready to use.

With this problem at hand, we looked at several options to improve performance and finally settled on trying to eliminate the dump loading step as a runtime stage during the deployment.

The solution #

As mentioned above, we decided that the best approach would be to move the dump expansion out of the database initialization stage and instead process the dump during the image build. That way, we would pay the price (the time spent expanding the dump and loading it into the database container) just once.

With this idea in mind, there was one requirement that we needed to meet:

  • Updates of review environments reuse the same database pod to preserve custom data that developers may have created during their testing efforts. That meant we should only copy the expanded database on the first startup.

Custom mysql container with loaded data #

Our first attempt was to produce a custom mysql image with the dump already loaded in. After several iterations on how to achieve this, we finally settled on the multi-stage Dockerfile shown below:

FROM mysql:5.7 as builder

RUN apt-get update && apt-get upgrade -y

# The entrypoint script does the DB initialization but also runs the mysql daemon; replacing the final exec makes it initialize only
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]

ARG dump
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD

ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD MYSQL_DATABASE=$MYSQL_DATABASE MYSQL_USER=$MYSQL_USER MYSQL_PASSWORD=$MYSQL_PASSWORD

COPY ${dump} /docker-entrypoint-initdb.d

# Need to change the datadir to something other than /var/lib/mysql because the parent image's Dockerfile declares it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]

FROM mysql:5.7

COPY --from=builder ./initialized-db /var/lib/mysql/

As we can see, the first stage loads the database dump into mysql. Since we modified the entrypoint script at the start of the stage, it won't start the mysql process; it only runs the initialization step of expanding the dump. The second stage then generates the final image by copying the generated database files from the first stage into its /var/lib/mysql folder.
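
To put the pieces together, a build invocation for this image could look roughly like the sketch below. The dump path, credentials, and image tag are hypothetical placeholders, not values from our pipeline.

# All values below are placeholders; adjust the dump path, credentials and tag to your setup
docker build \
  --build-arg dump=dumps/review-seed.sql \
  --build-arg MYSQL_DATABASE=app \
  --build-arg MYSQL_USER=app \
  --build-arg MYSQL_PASSWORD=app-password \
  --build-arg MYSQL_ROOT_PASSWORD=root-password \
  -t my-registry/mysql-seeded:5.7 .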

For my initial testing with the resulting image, I used a docker-compose deployment to simulate the kubernetes environment, and the results were promising. The startup time for the mysql container had been reduced significantly, and the application was able to connect to the backend successfully. Then I tried deploying this custom mysql image with the mysql chart in kubernetes, and it failed to start the corresponding mysql pod.

The reason for this failure is a difference in how docker treats volumes versus kubernetes, which I wasn't aware of beforehand. In Docker, if the volume is empty, the data in the container (/var/lib/mysql in this scenario) is copied to the corresponding volume. Kubernetes, on the other hand, overrides whatever is in the container at the path where the Persistent Volume is mounted, so I couldn't get the mysql container up and running populated with the expanded data.
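
Docker's half of that behavior is easy to verify with a named volume; the snippet below is just an illustrative sketch, reusing the hypothetical image tag from the build example above.

# Docker copies the image's contents at /var/lib/mysql into the
# named volume the first time it is mounted while empty
docker volume create mysql-data
docker run --rm -v mysql-data:/var/lib/mysql my-registry/mysql-seeded:5.7 ls /var/lib/mysql
# The seeded database files now live in the mysql-data volume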

After several iterations and analysis, I landed on a feasible solution using init containers, which I dive into below.

Init container to load data #

Init containers run before the main application containers start in a pod, and they are usually used to perform one-off tasks required before the main application boots. Below is the Dockerfile definition for the image that will run as the init container.

FROM mysql:5.7 as builder

RUN apt-get update && apt-get upgrade -y

# The entrypoint script does the DB initialization but also runs the mysql daemon; replacing the final exec makes it initialize only
RUN ["sed", "-i", "s/exec \"$@\"/echo \"not running $@\"/", "/usr/local/bin/docker-entrypoint.sh"]

ARG dump
ARG MYSQL_DATABASE
ARG MYSQL_USER
ARG MYSQL_PASSWORD
ARG MYSQL_ROOT_PASSWORD

ENV MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD MYSQL_DATABASE=$MYSQL_DATABASE MYSQL_USER=$MYSQL_USER MYSQL_PASSWORD=$MYSQL_PASSWORD

COPY ${dump} /docker-entrypoint-initdb.d

# Need to change the datadir to something other than /var/lib/mysql because the parent image's Dockerfile declares it as a volume.
# https://docs.docker.com/engine/reference/builder/#volume :
# Changing the volume from within the Dockerfile: If any build steps change the data within the volume after
# it has been declared, those changes will be discarded.
RUN ["/usr/local/bin/docker-entrypoint.sh", "mysqld", "--datadir", "/initialized-db"]

FROM alpine:3.7

RUN apk add --no-cache bash

COPY --from=builder ./initialized-db /mysql_data

WORKDIR /script

COPY scripts/copy_data_to_volume.sh ./

CMD ["bash", "copy_data_to_volume.sh"]

The main differences between this Dockerfile and the previous one are:

  • Instead of creating a mysql image, it creates an alpine image that runs a bash script on startup.
  • It copies the mysql files produced by expanding the dump into a /mysql_data folder, instead of the mysql data location used in the mysql container.

The copy_data_to_volume.sh script, as shown below, takes care of copying the data from the /mysql_data folder to a destination folder called /initialized-db, as long as the .data-initialized flag file is not present. This /initialized-db folder should be mounted as the data volume in the pod where the init container is running.

#!/bin/bash

if [[ -f "/initialized-db/.data-initialized" ]]; then
  echo "DATABASE already initialized. Nothing else left to do"
  exit 0
fi

echo "Copying seeded database (one-time operation)"
cp -a /mysql_data/. /initialized-db

echo "Create file to mark first-time initialization completed"
touch /initialized-db/.data-initialized
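
A quick way to sanity-check the init image locally is to run it twice against the same volume and confirm the second run exits early; the image name here is a hypothetical placeholder.

docker volume create seed-test
# First run copies the seeded data and creates the flag file
docker run --rm -v seed-test:/initialized-db my-registry/mysql-seed-init:latest
# Second run should print "DATABASE already initialized..." and exit immediately
docker run --rm -v seed-test:/initialized-db my-registry/mysql-seed-init:latest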

The mysql chart supports specifying init containers through the extraInitContainers field, as shown in the following code section. As I mentioned above, we are mounting the data volume at the /initialized-db folder within the init container, and this is what makes the process work: that same data volume will be mounted by the main mysql container, which will then have the data already present on startup.

mysql:
  image: mysql
  imageTag: 5.7
  extraInitContainers: >-
    - name: seed-database
      image: 
      volumeMounts:
        - name: data
          mountPath: "/initialized-db"
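
With that snippet saved to a values file, rolling out a review environment could look roughly like the following; the release, chart, and file names are placeholders, and I'm assuming the mysql chart is a dependency of an umbrella application chart (hence the values nested under the mysql key).

# Hypothetical names; review-values.yaml contains the values snippet above
helm upgrade --install review-123 ./app-chart -f review-values.yaml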

Results #

After deploying the new version of the helm chart supporting this feature, we saw a reduction in startup time from around 10 minutes to less than 3 minutes, which means we gave developers back 7 minutes of productivity :).

Conclusions #

To sum up, we discussed how to improve the startup performance of the mysql chart deployment by off-loading the process of expanding the sql dumps into the database to a dedicated data image. We went through two solution attempts, the first one being the deployment of a mysql container with the dump already expanded. That solution wasn't feasible in a kubernetes environment, so we moved to the second attempt, which reused the same idea of expanding the dump at build time but deployed it as an init container through the mysql helm chart.

To conclude, thank you so much for reading this post. I hope you enjoyed reading it as much as I enjoyed writing it. See you soon and stay tuned for more!!
