TL;DR
This article provides a detailed guide on validating and testing Docker Compose setups using GitLab CI. The focus here is not on testing individual services (there are other tools for that) but rather on ensuring the functionality of the entire project itself. If you're working with microservices managed via Docker Compose and need a dependable approach to validate your configuration within a CI/CD pipeline, then this is for you.
You can find the final code in the GitLab repository.
Table Of Contents
- What It Is All About
- Preparing Docker Compose
- Validating Docker Compose
- Testing Docker Compose
- Optimizing Pipeline Execution
- Summary
What It Is All About
Once upon a time, I needed to structure one of the projects I'm working on in such a way that any developer could easily set it up locally and configure their own project to integrate with it. This repository was logically named Integration Kit, with a Docker Compose file at its core. Whenever microservices in the project were updated, the changes were automatically reflected in this Docker Compose setup.
I may cover the specifics of how this repository is organized in another post, but the most challenging part was applying the Keycloak configuration and running data migrations for that environment. The main problem was that it was too easy to forget to update the migration data when one of the services was updated. That's why I decided to set up continuous testing and validation to ensure that all services would start successfully after the data migration process.
The main goals were:
- Validate the Docker Compose setup to ensure it's error-free and doesn't contain vulnerabilities.
- Verify that all services start correctly after the data migration.
- Perform these checks periodically and alert on any issues.
To demonstrate this strategy, I created a small repository on GitLab, and I’ll explain the steps using this example.
Accordingly, I’m using GitLab CI as the CI platform, though I believe this approach can be adapted to other CI/CD platforms as well (if you'd like me to provide examples for other systems, let me know in the comments).
Preparing Docker Compose
The project will include two Compose files:
- One will define all the project’s services.
- The other will define a service specifically for data migration.
For demonstration purposes, I'll create "dummy" services that simulate the behavior of real ones. For instance, the `sleep` command in the migration service will emulate the time it would take to run a real data migration process.
```yaml
# docker-compose.yml
services:
  api:
    image: thomaspoignant/hello-world-rest-json:latest
    ports:
      - '8000:8080'
  db:
    image: postgres:17-alpine3.20
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: ./envs/.env.postgres
  frontend:
    image: lipanski/docker-static-website:2.4.0
    volumes:
      - ./configurations/frontend/index.html:/home/static/index.html
    ports:
      - '8080:3000'

volumes:
  postgres_data:
    driver: local
```
```yaml
# docker-compose.migration.yml
services:
  data-importer:
    image: alpine:3.19
    volumes:
      - ./configurations/service/fake_migration_data.json:/tmp/data.json
    command: sh -c "cat /tmp/data.json && touch /tmp/flag && sleep 10 && echo \"Migration passed\""
```
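Before wiring anything into CI, it helps to sanity-check the stack locally. Here's a minimal smoke test, assuming the port mappings above (host port 8000 for the API, 8080 for the frontend):

```bash
# Start the stack in the background
docker compose up -d

# The API answers on the mapped host port 8000
curl -s http://localhost:8000/

# The static frontend is served on host port 8080
curl -s http://localhost:8080/

# Clean up
docker compose down
```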
Dependencies
In real projects, some services always depend on others and need to wait for them to be fully up and running before starting. Docker Compose provides the `depends_on` instruction to manage the order of service startup.
For example:
- Data migrations must wait until the database is ready.
- The frontend depends on the API being up and running.
To achieve this, each service needs a properly configured health check, and the `depends_on` option should specify that a service will only proceed once its dependencies are healthy.

> With the `depends_on` attribute, you can control the order of service startup and shutdown. It is useful if services are closely coupled, and the startup sequence impacts the application's functionality. `service_healthy` specifies that a dependency is expected to be "healthy" (as indicated by `healthcheck`) before starting a dependent service.
Here’s an example of how this can be set up:
```yaml
# docker-compose.yml
services:
  api:
    ...
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost:8080/ || exit 1"] # or any other way to test that it is working
      interval: 10s
      timeout: 5s
      retries: 3
    depends_on:
      db:
        condition: service_healthy
  db:
    ...
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U exampleuser -d exampledb"]
      interval: 10s
      timeout: 5s
      retries: 5
  frontend:
    ...
    depends_on:
      api:
        condition: service_healthy
```
View docker-compose.yml on GitLab
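To see what a health check actually reports for a running service, you can inspect its container locally (a quick sketch; `db` matches the service name above):

```bash
# Print the health status (starting, healthy, or unhealthy) of the db container
docker inspect --format '{{.State.Health.Status}}' "$(docker compose ps -q db)"
```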
```yaml
# docker-compose.migration.yml
services:
  data-importer:
    ...
    depends_on:
      db:
        condition: service_healthy
```
View docker-compose.migration.yml on GitLab
Data Migration
To execute the data migration process, combine both Compose files and run the migration service. The command looks like this:
```bash
docker compose -f docker-compose.yml -f docker-compose.migration.yml run --rm data-importer
```
Here, the command starts the migration service, which uses `depends_on` to wait until the database is ready. Once the migration is complete, the service outputs: "Migration passed."
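If repeating the `-f` flags gets tedious, Docker Compose also reads the `COMPOSE_FILE` environment variable, so the same merge can be expressed once (a sketch using the Linux/macOS `:` path separator):

```bash
# Equivalent to passing both -f flags on every invocation
export COMPOSE_FILE=docker-compose.yml:docker-compose.migration.yml
docker compose run --rm data-importer
```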
Pro Tip
If you're using custom GitLab Runners, you might encounter issues with volumes because of relative paths, depending on the runner configuration. To avoid this, you can define volumes in your Docker Compose file like this:
```yaml
service-name:
  volumes:
    - ${DOCKER_MOUNT_POINT}/path/to/config.ext:/container/path/to/config.ext
```
Then, for your CI configuration, specify the full path to your project files:
```yaml
variables:
  DOCKER_MOUNT_POINT: /builds/$CI_PROJECT_PATH
```
For local development, you can create a `.env` file with the following content:

```
DOCKER_MOUNT_POINT=.
```
This ensures compatibility between local and CI environments without changing the Compose file.
Validating Docker Compose
It's time to set up validations in the CI pipeline. To do this, I use the `docker compose config` command and DCLint.
Docker Compose Config Command
> `docker compose config` renders the actual data model to be applied on the Docker Engine. It merges the Compose files set by the `-f` flags, resolves variables in the Compose file, and expands short notations into the canonical format.
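You can run the same check locally before pushing (a quick sketch using the files from this project):

```bash
# Print the fully merged and resolved configuration
docker compose -f docker-compose.yml -f docker-compose.migration.yml config

# Or validate quietly; a non-zero exit code means the configuration is invalid
docker compose -f docker-compose.yml -f docker-compose.migration.yml config -q
```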
This job will validate the combined Docker Compose files to ensure they are syntactically correct and error-free.
```yaml
codequality.compose-config:
  stage: codequality
  image: docker:25.0-git
  needs: []
  script:
    - docker compose -f docker-compose.yml -f docker-compose.migration.yml config -q
```
DCLint
Docker Compose Linter (DCLint) is a utility designed to analyze, validate, and fix Docker Compose files. It helps identify errors, style violations, and potential issues in Docker Compose files, ensuring your configurations are robust, maintainable, and free from common pitfalls.
This job will use DCLint for validation and generate a code quality report for GitLab.
```yaml
codequality.dclint:
  stage: codequality
  image:
    name: zavoloklom/dclint:alpine
    entrypoint:
      - ''
  needs: []
  script:
    - /bin/dclint . -r -f codeclimate -o gl-codequality-dclint.json
  artifacts:
    when: always
    paths:
      - gl-codequality-dclint.json
    reports:
      codequality: gl-codequality-dclint.json
    expire_in: 1 month
```
Although DCLint can validate the syntax of Compose files against a schema, it does not support validation of merged Compose files (as of December 2024). For this reason, it's better to use both tools. However, if your project only has a single Compose file, you can skip the `docker compose config` command.
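DCLint can also be run locally through its Docker image before committing (a sketch based on the image's documented usage; adjust the mounted path if your Compose files live elsewhere):

```bash
# Lint all Compose files in the current directory
docker run -t --rm -v "$(pwd):/app" zavoloklom/dclint .
```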
Testing Docker Compose
During the tests stage, the idea is to run `docker compose up` to ensure all containers start and run as expected. But it's not that simple, so let's break the testing process down into steps:
- **Run Data Migration**

  ```bash
  docker compose -f docker-compose.yml -f docker-compose.migration.yml run --rm data-importer
  ```

  Execute the migration service to prepare the environment.

- **Pull Images Separately**

  ```bash
  docker compose pull -q
  ```

  Download all required images to avoid hitting any rate limits during `docker compose up`.

- **Build Custom Images (if needed)**

  ```bash
  docker compose build -q
  ```

  If your project has custom images, build them in a separate step to ensure resources are allocated efficiently and avoid potential rate limits.

- **Start the Project**

  ```bash
  docker compose up -d
  ```

  Use `docker compose up` to start all services.

- **Wait for Stability**

  ```bash
  sleep 10
  ```

  Wait at least 10 seconds (or more) to ensure that the services not only start but also remain stable after running for a short period (an alternative based on health checks is sketched after this list).

- **Verify Service Statuses**

  ```bash
  if docker compose ps --all --filter status=exited 2>/dev/null | grep -q "Exit"; then echo "Error: Not all containers are running" >&2; docker compose ps --all; docker compose down; exit 1; else echo "All containers are running."; fi
  ```

  Check that no containers have exited or are in an error state.

- **Shut Everything Down**

  ```bash
  docker compose down
  ```

  Bring the project down cleanly to release resources and leave the environment ready for the next run.
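As an alternative to the `sleep` plus `ps` check, newer Docker Compose versions support a `--wait` flag that blocks until all services with health checks report healthy and exits non-zero otherwise (a sketch; verify that your Compose version supports it):

```bash
# Start services and wait until they are healthy; fails if any container stops or stays unhealthy
docker compose up -d --wait
```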
This testing strategy ensures not just the startup of your services but also their short-term stability, giving confidence in the deployment setup.
CI Pipeline
Here's the GitLab CI pipeline with these steps:
```yaml
tests.compose:
  stage: tests
  image: docker:25.0-git
  needs: []
  services:
    - docker:25.0-dind
  script:
    - docker compose -f docker-compose.yml -f docker-compose.migration.yml run --rm data-importer
    - docker compose pull -q
    - docker compose build -q
    - docker compose up -d
    - sleep 10
    - |
      if docker compose ps --all --filter status=exited 2>/dev/null | grep -q "Exit"; then echo "Error: Not all containers are running" >&2; docker compose ps --all; docker compose down; exit 1; else echo "All containers are running."; fi
    - docker compose down
```
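Depending on how your runners are configured, the job may also need to be told where the Docker-in-Docker daemon lives. These are the variables GitLab's documentation commonly uses for TLS-enabled DinD (treat them as a starting point rather than a given, since many runners already inject suitable defaults):

```yaml
tests.compose:
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: '/certs'
    DOCKER_TLS_VERIFY: '1'
    DOCKER_CERT_PATH: '/certs/client'
```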
Optimizing Pipeline Execution
Everything needed for testing is now in place, but there are opportunities to improve the pipeline further.
Skip Stages
You can configure the pipeline to skip certain stages based on environment variables. For instance, if the `SKIP_CODEQUALITY` variable is set to `true`, skip the code quality checks:
```yaml
codequality.compose-config:
  rules:
    - if: $SKIP_CODEQUALITY == "true"
      when: never
    - when: on_success
```
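Such variables can be passed when starting a pipeline through the trigger API, for example (a hypothetical invocation; `<project-id>` and the trigger token are placeholders):

```bash
# Trigger a pipeline on main with code quality checks skipped
curl -X POST \
  -F "token=$TRIGGER_TOKEN" \
  -F "ref=main" \
  -F "variables[SKIP_CODEQUALITY]=true" \
  "https://gitlab.com/api/v4/projects/<project-id>/trigger/pipeline"
```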
You can also configure the tests to run manually for merge requests, since they can be time-consuming and aren't always necessary before merging. Or you can trigger the tests only when the Compose files change:
```yaml
tests.compose:
  stage: tests
  image: docker:25.0-git
  rules:
    - if: $SKIP_TESTS == "true"
      when: never
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      # Option 1
      changes:
        - docker-compose*.yml
      # Option 2
      when: manual
      allow_failure: true
    - when: on_success
```
Scheduled Pipelines
Daily checks can proactively catch potential issues caused by external changes, such as updated dependencies or environment changes, before they impact your systems. Here's how to do it with Pipeline Schedules:
- Navigate to CI/CD → Pipeline schedules in your project.
- Create a schedule with the desired frequency. For daily checks at midnight, use the cron pattern `0 0 * * *`. (See GitLab's cron documentation for more details.)
- Add environment variables to skip unnecessary steps, such as `SKIP_CODEQUALITY=true`. This skips the codequality stage because there won't be any syntax changes in your `docker-compose.yml`.
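The same schedule can also be created through the GitLab API instead of the UI (a hedged sketch; `<project-id>` and `$GITLAB_TOKEN` are placeholders you'd supply yourself):

```bash
# Create a daily pipeline schedule running at midnight on main
curl -X POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --data-urlencode "description=Nightly Compose check" \
  --data-urlencode "ref=main" \
  --data-urlencode "cron=0 0 * * *" \
  "https://gitlab.com/api/v4/projects/<project-id>/pipeline_schedules"
```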
Notifications About Issues
By default, if a pipeline fails, GitLab sends an email notification. However, if you already use another system for alerts (like we do with Slack), you can add a job to the pipeline for sending custom notifications.
Here's an example of such a job:
```yaml
alerts.notification:
  stage: alert
  image: alpine:3.19
  script:
    - echo "Pipeline Failed. Sending Message."
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "trigger"
      when: on_failure
    - when: never
```
The job runs only when the pipeline was triggered by a schedule or an external trigger (e.g., API call) and it executes only if something fails in the pipeline.
You can replace the `echo` command with a script that sends a webhook to your preferred notification platform, such as Slack, Microsoft Teams, or any other service.
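As an illustration, here's how the job might look with a Slack incoming webhook (a sketch; `SLACK_WEBHOOK_URL` is an assumed CI/CD variable you'd define yourself):

```yaml
alerts.notification:
  stage: alert
  image: alpine:3.19
  script:
    - apk add --no-cache curl
    # SLACK_WEBHOOK_URL is an assumed CI/CD variable pointing to a Slack incoming webhook
    - >-
      curl -X POST -H 'Content-Type: application/json'
      -d "{\"text\": \"Pipeline failed in $CI_PROJECT_PATH: $CI_PIPELINE_URL\"}"
      "$SLACK_WEBHOOK_URL"
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule" || $CI_PIPELINE_SOURCE == "trigger"
      when: on_failure
    - when: never
```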
This ensures you are promptly informed about pipeline failures, allowing for quick responses to issues.
Summary
Thanks to this approach, I was able to ensure the stability of the Integration Kit, reduce the number of developer complaints about issues during setup, and quickly respond to failures related to data migrations. This automated solution not only saves time but also boosts confidence in the reliability of the entire system.
The final pipeline and example configurations can be found in the GitLab repository. If you have questions about adapting this strategy to other CI/CD platforms, feel free to reach out or leave a comment!
If you liked this article, you can support me via PayPal or Buy Me a Coffee, or follow me on social media.