Docker Compose is great! But sometimes it can be useful to unravel a docker-compose.yml file and run each container individually.

We run a hosting platform for containers but do not support Docker Compose yet. We get lots of inquiries about how to deploy Docker Compose setups, so let me walk you through the process of deconstructing a Docker Compose file so you can run each container individually.
In a nutshell: running a Docker Compose setup as standalone containers requires you to first identify all services that are defined and all resources that each individual service uses, such as volumes, networks, and environment settings. Start by recreating these resources individually, and once everything is set up, derive `docker build ...` and `docker run ...` commands for each service.

If you deploy to the cloud and use a platform like Sliplane, it's usually enough to just provide the config for each service.

Let's take a closer look at the process.
Basic Docker Concepts
Before we get started, you need to understand the most important Docker concepts. If you are just starting with Docker, I recommend you check out the official Docker getting started guide, and especially try to understand:
- Images
- Containers
- Port mapping
- Persisting data in volumes
- Multi-container setups and networks and
- Configuring containers with environment variables.
Docker is a lot more than that, but if you understand these basic concepts, it will be much easier to read and understand Docker Compose files, and you will be able to get most setups running!
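These concepts map directly onto `docker run` flags. Here is a minimal sketch; the image, container name, volume name, and mount path are illustrative:

```shell
# Run a container (an instance of an image) using the
# concepts above: a port mapping, a named volume for
# persistent data, and an environment variable
docker run -d \
  --name demo-nginx \
  -p 8080:80 \
  -v demo-data:/usr/share/nginx/html \
  -e TZ=UTC \
  nginx:alpine
```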
How to run services from a docker-compose.yml file individually
Let's look at an example Docker Compose file, deconstruct it, and run the setup as individual containers:
```yaml
version: "3.9" # Specify the version of Docker Compose being used
services: # Define the services (containers) that make up your app
  app: # Service name: app
    image: my-app:latest # The image to use for the app service (latest version)
    container_name: my_app_container # Custom name for the container
    build:
      context: ./app # The build context (directory for the Dockerfile)
      dockerfile: Dockerfile # The Dockerfile to build the image from
      args:
        - APP_ENV=production # Build argument for setting environment variable
    environment:
      - APP_ENV=production # Set the environment variable in the container
      - APP_SECRET=mysecret # Another environment variable for the app
    ports:
      - "8080:80" # Map port 8080 on the host to port 80 in the container
    volumes:
      - app-data:/data # Volume mount for persistent app data
    networks:
      - shared-network # Connect the app to a shared network
    depends_on:
      - db # Ensure the db service starts before the app service
    restart: on-failure # Restart the app if it fails
    labels:
      com.example.description: "My web app" # Custom label to describe the app
  db: # Service name: db
    image: postgres:16 # Use the official PostgreSQL 16 image
    container_name: my_db_container # Custom name for the database container
    environment:
      POSTGRES_PASSWORD: password # Set the PostgreSQL password
    volumes:
      - db-data:/var/lib/postgresql/data # Volume mount for database data persistence
    networks:
      - shared-network # Connect the db to the shared network
    ports:
      - "5432:5432" # Map PostgreSQL default port 5432 from container to host
volumes:
  app-data: # Named volume for app data persistence
  db-data: # Named volume for database data persistence
networks:
  shared-network: # Define a shared network for both services to communicate
```
The structure of Docker Compose files is described in the official compose file reference. In this example, there are 4 top-level options: `version`, `services`, `volumes`, and `networks`.
`version` just describes the version of Docker Compose that is used; we won't need it in this tutorial.

The second top-level option, `services`, is the most important here. It describes the configuration for each container. In the compose file above there are 2 services, `app` and `db`, each represented as a key within the `services` block.
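If you have the Docker Compose CLI installed, it can help with this inventory step. Run these from the directory containing the docker-compose.yml:

```shell
# Print the service names defined in the compose file
docker compose config --services

# Print the named volumes the services reference
docker compose config --volumes
```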
But before we can run each service individually, we need to do some prep work.
You can see that both service blocks contain a `volumes` and a `networks` section. The `app` service, for example, uses a volume `app-data`, which will be mounted to `/data`, and a network called `shared-network`. The `db` service uses a volume called `db-data`, which will be mounted to `/var/lib/postgresql/data`.

All volumes and networks that can be referenced by services are described in blocks 3 and 4 of the compose file. Before we can run our containers, we need to set up the two volumes `app-data` and `db-data` and our network `shared-network`.
```shell
# create the shared network that is used to connect containers
docker network create shared-network

# create volumes for the containers to persist data
docker volume create app-data
docker volume create db-data
```
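As a quick sanity check, you can confirm that the resources exist before moving on; each command prints the resource name and exits non-zero if the resource is missing:

```shell
# Inspect the network and the two volumes by name
docker network inspect shared-network --format '{{.Name}}'
docker volume inspect app-data --format '{{.Name}}'
docker volume inspect db-data --format '{{.Name}}'
```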
Now that everything is in place, we can craft a run command for the first container. In the compose file you can see that `app` `depends_on` `db`, which means we need to run the `db` container first:
```shell
# The environment section translates to -e flags (or --env-file),
# -v mounts the db-data volume, --network attaches the container
# to the shared network, -p maps host port 5432 to container
# port 5432, and -d runs the container in detached mode
docker run -d \
  -e POSTGRES_PASSWORD=password \
  -v db-data:/var/lib/postgresql/data \
  --network shared-network \
  -p 5432:5432 \
  postgres:16
```
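One thing to watch out for: under Compose, the app would reach the database at the hostname `db` (the service name). With plain `docker run`, containers on a user-defined network resolve each other by container name or network alias instead. If your app connects to the hostname `db`, one way to preserve that is the `--network-alias` flag (which requires `--network`); a sketch:

```shell
# Give the database container an extra DNS name on the
# shared network so the app can still reach it as "db"
docker run -d \
  --network shared-network \
  --network-alias db \
  -e POSTGRES_PASSWORD=password \
  -v db-data:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16
```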
Note that I intentionally did not use all settings that were described in the docker-compose.yml file above. In this case, I simply skipped the `container_name` option. It would be possible to use the `--name my_db_container` flag to name the container. However, I wanted to show that Docker Compose setups can often be run without copying every option. It's important to distinguish required settings from optional ones, and depending on your use case, a simpler version of the compose setup might be sufficient.

At the same time, I introduced a new flag, `-d`, which is not explicitly described in the compose file. Running containers individually gives you some flexibility. You could also define environment variables in a `.env` file, for example, and use the `--env-file` flag in your run command to access them.
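As a sketch of that `--env-file` approach (`db.env` is an illustrative file name):

```shell
# Write the variables to a file, one KEY=value per line
cat > db.env <<'EOF'
POSTGRES_PASSWORD=password
EOF

# Pass the whole file instead of individual -e flags
docker run -d \
  --env-file db.env \
  -v db-data:/var/lib/postgresql/data \
  --network shared-network \
  -p 5432:5432 \
  postgres:16
```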
Let's look at the `app` service next. The service contains a `build` block in the compose file. This means that we might need to build our image first before we can run a container from it.
```shell
# --build-arg passes build-time configuration, -t tags the
# image, and ./app is the build context (the directory that
# acts as the root for the build)
docker build \
  --build-arg APP_ENV=production \
  -t my-app:latest \
  ./app
```
If your build succeeded, you can run the app image with:
```shell
# --name sets a custom container name, -p maps the port,
# -e sets environment variables, -v mounts the app-data
# volume for persistent data, --network connects to the
# shared network, --restart sets the restart policy, and
# --label adds a custom label
docker run -d \
  --name my_app_container \
  -p 8080:80 \
  -e APP_ENV=production \
  -e APP_SECRET=mysecret \
  -v app-data:/data \
  --network shared-network \
  --restart on-failure \
  --label com.example.description="My web app" \
  my-app:latest
```
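Once both containers are running, you can verify that they are up and attached to the same network:

```shell
# List running containers attached to the shared network
docker ps --filter network=shared-network

# Show which containers the network itself sees
docker network inspect shared-network \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```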
Again, you could get this app running with a different restart policy or without the label, for example.
The most important settings to consider, when trying to get a compose setup running are:
- port mappings
- volume mounts
- networks and
- environment variables
Other settings are possible, but often not strictly required.
Summary
Deconstructing a Docker Compose file and running containers individually involves identifying the services, networks, volumes, and environment settings defined within the compose file.
After setting up these Docker objects, you can derive the necessary `docker build` and `docker run` commands for each service.
While some options can be omitted, key elements such as port mappings, volume mounts, and environment variables are critical to replicate the compose setup.
Check out Sliplane for a simple way to deploy docker containers.
Top comments

> Nice, what about lifecycle hooks?

> If you are talking about `post_start` and `pre_stop`, there is no CLI equivalent, unfortunately.