1. The Problem
This morning, I had a problem: my server, a Raspberry Pi 2 bought in 2016, would no longer boot. I used this old machine to host many services (WireGuard, Nextcloud, Bitwarden, an OPDS server, etc.).
After a few unsuccessful attempts, I gave up on the idea of repairing it and decided to use another one of my servers instead. I unplugged the external hard drive from my Raspberry Pi and plugged it into my other server.
And within 10 minutes, all the services on the new machine were running just like they did on the Raspberry Pi, with all the data intact. How did I do that?
Simply:
$ docker compose up -d
2. How I Started
I've been an almost exclusively Linux user on my laptop for about fifteen years now, starting with Ubuntu/Debian and then Arch, just to show off a bit. But I'm not an advanced user—I mostly do pretty basic stuff. I only use the shell when I have an issue or for my work as a JavaScript developer.
Of course, like many, I bought a few Raspberry Pis back in the day, but it was mostly a toy. Even though I did manage to install a private Git server and stream my music library using MPD.
But it's only recently that I started systematically installing a lot of services: Nextcloud, PhotoPrism, Bitwarden, and many more...
Installing each of these services by hand can be long and tedious; take Nextcloud, for example. Even if everything goes well, it won't take just 10 minutes :)
And after that, you'll still need to retrieve the old data. For Nextcloud, for example:
$ cp -r /mnt/storage/nextcloud/var/www/html /new_nextcloud_dir
Then you’ll need to redo the post-installation configuration, create accounts, and repeat this process for every service, each with its own way of working.
Docker and Docker Compose greatly simplify this process.
3. How It Works
On my old Raspberry Pi, I had many directories on my external drive like these:
old@server:/mnt/seagate$ ls
nextcloud/  photoprism/  bitwarden/  wireguard/  odps/
All these directories are structured more or less the same way. For example, Nextcloud:
old@server:/mnt/seagate/nextcloud$ ls
data db docker-compose.yml nextcloud.sql redis
The file we're going to modify here is docker-compose.yml. The other files and directories are generated by the containers and hold the data we produced while using the service.
In general, I don't write the docker-compose.yml files myself. Most projects have one, or if not, you can usually find someone who has made one.
# docker-compose.yml file
services:
  nc:
    image: nextcloud:apache
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_PASSWORD=nextcloud
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - REDIS_HOST=redis
    ports:
      - 4080:80
    restart: always
    volumes:
      - ./data:/var/www/html # I only modify these lines
    depends_on:
      - redis
      - db
  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=nextcloud
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
    restart: always
    volumes:
      - ./db:/var/lib/postgresql/data # I only modify these lines
    expose:
      - 5432
  redis:
    image: redis:alpine
    restart: always
    volumes:
      - ./redis:/data # I only modify these lines
    expose:
      - 6379
Docker containers are isolated processes that share the host operating system’s kernel but run in compartmentalized environments (called "namespaces") that separate them from the rest of the system and from other containers. In terms of communication with the outside world, containers can be configured to interact via networks, but by default, they are isolated.
Regarding data persistence, by default, a Docker container does not retain data once stopped or deleted because anything written to its internal filesystem is ephemeral. To persist data beyond the lifecycle of a container, you must explicitly mount volumes or directories. This allows you to save data in the host’s filesystem or on external storage.
When defining a volume or directory mount, the left-hand path (in Docker syntax) specifies the location of files on the host machine, and the right-hand path indicates where these files will be accessible inside the container.
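To make the mapping concrete, here is the Nextcloud volume line again, annotated (the paths are from my setup; yours may differ):

```yaml
services:
  nc:
    volumes:
      # host path (left side) : path inside the container (right side)
      # "./data" sits next to docker-compose.yml on the external drive;
      # the container reads and writes it as /var/www/html
      - ./data:/var/www/html
```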
The modification I make simply tells Docker that the data will always be in the current directory, which is itself located on my external drive.
This keeps everything needed to run our services in the same place on my external drive: configuration and data.
This clear separation between data and application is what I truly appreciate about Docker.
It’s what allows me to unplug this hard drive and mount it elsewhere. If Docker Compose is already installed on the machine, I just do:
new@server:$ cd /mnt/seagate/nextcloud
new@server:/mnt/seagate/nextcloud$ docker compose up -d
new@server:$ cd /mnt/seagate/bitwarden
new@server:/mnt/seagate/bitwarden$ docker compose up -d
new@server:$ cd /mnt/seagate/wireguard
new@server:/mnt/seagate/wireguard$ docker compose up -d
And that’s it! You find your setup exactly as you left it. I really mean everything: the database connections, users, the latest changes—it all feels like you never switched machines! Even the Redis cache is restored to the state it was in the last time the service ran on my Pi.
I find this really amazing. The hype around Docker was definitely not exaggerated. I even dream that all the software I use could work this way, even on my laptop.
For the record, since I’m too lazy to go into each directory manually, I wrote a little script to do it for me:
# Find all directories that are exactly one level deep and contain a docker-compose.yml file
for dir in $(find . -mindepth 2 -maxdepth 2 -type f -name "docker-compose.yml" -exec dirname {} \;); do
    echo "Entering directory: $dir"
    # Run in a subshell so we return to the starting directory for the next iteration
    (
        cd "$dir" || exit
        # Start docker compose in the current directory
        docker compose up -d
    )
done
However, it’s worth noting that this isn’t always necessary at startup if you’ve selected the restart: always option in your Docker Compose configuration. The Docker daemon itself takes care of reviving all services when the server starts.
4. Updating is Even Simpler
I didn't mention it earlier, but the first time you run docker compose up -d, it is roughly equivalent to three commands:
$ docker compose pull
$ docker compose build
$ docker compose start
In subsequent runs, it's roughly equivalent to just a start.
If you want to update a container, you first need to change the version number in the image tag, or, if you like living dangerously, you can use the latest tag next to your image name, so it always fetches the newest image.
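As a sketch, the two options look like this in the compose file (the pinned tag below is only an illustration; check the image's registry page for tags that actually exist):

```yaml
services:
  nc:
    # Pinned: the container only changes when you bump this tag yourself
    image: nextcloud:28-apache
    # Floating alternative: "docker compose pull" always fetches the newest image
    # image: nextcloud:latest
```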
To update:
$ docker compose pull
$ docker compose up -d # It restarts the container only if `docker pull` found a newer image.
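Since this pull-then-up dance is the same for every service, it can be scripted much like the startup loop from the previous section. Here is a small sketch; the /mnt/seagate path is from my setup, and the function only prints the commands so you can review them before piping the output to sh:

```shell
#!/bin/sh
# Print the update commands for every compose project directly under $1.
# Pipe the output to sh to actually run them.
update_all() {
    root="$1"
    for dir in "$root"/*/; do
        # Skip directories that have no compose file
        [ -f "${dir}docker-compose.yml" ] || continue
        echo "cd $dir && docker compose pull && docker compose up -d"
    done
}

update_all /mnt/seagate
```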
5. Conclusion
Honestly, I don’t know what you think, but I find this approach simple, elegant, and ultra-convenient.
There are thousands of Docker images available, and the principle is always the same.
WordPress? Docker Compose. A CMS? Docker Compose. Video or audio streaming? Docker Compose.
So, next time you think you might need a SaaS solution, try this little reflex: search for "my problem self-hosted" in your favorite search engine.
If you find something that fits your needs, look for the docker-compose.yml, make the necessary modifications for the volumes, and the world is yours!
I recommend this site, which lists an incredible number of services that can be installed simply with docker compose.
We’ll discuss later how to access these services; most of the time, I prefer using a VPN, as it avoids exposing my services on the internet. Of course, WireGuard installs in a snap with its own docker compose (though you’ll need to forward UDP ports from your router to your instance). You even get a nice web interface with authentication and the ability to generate a QR code for each client as a bonus.
Or, if you’re using a VPS, I can show you how to associate each service with a domain. It’s really simple, although it wasn’t for me until late 2022.
Until then, see you soon, and thanks for reading this far.