You know how I love Docker and try to move everything inside containers. But a couple of days ago I got a system warning that I had about 1 GB left on the root partition. That was a surprise. At the time I was in a Zoom call, and the first thing that came to my mind was "eh, must be Zoom eating up space". But in the span of 30 minutes that 1 GB turned into 100 MB. So I panicked a little, turned everything off, rebooted, and got angry that there was still less than 100 MB free on /.
Imagine my amazement when I discovered that /var/lib/docker/ was eating around 20 GB! What could it be? Turns out it was... logs! Tons of logs from all the containers I have. And I only have around 10 running daily, plus 5 more that I spin up when I need them, such as Elastic, Kafka, and Logstash.
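If you want to hunt down the culprit yourself, Docker can report its own disk usage, and a quick du over the log directory shows which containers are the worst offenders. A minimal sketch, assuming the standard Linux log location:
# Overall Docker disk usage across images, containers, and volumes
docker system df
# The five largest container log files (sh -c is needed so the glob expands as root)
sudo sh -c "du -h /var/lib/docker/containers/*/*-json.log" | sort -h | tail -n 5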
Thankfully, the solution was quite easy:
sudo sh -c "truncate -s 0 /var/lib/docker/containers/*/*-json.log"
This empties all the logs (truncating rather than deleting them, which matters: running containers keep these files open, so simply deleting them wouldn't free the space). That alone freed up 16 GB! My-my...
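To confirm the effect, you can check the root partition and the total log size before and after, assuming the same standard log location:
# Free space on the root partition
df -h /
# Grand total of all json-file container logs
sudo sh -c "du -ch /var/lib/docker/containers/*/*-json.log" | tail -n 1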
But I don't want to run this command with Crontab. Or, even worse, run it manually from time to time. There must be a way to set a Docker logging policy, I thought. And there sure is. The official documentation explains how to configure logging and set the maximum number of log files and their size. So now my /etc/docker/daemon.json looks like this:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "1",
    "env": "os,customer"
  }
}
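The daemon only picks this file up after a restart; on a systemd-based Linux that would be:
sudo systemctl restart docker
You can also override the policy for a single container at creation time with the --log-opt flag (nginx is just an example image here):
docker run -d --log-opt max-size=100m --log-opt max-file=1 nginx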
On macOS it's even easier: go to Docker Desktop -> Preferences -> Docker Engine and edit the configuration there.
Meaning I'll have only one log file, capped at 100 MB. Which I think should suffice for the logs.
But this will not affect already-created containers! You'll need to recreate every single one in order for them to respect the new logging config.
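With Docker Compose, recreating everything is a one-liner; a standalone container has to be removed and run again with its original options (my-nginx is a placeholder name):
# Compose: recreate all services so they pick up the new logging policy
docker compose up -d --force-recreate
# Standalone: remove and re-run the container, repeating its original flags
docker stop my-nginx && docker rm my-nginx
docker run -d --name my-nginx nginx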
Hope this will save you some time and nerves if/when you encounter a similar issue. So you can just focus on writing quality code 😊
Top comments (6)
I have created that daemon.json file, added those options and restarted the containers. Now after two days one of those containers is already at 163 MB again.
You need to recreate them, not restart. Meaning, remove the existing one and create a new one. Don't lose your data though... if those containers don't have a persistent volume.
Sorry, I was just tired when I wrote that comment. I meant recreated.
Well... if you use Portainer, you could try "duplicate/edit" on the container page, make sure the logging driver is set to json-file (or whatever you've set up in your daemon.json) and deploy. Or try to completely delete and recreate the container from the CLI. Not sure what's going wrong there, but it works for me. Maybe the logging policy isn't being applied, or... no idea, honestly.
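One way to check whether a container actually picked up the new policy is to inspect its log config (my-nginx is a placeholder name):
docker inspect --format '{{json .HostConfig.LogConfig}}' my-nginx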
What is the equivalent way for Mac?
If/when I have access to a Mac I'll update the article. But for now - Idk, sry 🤷