Docker is a great tool for deploying web applications, but only if you use it the way it's intended. There's a lot of potential to shoot yourself in the foot, so if you want to save yourself hours of painful debugging, make sure you avoid these common mistakes.
Side note: If you want to make your life a bit easier, check out sliplane.io. It's a simple Docker hosting solution to help you avoid some of the pitfalls mentioned here.
1. Not Setting Resource Limits
This one is especially important if you are running a multi-container setup on a single server. Greedy services can grab all the CPU or memory on your machine, leaving other containers with no resources to play with, or, worst case, freezing the entire server.
Use the --cpu-quota and --memory flags to limit the resource usage of containers at runtime:
docker run --memory="512m" --cpu-quota=50000 your-image-name
You can find more information on these flags in the official Docker documentation on resource constraints.
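To verify that your limits are actually applied, and to watch live usage, you can use docker stats. For a container that is already running, docker update lets you change limits without a restart (the values below are just an example):

docker stats # live CPU and memory usage per container
# when updating --memory, Docker may also require --memory-swap to be set
docker update --memory="512m" --memory-swap="512m" your-container-name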
If you can, avoid building your images on the same server where your containers run. Builds can take up a lot of CPU and memory, and if you don't account for that, your server will crash faster than you can say "Please don't crash!".
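A common pattern is to build on a CI runner (or any other machine), push the image to a registry, and only pull it on the server. A minimal sketch, assuming a private registry at registry.example.com (a placeholder):

# on the build machine / CI runner
docker build -t registry.example.com/your-image-name:latest .
docker push registry.example.com/your-image-name:latest

# on the server: pull and run, no build required
docker pull registry.example.com/your-image-name:latest
docker run -d --memory="512m" registry.example.com/your-image-name:latest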
2. Not cleaning up
No one likes to clean up, but it's a necessary task if you don't want to drown in your own 💩. Docker images can be huge, sometimes gigabytes in size. If you deploy new versions of your images and don't need the old ones anymore, you should get rid of them. The same goes for unused containers and volumes. They add up quickly, and before you know it your deployments fail and you spend another 45 minutes figuring out that your disk is full...
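Before cleaning up, it helps to see where the space is actually going. docker system df breaks Docker's disk usage down by images, containers and volumes:

docker system df # disk usage per object type
docker system df -v # verbose: per image, container and volume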
Remove dangling objects (stopped containers, untagged images, unused volumes) with:
docker container prune
docker image prune
docker volume prune
docker system prune # this removes stopped containers, dangling images and unused networks (add --volumes to also remove unused volumes)
Remove all unused images, not just dangling ones, by adding the -a flag (this also includes tagged images):
docker image prune -a
docker system prune -a # this removes all unused images, stopped containers and unused networks
Note that docker container prune has no -a flag (it already removes all stopped containers), and docker volume prune -a, which also removes unused named volumes, requires Docker 23.0 or later.
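If you tend to forget this (most of us do), you can automate it, for example with a daily cron job. A sketch using the until filter so only objects older than 24 hours get removed:

# remove unused objects older than 24 hours, without asking for confirmation
docker system prune -af --filter "until=24h"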
3. Leaking secrets in your images
It's not uncommon for applications to require access to secrets at build time. What many people don't realize is that an image with secrets baked in at build time needs to be treated like a secret itself! A Docker image is not a vault. Everything you put inside can be read by anyone who has access to the image (unless you do something wild and encrypt it). So never publish images on Docker Hub if you baked secrets into them!
If you can, avoid passing secrets at build time and rely on environment variables or a secrets manager instead. If that is not possible at all, make sure to only build the image in a trusted environment, keep it in a private registry, only move it over encrypted connections, and prune it as soon as you don't need it anymore.
Also check out this blogpost on ways to deal with build secrets.
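One way to avoid baking secrets in at all: BuildKit's secret mounts make a secret available to a single RUN step without writing it into any image layer. A minimal sketch, assuming a hypothetical npm_token file used to install private packages:

# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# the secret is mounted at /run/secrets/npm_token for this step only
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
COPY . .

Then pass the secret file at build time:

docker build --secret id=npm_token,src=./npm_token.txt -t your-image-name .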
4. No Monitoring in place
Containers are isolated and ephemeral, which is excellent for security and portability, but not ideal for monitoring.
Having to docker exec into your container every time you want to check a log file is a real pain, so make sure to come prepared ahead of time.
The easiest thing you can do to begin with is mount a volume to hold your log files, so that they are at least persistent. But without log rotation, and ideally some way to search through the logs, this setup quickly reaches its limits.
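For basic log rotation you don't even need extra tooling: Docker's default json-file logging driver supports it via log options. For example:

# keep at most 3 log files of 10 MB each for this container
docker run -d --log-opt max-size=10m --log-opt max-file=3 your-image-name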
To get more visibility, you can implement log monitoring by streaming your logs to an external system, and set up resource monitoring tools to keep track of CPU and memory usage across your containers.
In professional environments monitoring is often done with the ELK Stack.
5. Not optimizing Docker images
As mentioned before, Docker images can be huge (up to several gigabytes in size!). The last mistake I want to mention here is not optimizing your Docker images. Doing so not only saves a ton of space, it also reduces the attack surface and makes your deployments a lot faster.
Take this example on minimizing a Docker image for Nuxt 3. The author, Jonas Scholz, managed to bring down the image size from 1.4 GB to only 160 MB! That's almost a 10x improvement, just by following a few simple steps:
- use the smallest possible base image
- exclude unnecessary files in your .dockerignore file
- serve static assets from a CDN
- use multi-stage builds (see the sketch below)
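A rough sketch of what such a multi-stage build could look like for a Nuxt 3 app (base image, stage names and output paths are assumptions; adjust them to your project):

# build stage: full toolchain and dev dependencies
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# runtime stage: only the build output, nothing else
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/.output ./.output
EXPOSE 3000
CMD ["node", ".output/server/index.mjs"]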
Summary
Common Docker deployment mistakes include:
- Skipping resource limits,
- Failing to clean up,
- Building sensitive data into your image,
- Neglecting logging and monitoring, and
- Missing out on optimizing images.
If you want to make your life a bit easier, check out sliplane.io. It's a simple Docker hosting solution to help you avoid some of the pitfalls mentioned here.
Disclaimer: I co-founded Sliplane :-)