Docker adoption rises constantly 📈 and many are familiar with it, but not everyone is using Docker according to the best practices. 👀
Before mo...
Thank you for this useful information. But many of these points are not best practices; they are just that you picked Docker as your favorite vendor the way orange might be your favorite color. "Official" means different things to different people; in your article, official means Docker's official, which amounts to having Docker as your favorite vendor.
If I want CentOS, I would use quay.io/centos/centos:stream8; that's what official means to me. If I want MySQL and Bitnami is my favorite vendor, then official means docker.io/bitnami/mysql:8.0 to me.
Vendoring is about picking a vendor: someone you can open tickets with or get on a phone call.
Moreover, I believe Docker's official images are very bad. When Docker employed the late Ian Murdock (father of Debian), Debian and Ubuntu were the official base images; after he passed away and they hired the father of Alpine, Alpine became the official base image.
It's even worse: these are the same people who had 80% of their official images vulnerable, and who once shipped Alpine with an empty root password, wide open.
I had many cases where "npm install" broke on Alpine.
As a Fedora contributor, I prefer a minimal Fedora image from registry.fedoraproject.org/fedora-minimal:35 and use microdnf to enable the dnf module for the exact Node version I want (microdnf module enable -y nodejs:14). I would trust that. And if I want an Ubuntu base image, I would trust NodeSource as the Node vendor in my Dockerfile. I would recommend against the official Docker Hub "node" image (I don't trust them).
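As a minimal sketch of the fedora-minimal approach described above (the `nodejs:14` module stream is the one named in the comment; the app paths and start command are hypothetical):

```dockerfile
FROM registry.fedoraproject.org/fedora-minimal:35

# Enable the nodejs:14 module stream, install it, and clean metadata
# in a single layer so the package cache never lands in the image.
RUN microdnf module enable -y nodejs:14 && \
    microdnf install -y nodejs && \
    microdnf clean all

WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .

# Run as a non-root user.
USER 1001
CMD ["node", "server.js"]
```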
In summary, your preferred vendor is not a best practice. Pick the vendor you trust and the one you are comfortable filing tickets with and getting them resolved.
I'll post other points in different comment.
"by using smaller images with leaner OS distributions, which only bundle the necessary system tools and libraries, you're also minimizing the attack surface and making sure that you build more secure images"
Are you serious?
Yes, that's accurate. Having fewer tools means you are less exposed to vulnerabilities. That is a perfect example of a smaller attack surface.
If you have a criticism, at least provide a constructive explanation of what you see as a misunderstanding.
For example, let's say you followed the advice not to run as root. But now something bad in your container wants to gain local privilege escalation (e.g., take over the host through some vulnerability). Oftentimes, local tools contain vulnerabilities that allow exactly that kind of LPE.
So yes, I think that is a really good example for reducing attack surface.
Just an observation.
While the latest tag is definitely a bad practice, that doesn't make fixed versions a "best practice". It can be a decent practice and a good rule of thumb, but consider this:
The vast majority of docker image offerings for software that follows semver also versions their docker images accordingly. This means there will be rolling tags for major versions that will contain updated minor versions and patches as well as rolling tags for minor versions that contain updated patch versions only.
It can be a good idea to allow your build to at least follow the newest minor versions rolling tag since patches bring goodies like security updates in a non-breaking way.
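As a sketch, assuming a semver-tagged image like node (check which rolling tags the image actually publishes), the trade-off between tag granularities looks like this:

```dockerfile
# Fully pinned: reproducible, but receives no fixes until you bump it.
# FROM node:17.0.1

# Minor rolling tag: picks up new patch releases (e.g. 17.0.2) only,
# which is the non-breaking middle ground described above.
FROM node:17.0

# Major rolling tag: also follows new minor versions within 17.x.
# FROM node:17
```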
There is a good point about layers. Another example is to clear the package manager cache in the same layer where you finish installing, because if you add a file in one layer and remove it in a later layer, it is still carried in the archive, just flagged as removed.
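A minimal sketch of the single-layer cleanup described above, assuming an apt-based image (the curl package is just a placeholder):

```dockerfile
FROM ubuntu:22.04

# Good: install and clean the cache in the SAME RUN instruction,
# so the apt lists never land in any layer.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Bad (for contrast): cleaning in a separate RUN only marks the files
# as deleted in a new layer; the earlier layer still carries them.
# RUN apt-get update && apt-get install -y curl
# RUN rm -rf /var/lib/apt/lists/*
```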
Regarding "Use specific Docker image versions": pinning the exact version is a security risk. One might pin only the major version, allowing the image to receive security updates; so instead of node:17.0.1, just node:17. That is less likely to break an application depending on 17-specific features, and it would still receive security fixes such as 17.0.2.
Even better, use buildah (podman build), which does not need to create a build-context archive and send it to the docker daemon.
Another workaround: create a directory called containers and put the Dockerfile inside it, so that only the needed files are in that directory (and therefore in the build context).
This is very important. As someone who was part of that proposal, I'm very sad this feature is rarely used.
The compiler, git, intermediate files, etc. should never be part of the final image.
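The comments above appear to refer to multi-stage builds; here is a minimal sketch, assuming a Go application (the app name and paths are hypothetical):

```dockerfile
# Stage 1: build environment, containing the compiler and any build tools.
FROM golang:1.19 AS builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app ./cmd/app

# Stage 2: the final image copies only the compiled binary; the
# compiler, git, and intermediate files stay behind in the builder stage.
FROM gcr.io/distroless/static-debian11
COPY --from=builder /bin/app /app
ENTRYPOINT ["/app"]
```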
Very happy to have learned about these best practices, Nana! I was able to implement all of them and learned a lot along the way!
One thing I would recommend that I also learned about (plus some other great tips): you can ensure the correct Docker image is downloaded by pinning its SHA-256 digest, as outlined in this article.
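As a sketch, digest pinning looks like this (the digest below is a placeholder, not a real one; the actual digest is printed by `docker pull` or shown by `docker images --digests`):

```dockerfile
# Pull by digest instead of (or in addition to) a tag: the digest
# uniquely identifies the image content, so a repointed tag can't
# silently swap the image underneath you.
# NOTE: <digest-goes-here> is a placeholder; substitute the real value.
FROM node:17.0.1@sha256:<digest-goes-here>
```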
Hi Nana... I follow your DevOps and Docker videos. I was trying this example "https://www.youtube.com/watch?v=6YisG2GcXaw&list=PLy7NrYWoggjzfAHlUusx2wuDwfCrmJYcs&index=8" on an AWS EC2 Ubuntu machine. I had set up and linked MongoDB with mongo-express. I even started server.js using the npm install command, and it is listening on port 3000. When I try to access publicip:3000, the page does not load and displays a blank page. I attached the image. Please help me out.
About best practice 2 (use specific image version), is there a way to be informed if a newer version of the image is released?
Great article Nana. Thanks for sharing this valuable knowledge, keep it up!
As usual, great content, creator!