Wait, Docker is deprecated in Kubernetes now? What do I do?

Kohei Ota

Tags: #kubernetes #docker #deprecated

tl;dr

For developers

Don't panic. Docker containers and images are still alive, and this change does not affect how you build or run them.

Also worth reading:

https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/

https://kubernetes.io/blog/2020/12/02/dockershim-faq/

For K8s admins

Read this carefully and start considering Docker alternatives

Is it true?

Yes, it is true. Docker is now deprecated in Kubernetes.

Ref. https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation

Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community. We encourage you to evaluate moving to a container runtime that is a full-fledged implementation of CRI (v1alpha1 or v1 compliant) as they become available.

In short, Docker does not support the Kubernetes runtime API, called CRI (Container Runtime Interface), so Kubernetes has been using a bridge service called "dockershim" that translates between the Docker API and CRI. That shim will no longer be shipped by Kubernetes within a few minor releases.

Docker is certainly a powerful tool for creating local development environments, but to understand what's driving this change, you need to understand Docker's role in the current Kubernetes architecture.

Kubernetes is an infrastructure orchestration tool that groups many different compute resources, such as virtual or physical machines, and makes them look like one huge compute resource for your applications to run on and share with others. In this architecture, Docker (or any container runtime) is used only to run those applications on an actual host, as scheduled by the Kubernetes control plane.

[Diagram: Kubernetes architecture, with each node talking to the control plane]

Look at the architecture diagram. Each Kubernetes node talks to the control plane: the kubelet on each node fetches pod metadata from the control plane and calls the CRI to create or delete containers on that node.
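To give a concrete feel for what "talking CRI" means, here is an abridged sketch of the gRPC RuntimeService that kubelet calls. The RPC names follow the CRI API definition in the kubernetes/cri-api repository; the excerpt below is heavily trimmed for illustration.

```proto
// Abridged excerpt of the CRI RuntimeService that kubelet calls.
service RuntimeService {
    // Set up the pod's sandbox (network namespace, etc.).
    rpc RunPodSandbox(RunPodSandboxRequest) returns (RunPodSandboxResponse) {}
    // Create and start the application containers inside it.
    rpc CreateContainer(CreateContainerRequest) returns (CreateContainerResponse) {}
    rpc StartContainer(StartContainerRequest) returns (StartContainerResponse) {}
    rpc StopContainer(StopContainerRequest) returns (StopContainerResponse) {}
    rpc RemoveContainer(RemoveContainerRequest) returns (RemoveContainerResponse) {}
}
```

Any runtime that implements this service can be plugged into kubelet directly; Docker could not, which is exactly the gap dockershim filled.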

But why is Docker deprecated?

Again, Kubernetes only talks CRI, and talking to Docker requires a bridge service. So that's reason one.

To explain the next reason, we have to see the Docker architecture a bit. Here's the diagram.

[Diagram: Docker architecture, with the parts Kubernetes actually needs highlighted in red]

So yeah, Kubernetes actually needs only what's inside the red area. Docker's networking and volume features are not used by Kubernetes.

Having features that you never use is itself a security risk. The fewer features you have, the smaller the attack surface becomes.

So this is where you start considering alternatives: CRI runtimes.

CRI runtimes

There are two major CRI runtime implementations.

containerd

If you just want to migrate away from Docker, this is the best option, since containerd is what Docker itself uses internally to do all the "runtime" work, as you can see in the diagram above. It provides CRI, and the runtime behaviour is exactly what you get from Docker, too.

containerd is 100% open source, so you can read the docs on GitHub and even contribute to it.

https://github.com/containerd/containerd/
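As a rough sketch of what the migration involves, the key step is pointing kubelet at containerd's CRI socket instead of dockershim. The flag names below match the Kubernetes 1.20 era; the socket path can differ across distributions, so treat this as an illustrative configuration fragment, not a copy-paste recipe.

```
# kubelet flags: use a remote CRI runtime via containerd's socket
--container-runtime=remote
--container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

After restarting kubelet with these flags (and draining the node first), the node reports containerd as its runtime instead of Docker.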

CRI-O

CRI-O is a CRI runtime mainly developed by Red Hat folks. In fact, this runtime is now used in Red Hat OpenShift. Yes, they no longer depend on Docker.

Interestingly, RHEL 8 does not officially support Docker either. Instead, it provides Podman, Buildah and CRI-O for container environments.

https://github.com/cri-o/cri-o

CRI-O's strength, in my opinion, is its minimalism, because it was created specifically to be a "CRI" runtime. While containerd started as a part of Docker and moved toward open source, CRI-O is a pure CRI runtime, so it contains nothing that CRI does not require.

Migrating from Docker to CRI-O can be more challenging because of that minimalism, but it still provides everything you need to run applications on Kubernetes.

One more thing...

When we talk about container runtimes, we need to be careful about which type of runtime we mean. There are two types: CRI runtimes and OCI runtimes.

CRI runtimes

As described above, CRI is the API that Kubernetes uses to talk to a container runtime in order to create and delete containerised applications.

kubelet and the CRI runtime talk in gRPC over IPC, since they run on the same host. The CRI runtime is responsible for taking requests from kubelet and executing an OCI container runtime to actually run a container. Wait, what? Maybe I should explain this one with a diagram.

[Diagram: kubelet calling a CRI runtime over gRPC, which in turn executes an OCI runtime]

So what a CRI runtime does is the following:

  1. Receive a gRPC request from kubelet
  2. Create an OCI config.json following the OCI runtime spec
  3. Execute an OCI runtime (such as runC) with that config to actually run the container
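The flow above can be sketched in a few lines of Python. This is a toy illustration, not a real CRI runtime: the function name `make_oci_config` and the paths are invented, but the shape of the generated document follows the OCI runtime spec's config.json.

```python
import json

def make_oci_config(image_rootfs: str, args: list) -> dict:
    """Build a minimal OCI runtime config, like the config.json a CRI
    runtime writes into a bundle directory before invoking runc."""
    return {
        "ociVersion": "1.0.2",
        "process": {"args": args, "cwd": "/"},
        "root": {"path": image_rootfs, "readonly": True},
        "linux": {
            # Each entry asks the OCI runtime to create a new namespace.
            "namespaces": [{"type": t} for t in ("pid", "mount", "network")],
        },
    }

config = make_oci_config("/var/lib/myruntime/rootfs", ["/bin/echo", "hello"])
bundle_config = json.dumps(config, indent=2)
# A real CRI runtime would write bundle_config to <bundle>/config.json and
# then exec the OCI runtime, e.g.: runc run --bundle <bundle> <container-id>
print(config["process"]["args"])  # → ['/bin/echo', 'hello']
```

The CRI runtime's job ends at handing this bundle to the OCI runtime; everything kernel-facing happens below that line.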

OCI runtimes

OCI runtimes are responsible for spawning a container using Linux kernel features such as cgroups and namespaces, driven through system calls. You might have heard of runc or gVisor; this is what they are.

appendix1: how runC works

[Diagram: runC making system calls against the host Linux kernel]

runC spawns containers by making Linux system calls after the CRI runtime executes its binary. That means runC relies on the kernel running on your Linux machine.

It also implies that if a vulnerability is ever discovered in runC that allows taking root privilege on the host, a containerised application can exploit it too. An attacker could take root on your host machine, and boom! Things will surely get bad. This is one of the reasons you should keep your Docker (or any other container runtime) updated, not just your containerised applications.

appendix2: how gVisor works

[Diagram: gVisor's guest kernel sitting between the application and the host kernel]

gVisor is an OCI runtime originally created by Google. It actually runs on their infrastructure, powering Cloud services such as Google Cloud Run, Google App Engine (2nd generation), and Google Cloud Functions (and even more!).

What's interesting here is that gVisor has a "guest kernel" layer, which means containerised applications cannot directly touch the host kernel. Even if they think they do, they only touch gVisor's guest kernel.

gVisor's security model is actually very interesting, and the official documentation is worth reading.
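If your nodes already have gVisor's `runsc` configured as a runtime handler in containerd, Kubernetes can select it per pod through a RuntimeClass. A minimal sketch (the names `gvisor` and `sandboxed-app` are arbitrary, and the `handler` value must match your node's containerd configuration):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # must match the runtime handler configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # this pod runs under gVisor instead of runC
  containers:
    - name: app
      image: nginx
```

Pods without `runtimeClassName` keep using the node's default OCI runtime, so you can sandbox only the workloads that need it.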

The notable difference from runC follows from this design: system calls from the application are handled by gVisor's guest kernel rather than going straight to the host kernel, which improves isolation at some cost in compatibility and performance.

Conclusion

1. Docker is indeed deprecated, but only in Kubernetes, so if you're a K8s admin, you should start thinking about adopting a CRI runtime such as containerd or CRI-O.

a. containerd is Docker-compatible, as the core components are the same.
b. CRI-O can be a strong option when you want more minimal functionality for Kubernetes.

2. Know the difference between CRI and OCI runtimes in responsibility and scope.

Depending on your workload, runC might not always be the best option to use!

Top comments (20)

MrViK

Another OCI Runtime -> crun
It's faster than runc (the performance gains are notable when starting containers).
Also it supports cgroups v2, runc also added support on v1.0.0-93 (which has not been launched?)

podman + crun do work very well (also podman can launch rootless containers) so I prefer to use them instead of Docker + runc.

Tom

I just moved to podman/crun due to cgroups problem
Docker phases out?

Kohei Ota

Docker 20.10 will support cgroups v2

MrViK

Will, in the future. Podman does it right now and rootless containers are a huge improvement on process isolation

Kohei Ota

Docker also supports rootless now.

Jeff Triplett (he/him)

PSA: This doesn't mean what most people think it means. kubernetes.io/blog/2020/12/02/dont...

Marc

For dummies like me:

  • Your Kubernetes > v1.20 (or whatever) won't use the Docker container runtime BUT
  • Your Docker-built containers will still run in Kubernetes... but with a different container runtime (e.g. ContainerD)

TL/DR: The Docker Container Runtime is history - your Docker-built containers will still run.

Akito

Just adding my 2 cents from my own experience.

At work, we are using both classic RKE (Rancher with Kubernetes on Docker) and k3s (Rancher with Kubernetes on containerd), so I have experience with both. The problem is that, in our case, containerd often seemed less stable and reliable, at least the way it is used with Kubernetes. It showed us a lot of bugs and issues that the legacy Kubernetes setup never had.

So I'm asking myself, how can they suddenly guarantee that all this new stuff is even production ready, when we had not too many but quite substantial issues with it?

(Perhaps the issue was mainly to be found in the whole Rancher thing, but we cannot confirm this.)

kolaente

Great overview, thanks!

Having more features while you never use, itself can be a security risk. The less features you have, the smaller the attack surface becomes.

I found this, while very true, kind of ironic, since we're talking about Kubernetes here, which is itself a very complex system. That does not invalidate the point you're making though; it just makes another case for why less complexity in an already complex system is probably a good idea.

starpebble

Fantastic post. My feedback: I hope this particular Kubernetes rearchitecture saves us all time. We are getting closer to making it realistic to have a container format specific to my favorite programming language. No need for us to all be the same.

Andrew Malkov

Both containerd and CRI-O know how to pull Docker images and run them, and Docker Image Manifest V2 and the OCI image specification are almost the same, so we don't need to worry now.
But if you want to know how to live without Docker, I suggest this video: How to live without Docker for developers - Part 1 | Migration from Docker to Buildah and Podman. Or just search "Without Docker" on YouTube.

Justin Refi

So now I have to learn yet another framework (Buildah, Podman, etc) instead of just doing the easy thing and running Docker in Docker. The Kubernetes community sure loves creating new projects.

Mert Alnuaimi

Seriously, calm down #kubernetes and #docker, what's going on guys?

Tyler Auerbeck

A nice reference on the kubernetes blog as well for those interested in knowing more: kubernetes.io/blog/2020/12/02/dont...

Wiz Lee

Great reference!

Nicholas Duffy

How does one tell what runtime is currently being used in a cluster?

Kohei Ota

kubectl get node -o wide

Nicholas Duffy

I never realized that was there. Thanks!