Evolution of k8s worker nodes-CRI-O

A B Vijay Kumar

Just a few months back, I never used to call containers just containers… I used to call them Docker containers. When I heard that OpenShift was moving to CRI-O, I thought, what's the big deal?… To understand the "big deal", I had to understand the evolution of the k8s worker node.

Evolution

If you look at the evolution of the k8s architecture, there has been significant change and optimization in the way the worker nodes run containers… here are the significant stages of that evolution, which I have attempted to capture…

Stage 0: Docker is the captain

It started with a simple architecture: kubelets as the worker node agents, receiving commands from admins through the api-server on the master node. The kubelet used the Docker runtime to launch containers (pulling the images from the registry). This was all good… until alternate container runtimes, with better performance and unique strengths, started appearing in the market, and we realised it would be good if we could plug and play these runtimes… the obvious design pattern to fix this issue is ??? the "adapter/proxy" pattern… right?? That led to the next stage.

Evolution is all about adapting to the changes in the ecosystem

Stage 1: CRI (Container Runtime Interface)

The Container Runtime Interface (CRI) spec was introduced in K8s 1.5. CRI consists of a protocol-buffers-based gRPC API and supporting libraries. This brought in an abstraction layer that acts as an adapter, with a gRPC client running in the kubelet and a gRPC server running in the CRI shim. This made it much simpler to run various container runtimes.
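To get a feel for that gRPC interface, here is a minimal sketch using crictl, the standard CLI client for CRI (the socket paths are assumptions; they vary by runtime and distro):

```bash
# point crictl at the CRI socket of whichever runtime the node uses
# (containerd's default shown; CRI-O listens on /var/run/crio/crio.sock)
crictl --runtime-endpoint unix:///run/containerd/containerd.sock version

# these issue the same gRPC calls the kubelet makes, just driven by hand
crictl pull nginx:latest   # ImageService.PullImage
crictl images              # ImageService.ListImages
crictl ps                  # RuntimeService.ListContainers
```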

Before we go any further… we need to understand what functionality is expected from a container runtime. The container runtime used to manage downloading the images, unpacking them, and running them, and also handled networking and storage. That was fine… until we started realizing that this is like a monolith!!!

Let me layer these functionalities into two levels:

  • High level — image management, transport, unpacking the images, and the API to send commands to run the container, plus networking and storage (e.g. rkt, docker, LXC, etc.).

  • Low level — run the containers.

It made more sense to split these functionalities into components that can be mixed and matched from various open-source options, providing more optimization and efficiency… the obvious design/architecture pattern to fix this issue is ??? the "layering" pattern… right?? That led to the next stage.
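You can actually see this layering inside docker itself these days; a quick sketch (these are real docker info template fields, but the exact output will vary by install and version):

```bash
# docker's engine delegates the low-level "run the container" work to runc
docker info --format '{{.DefaultRuntime}}'   # typically prints: runc

# the high-level pieces (image storage, unpacking) sit above it
docker info --format '{{.Driver}}'           # storage driver, e.g. overlay2
```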

Stage 2: CRI-O & OCI

So the OCI (Open Container Initiative) came up with clear container runtime and image specifications, which enabled multi-platform support (Linux, Windows, VMs, etc.). runc is the reference implementation of the OCI runtime spec, and it is the low level of the container runtime.
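To see that low level in isolation, here is a minimal sketch of running a container with runc alone: no daemon, no image management (the rootfs has to be prepared by a higher-level tool, which is exactly the point):

```bash
# an OCI bundle is just a rootfs directory plus a config.json
mkdir -p mycontainer/rootfs
cd mycontainer

# fill rootfs with any high-level tool, for example:
# podman export $(podman create alpine) | tar -C rootfs -xf -

# generate a default OCI runtime spec (config.json)
runc spec

# run it; this is the only job runc has
sudo runc run my-container-id
```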

Modern container runtimes are built on this layered architecture: the kubelet talks to the container runtime over the CRI gRPC API, and the container runtime runs the containers through an OCI-compliant low-level runtime.

There are various implementations of CRI, such as dockershim, CRI-O and containerd.
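Swapping between them is, conceptually, just a matter of pointing the kubelet at a different CRI socket; a hedged sketch (the flag is the standard kubelet flag for selecting a CRI endpoint, and the socket paths are the usual defaults):

```bash
# kubelet wired to CRI-O (other kubelet flags elided)
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock ...

# kubelet wired to containerd
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock ...
```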

Towards the end of Stage 1, I mentioned the flexibility to mix and match components into an end-to-end container management toolkit… and that needed Captain America to assemble the Avengers, to provide an end-to-end container platform…

Avengers of the k8s world - led by Captain "OpenShift"

  • podman: a daemonless container engine for developing, managing and running OCI containers; it speaks the exact docker CLI language, to the extent that you can just alias it (see the sketch after this list for podman, skopeo and buildah in action)

  • skopeo: a complete container image management CLI tool. One of the features I love best about skopeo is the ability to inspect images on a remote registry without downloading or unpacking them!!!… and it has matured into a full-fledged image management tool for remote registries, including signing images, copying between registries and keeping remote registries in sync. This significantly increases the pace of container build, manage and deploy pipelines…

  • buildah: a tool that helps build OCI images, incrementally!!!… yes, incrementally… I was playing around with this the other day. I don't have to imagine the whole image composition and write a complex Dockerfile… instead, I just build the image one layer at a time, test it, roll back (if required), and once I am satisfied, commit it to the registry… how cool is that!!!

  • cri-o: a lightweight container runtime for k8s… I will write more about this in the next section.

  • OpenShift: an end-to-end container platform… the real Captain!!
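Here is a quick, hedged taste of the first three in action (the image names and the mirror registry are just examples):

```bash
# podman: the docker CLI surface, with no daemon behind it
alias docker=podman
docker run --rm alpine echo "hello from podman"

# skopeo: inspect a remote image without pulling it
skopeo inspect docker://docker.io/library/alpine:latest

# skopeo: copy an image straight between registries
skopeo copy docker://docker.io/library/alpine:latest \
            docker://registry.example.com/mirror/alpine:latest

# buildah: build an image one layer at a time, no Dockerfile needed
ctr=$(buildah from alpine)
buildah run "$ctr" -- apk add --no-cache curl   # add and test a layer...
buildah commit "$ctr" my-curl-image             # ...commit once satisfied
```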

Red Hat OpenShift goes for CRI-O

Red Hat OpenShift 4.x defaults to CRI-O as the container runtime. A lot of this decision (in my opinion) goes back to the choice of building an immutable infrastructure based on CoreOS, on which OpenShift 4.x runs. CRI-O was the obvious choice with CoreOS as the base; all the more so because CRI-O is governed by the k8s community, completely open source, very lean, and directly implements the k8s Container Runtime Interface… refer to these 6 reasons in detail
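You can verify this on any OpenShift 4.x cluster; a small sketch, assuming you have oc access (substitute a real node name in the second command):

```bash
# the CONTAINER-RUNTIME column shows cri-o://<version> for every node
oc get nodes -o wide

# or ask one node directly
oc get node <node-name> \
  -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
```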

Here is a great picture, taken from this blog, that shows how CRI-O works under the hood in Red Hat OpenShift 4.x
