Kubernetes on CRI-O (CentOS)

Abhishek Vaidya

Docker is now deprecated in Kubernetes

Yes, it is true.

Docker support in the kubelet is now deprecated and will be removed in a future release. The kubelet uses a module called "dockershim" which implements CRI support for Docker and it has seen maintenance issues in the Kubernetes community.

So Kubernetes now encourages using a different CRI-compatible container runtime in place of Docker.
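
You can see which runtime each node of an existing cluster reports: the CONTAINER-RUNTIME column of the wide node listing shows it (for example docker://19.3.13 or cri-o://1.17.0):
$ kubectl get nodes -o wide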

Why CRI-O?

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes.

It is a lightweight alternative to using Docker, Moby or rkt as the runtime for Kubernetes.

CRI-O is a CRI runtime developed mainly by Red Hat; in fact, it is the runtime Red Hat OpenShift uses now, so OpenShift no longer depends on Docker.

Interestingly, RHEL 8 does not ship Docker at all. Instead, it provides Podman, Buildah and CRI-O for container workloads.
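
Podman's CLI is compatible with Docker's for everyday commands, so, as a quick illustration (assuming the centos:7 image), running a container looks just like it does with Docker:
$ podman run --rm centos:7 echo "hello from podman"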

Let's get started

Here we are deploying a 3 node cluster:

  • one master node
  • two worker nodes

Installing CRI-O (should be done on all nodes)

  • Create an environment variable for the OS.
$ export OS=CentOS_7

Note: If you are using CentOS 8, set the variable to CentOS_8 instead.

  • Create another environment variable for the CRI-O VERSION.
$ export VERSION=1.17

Note: On CentOS 8 you can also install CRI-O version 1.18.

  • Configure the repositories for CRI-O.
$ sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
$ sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
  • Install cri-o.
$ sudo yum install cri-o -y
  • Start and enable the cri-o service.
$ sudo systemctl start cri-o 
  • If you hit any issue starting cri-o, refer to the Troubleshooting section below !!
$ sudo systemctl enable cri-o
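  • As a quick sanity check (assuming the cri-tools package was installed as a dependency of cri-o), you can query the runtime over its socket with crictl; the socket path below is CRI-O's default:
$ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version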

Troubleshooting

Not able to start cri-o service:

$ sudo systemctl status cri-o

crio.service - Container Runtime Interface for OCI (CRI-O)
   Loaded: loaded (/usr/lib/systemd/system/crio.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2020-12-11 05:42:49 EST; 29s ago
     Docs: https://github.com/cri-o/cri-o
  Process: 1259 ExecStart=/usr/bin/crio $CRIO_CONFIG_OPTIONS $CRIO_RUNTIME_OPTIONS $CRIO_STORAGE_OPTIONS $CRIO_NETWORK_OPTIONS $CRIO_METRICS_OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 1259 (code=exited, status=1/FAILURE)

Dec 11 05:42:49 ip-172-31-89-94.ec2.internal systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
Dec 11 05:42:49 ip-172-31-89-94.ec2.internal crio[1259]: time="2020-12-11 05:42:49.409813214-05:00" level=fatal msg="Validating root config: failed to get store to set defaults: kernel does not support overlay fs: overlay: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support. Running without d_type is not supported.: driver not supported"
Dec 11 05:42:49 ip-172-31-89-94.ec2.internal systemd[1]: crio.service: main process exited, code=exited, status=1/FAILURE
Dec 11 05:42:49 ip-172-31-89-94.ec2.internal systemd[1]: Failed to start Container Runtime Interface for OCI (CRI-O).
Dec 11 05:42:49 ip-172-31-89-94.ec2.internal systemd[1]: Unit crio.service entered failed state.
Dec 11 05:42:49 ip-172-31-89-94.ec2.internal systemd[1]: crio.service failed.

Root Cause:

The default container storage location, i.e. /var/lib/containers/storage, sits on an XFS filesystem created with ftype=0, so d_type is disabled (d_type is the field in a directory entry that records a file's type, which overlayfs relies on). For the overlay storage driver, ftype=1 is a hard requirement !!

$ sudo xfs_info /var/lib/containers/storage | grep ftype

naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

Solution:

  • Create a Logical Volume (LV).
$ sudo fdisk <disk_name>
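  • fdisk is interactive; as a rough scripted sketch (assuming a fresh, empty disk at the hypothetical name /dev/xvdb), the usual keystrokes n (new partition), p (primary), defaults for the first and last sectors, t then 8e (Linux LVM) and w (write) can be piped in:
$ printf 'n\np\n1\n\n\nt\n8e\nw\n' | sudo fdisk /dev/xvdb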
$ sudo partprobe <disk_name>
$ sudo vgcreate vgname <partition>
$ sudo lvcreate -l 100%FREE -n lvname vgname
$ sudo lvs

LV     VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
root   centos -wi-ao----  <6.67g
swap   centos -wi-ao---- 820.00m
lvname vgname -wi-a-----  <5.00g
  • Format the LV with an XFS filesystem.
$ sudo mkfs.xfs /dev/mapper/vgname-lvname

meta-data=/dev/mapper/vgname-lvname isize=512    agcount=4, agsize=327424 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1309696, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
  • Now ftype is set to 1.
  • Mount the LV on /var/lib/containers/storage permanently by adding an entry to /etc/fstab.
$ sudo cat /etc/fstab | grep /var/lib/containers/storage

/dev/mapper/vgname-lvname /var/lib/containers/storage xfs defaults 0 0
$ sudo mount -a
$ sudo df -hT | grep /var/lib/containers/storage

/dev/mapper/vgname-lvname xfs       5.0G   33M  5.0G   1% /var/lib/containers/storage
  • Point runroot at the new storage path and uncomment both runroot and root in /etc/crio/crio.conf.
$ sudo cat /etc/crio/crio.conf | grep /var/lib/containers/storage

root = "/var/lib/containers/storage"
runroot = "/var/lib/containers/storage"
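  • A minimal way to make that edit, assuming the stock crio.conf ships with both lines commented out, is a pair of sed substitutions:
$ sudo sed -i 's|^#\? *root = .*|root = "/var/lib/containers/storage"|' /etc/crio/crio.conf
$ sudo sed -i 's|^#\? *runroot = .*|runroot = "/var/lib/containers/storage"|' /etc/crio/crio.conf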
  • Then start and enable the cri-o service.
$ sudo systemctl start cri-o
$ sudo systemctl enable cri-o

Install and configure prerequisites for Kubernetes (should be done on all nodes)

  • Load overlay module.
$ sudo modprobe overlay
  • Load br_netfilter module.
$ sudo modprobe br_netfilter

This module is loaded so that iptables, ip6tables and arptables can filter bridged IPv4/IPv6/ARP packets, which makes the host firewall transparent to bridged traffic.
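
modprobe only loads a module until the next reboot; to have overlay and br_netfilter loaded automatically at boot, you can also persist them, as the upstream Kubernetes/CRI-O install docs do:
$ cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF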

  • Set up the required sysctl params; these persist across reboots.
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
  • Apply sysctl params without reboot.
$ sudo sysctl --system
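  • You can spot-check that a value took effect by reading it back; this should now report net.ipv4.ip_forward = 1.
$ sysctl net.ipv4.ip_forward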
  • Disable the firewall.
$ sudo systemctl stop firewalld && sudo systemctl disable firewalld
  • Set SELinux in permissive mode (effectively disabling it).
$ sudo setenforce 0
$ sudo sed -i -e s/SELINUX=enforcing/SELINUX=permissive/g /etc/sysconfig/selinux
  • Switch off swap.
$ sudo swapoff -a
  • Also disable swap in /etc/fstab by commenting out the swap entry.
$ sudo cat /etc/fstab | grep swap

#/dev/mapper/centos-swap                                swap                    swap    defaults        0 0
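  • One way to comment the entry in place (a sketch, assuming the swap line is the only /etc/fstab entry containing the word swap) is:
$ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab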
  • Change the cgroup_manager of CRI-O in /etc/crio/crio.conf so that it matches the cgroup driver the kubelet uses (cgroupfs by default).
$ sudo cat /etc/crio/crio.conf | grep cgroup_manager

cgroup_manager = "cgroupfs"
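  • If your installed default is systemd, or the line is commented out, a one-liner like this flips it (a sketch; the shipped default varies by CRI-O version):
$ sudo sed -i 's|^#\? *cgroup_manager = .*|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf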
  • Configure Kubernetes repo.
$ sudo cat /etc/yum.repos.d/kube.repo

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Deploying Kubernetes Cluster

COMMANDS ON MASTER

  • Install kubeadm package.
$ sudo yum install kubeadm -y
  • Start and enable kubelet.
$ sudo systemctl start kubelet && sudo systemctl enable kubelet
  • Now initialize the Kubernetes control plane.
$ sudo kubeadm init

Note: If kubeadm errors out saying the iptables bridge setting does not exist, load the br_netfilter module again and re-run the command.

  • Output
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.46.157:6443 --token 7xlnp0.9uv4z0qr4wvzhtqn \
    --discovery-token-ca-cert-hash sha256:4a1a412d2e682556df0bf10dc380c744a98eb99e8c927fa58eb025d5ff7dc694
  • To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • These commands copy the cluster's admin kubeconfig into your home directory.
  • Make a record of the kubeadm join command that kubeadm init outputs. You need this command to join nodes to your cluster.
  • We need to install a CNI plugin so that pods can communicate with each other.
  • There are different CNI plugins for different purposes; here we are using the Weave Net CNI plugin.
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Weave Net creates a virtual network that connects containers across multiple hosts and enables their automatic discovery.

  • Once the CNI plugin has been installed, you can confirm that it is working by checking that the CoreDNS Pod is up and running.
$ kubectl get pods -n kube-system
  • Our master is ready !!
$ kubectl get nodes

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   14m   v1.20.0
  • Now we need to add worker nodes to the cluster.

COMMANDS ON WORKER NODES

  • Install kubeadm package.
$ sudo yum install kubeadm -y
  • Start and enable kubelet.
$ sudo systemctl start kubelet && sudo systemctl enable kubelet
  • Run the kubeadm join command (the command from the output of kubeadm init).
$ sudo kubeadm join 192.168.46.157:6443 --token 7xlnp0.9uv4z0qr4wvzhtqn \
    --discovery-token-ca-cert-hash sha256:4a1a412d2e682556df0bf10dc380c744a98eb99e8c927fa58eb025d5ff7dc694
  • Output
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • Now our worker nodes have joined the cluster !!
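  • Bootstrap tokens from kubeadm init expire after 24 hours by default; if yours has expired, generate a fresh join command on the master with:
$ sudo kubeadm token create --print-join-command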

COMMAND ON MASTER

  • Check whether nodes are ready.
$ kubectl get nodes

NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.20.0
worker1   Ready    <none>                 4m44s   v1.20.0
worker2   Ready    <none>                 4m18s   v1.20.0

HURRAH !! Kubernetes cluster is ready

THANK YOU!!
