Michael Braverman

Kubernetes from Scratch: Bootstrapping a Cluster

Creating a Kubernetes cluster can be a complex process that involves many options, but complex things become easier to grasp when broken down and investigated bit by bit. In this tutorial we will demystify the process of bootstrapping a Kubernetes cluster by looking at the bare-minimum components required to get a Kubernetes node running inside a Virtual Machine (VM).

Introduction

To begin with, we need a Virtual Machine (VM) running a Debian-based or RHEL-based Linux distribution. For this tutorial, we will be using a Debian 11 VM running inside a KVM virtual environment. The VM was provisioned using Terraform, a tool for building, changing, and versioning infrastructure safely and efficiently. Additionally, cloud-init was used to configure the VM with the necessary settings, such as networking and SSH access. A riced-up terminal configuration was also applied to make the terminal experience more bearable, using my personal so-called "dotfiles". Finally, Terraform configs for deploying ready-made virtual machines are also available at this link.


My riced-up terminal, heavily inspired by the configuration of Garuda, an Arch-based Linux distribution.

Prerequisites

With our VM set up, we can move on to the next steps in bootstrapping a Kubernetes cluster with cri-o as the container runtime. To begin with, we have to make a few necessary changes so that the kubeadm init pre-flight checks pass. I will expand on these points below.

As the very first step, we should follow the number one best practice when it comes to software — checking for updates and installing the latest packages:

sudo apt update
sudo apt upgrade

We should also install a few dependencies that will be needed later on:

sudo apt install software-properties-common apt-transport-https ca-certificates gnupg2 gpg sudo


1. Disabling Swap

In Linux, swap is a useful way to extend the available RAM when the physical memory has been exhausted, allowing processes to continue running. However, when setting up a node for a Kubernetes cluster, it is generally recommended to disable swap for several reasons.

Firstly, Kubernetes requires a significant amount of memory to operate effectively, and any performance degradation due to swap usage can impact the performance of the entire cluster. Additionally, Kubernetes assumes that the node has a fixed amount of available memory, and if swap is enabled, it can cause confusion and unexpected behavior.

Furthermore, disabling swap can help reduce the risk of the so-called "OOM killer" being invoked. The OOM killer is a Linux kernel mechanism responsible for terminating processes when the system runs out of memory. While this is intended as a safeguard to prevent the system from crashing, it can lead to unpredictable behavior when running Kubernetes workloads, as the OOM killer may terminate critical components of the cluster.

We can see if our machine uses swap memory using the htop command:


In this screenshot swap is equal to 0. However, if it were otherwise, we would need to disable it.

Overall, while swap can be a useful tool for extending the available memory on a Linux machine, it is generally recommended to disable it when setting up a node for a Kubernetes cluster to ensure reliable and predictable performance. We can do so by issuing the following command:

sudo swapoff -a

To make sure that swap remains disabled after a reboot, we also have to comment out (or remove) the line in /etc/fstab that initializes swap upon boot:

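As an illustration, a commented-out swap entry in /etc/fstab would look something like the line below; the exact device or file name is system-specific and shown here only as an example:

# /swap.img none swap sw 0 0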

In this specific case, a file called swap.img was used as swap space, which we can go ahead and delete afterwards with root privileges:

sudo rm /swap.img 

Note: In modern Linux distributions, a swap file is often used instead of a separate swap partition. If your system is configured with a separate swap partition, take that into account when disabling swap for Kubernetes; ideally, avoid creating a swap partition in the first place when installing the VM.

Now you can go ahead and reboot the machine, and swap should remain disabled. Use htop once again to confirm that this is the case.
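As an alternative to htop, we can also confirm that no swap devices remain active with swapon (part of util-linux); the command should print nothing once swap is fully disabled:

swapon --show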

2. Enabling Linux kernel modules

Enabling the necessary Linux kernel modules is a crucial step in setting up a container runtime for a Kubernetes cluster. These modules are essential for providing networking and storage functionality to the Kubernetes Pods, which are the smallest and simplest units in the Kubernetes system. Networking modules enable Kubernetes to provide network connectivity between the different Pods in a cluster, while storage modules enable the persistent storage of data across different Pods and Nodes in the cluster.

In order to enable these kernel modules, we typically need to modify the Linux kernel parameters and load the relevant kernel modules using the modprobe utility. This ensures that the necessary functionality is available to the Kubernetes cluster and that the pods can communicate and store data effectively. By enabling these modules, we can ensure that our Kubernetes cluster is well-equipped to handle a range of tasks and can provide a reliable and scalable platform for running containerized applications.

Before we proceed, we will log in as the root user and perform all the steps below with superuser privileges:

sudo su - root

With that out of the way, here are the two Linux kernel modules we need to enable:

  1. br_netfilter — This module is required to enable transparent masquerading and to facilitate Virtual Extensible LAN (VxLAN) traffic for communication between Kubernetes Pods across the cluster.
  2. overlay — This module provides the necessary kernel-level support for the overlay storage driver to function properly. By default, the overlay module may not be enabled on some Linux distributions, and therefore it is necessary to enable it manually before running Kubernetes.

We can enable these modules by issuing the modprobe command along with the -v (verbose) flag to see the results:

modprobe -v overlay
modprobe -v br_netfilter

After which we should get the following output:


In order to make sure that the kernel modules get loaded after a reboot, we can also add them to the /etc/modules file:

echo "overlay" >> /etc/modules
echo "br_netfilter" >> /etc/modules 
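On systemd-based distributions there is an equivalent approach: dropping a config file into /etc/modules-load.d/, which systemd reads at boot. This is just a sketch, and the file name k8s.conf is an arbitrary choice:

cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF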

After enabling the br_netfilter module, we must enable IP forwarding on the Linux kernel in order to enable networking between Pods and Nodes. IP forwarding allows the Linux kernel to route packets from one network interface to another. By default, IP forwarding is disabled on most Linux distributions for security reasons, since it allows a machine to be used as a router.

However, in a Kubernetes cluster, we need IP forwarding to be enabled to allow Pods to communicate with each other, as well as to allow traffic to be forwarded to the outside world. Without IP forwarding, Pods would not be able to access external resources or communicate with each other, effectively breaking the cluster.

To do so, we write "1" to the ip_forward kernel parameter file exposed under /proc/sys:

echo 1 > /proc/sys/net/ipv4/ip_forward
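Keep in mind that writing to /proc/sys only lasts until the next reboot. To make the setting persistent, a common approach is to add a sysctl drop-in file and reload it; the file name 99-kubernetes.conf below is just an example:

cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system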

With these prerequisites for our Kubernetes cluster out of the way, we can proceed to install Kubelet — the beating heart of Kubernetes.


Installing Kubelet

Installing Kubelet is perhaps the easiest step since it is very well documented in the official Kubernetes documentation. Basically, we need to issue the following commands:

mkdir -p /etc/apt/keyrings
sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl 

The line apt-mark hold kubelet kubeadm kubectl tells our package manager to avoid upgrading these components, since this is something we would want to do manually when upgrading the cluster.
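When the time for a manual upgrade does come, the hold can be lifted and then re-applied afterwards:

apt-mark unhold kubelet kubeadm kubectl
# ... upgrade the packages ...
apt-mark hold kubelet kubeadm kubectl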

Run the command, cross your fingers and hope it works:


Now that we have installed kubelet, kubeadm, and kubectl, we can proceed to install a container runtime that will run the Kubernetes components, Pods, and containers.


Installing our container runtime

Kubernetes is an orchestration system for containerized workloads. It manages the deployment, scaling, and operation of containerized applications across a cluster of nodes. However, Kubernetes itself does not run containers directly. Instead, it relies on a container runtime, which is responsible for starting, stopping, and managing the containers. The container runtime is the software that runs the containers on the nodes in the Kubernetes cluster.

There are several container runtimes that can be used with Kubernetes, including Docker, cri-o, containerd, and others. The choice of container runtime depends on factors such as performance, security, and compatibility with other tools in the infrastructure. For our purposes, we will choose cri-o as our container runtime.

By following the official cri-o documentation, we first need to specify the variables that are necessary to download the desired cri-o version for our specific Linux distribution. Given that we are running Debian 11 and cri-o 1.24 is the latest version at the time of writing, we will export a few variables:

export OS=Debian_11
export VERSION=1.24

We can also double-check that these variables were saved in our current terminal session by piping the env command to grep:

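Something along these lines should print both variables back; the exact grep pattern is up to you:

env | grep -E '^(OS|VERSION)='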

Now we can proceed to install the container runtime:

echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | apt-key add -
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | apt-key add -
apt-get update
apt-get install -y cri-o cri-o-runc

If successful, we should get the following output:


Now that we have cri-o packages installed, we must enable and start cri-o as a service:

systemctl enable crio
systemctl start crio
systemctl status crio
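As a small shortcut, enabling and starting the service can also be done in a single step:

systemctl enable --now crio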

The command systemctl status crio will output the current service state:


Congrats! Our Kubernetes node is ready to be bootstrapped.


Bootstrapping our First node

I hope you strapped your boots tight because our Pod network will have a CIDR of 10.100.0.0/16.

What the hell is a Network CIDR?

A network CIDR (Classless Inter-Domain Routing) block is a notation used to represent a network prefix in IP addressing. It is written as an IP address followed by a prefix length, in the form "<IP address>/<prefix length>". The prefix length defines which part of the IP address is the network portion and which is the host portion.

In the case of the range 10.100.0.0/16, the IP address is 10.100.0.0 and the prefix length is 16 bits. This means that the first 16 bits of the address identify the network and the remaining 16 bits identify hosts, giving us up to 65,536 IP addresses, ranging from 10.100.0.0 to 10.100.255.255.
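As a quick sanity check, we can compute the number of addresses in a /16 directly in the shell:

echo $((2 ** (32 - 16)))   # prints 65536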

Now that we decided that our small but mighty Kubernetes cluster will have a network of 65,536 IP addresses, we can test our configuration.

Dry run

For bootstrapping our cluster, we will be using the official kubeadm utility. Before applying any changes, we can go ahead and run kubeadm init with our network CIDR setting and the --dry-run flag, which makes no changes to the machine:

kubeadm init --pod-network-cidr=10.100.0.0/16 --dry-run

If our VM was set up properly, we should get a long output after a minute:


If the so-called "pre-flight checks" output an error, a quick Google search usually points to the fix; once the issue is resolved, we can apply the changes for real without the --dry-run flag.

Initializing our Cluster

Once we have a VM that passes the pre-flight checks, we can initialize our cluster:

kubeadm init --pod-network-cidr=10.100.0.0/16

After issuing this command, kubeadm will turn our VM into a Kubernetes Control Plane node consisting of the following main components:

  • kube-apiserver — The component that exposes the Kubernetes API, which kubectl and all other components talk to;
  • etcd — A key-value database store used for storing the state of the whole Kubernetes cluster;
  • kube-scheduler — A control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on;
  • kube-controller-manager — A control plane component that runs controller processes.

If our run was successful, we should get an output with a command that can be used to join other nodes:


Take note of this join command:

kubeadm join 192.168.122.97:6443 --token nljqps.vypo4u9y07lsw7s2 \
        --discovery-token-ca-cert-hash sha256:f820767cfac10cca95cb7649569671a53a2240e1b91fcd12ebf1ca30c095c2d6

Note: By default, this join command's token is only valid for 24 hours, after which you would have to tell kubeadm to issue a new token for joining other nodes.
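If the token has expired, a fresh join command can be generated on the Control Plane node like so:

kubeadm token create --print-join-command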

Once we have our first Control Plane node bootstrapped, we can use crictl much like we would use the docker command to see which components are running in our cri-o container runtime:

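For example, listing the running containers should show the control plane components. Depending on your setup, crictl may need to be pointed at the cri-o socket, either through /etc/crictl.yaml or with the --runtime-endpoint flag shown here:

crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps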

As we can see, the above-mentioned Kubernetes components are all running as containers inside our first Control Plane node.

Adding our first worker

By default, the Control Plane node only runs containers that are part of the Kubernetes system, but no application Pods. Now that we have a working Control Plane node, we can proceed to join our first worker node.

Before doing so, we must follow the exact same steps as we did when setting up our Control Plane node. I went ahead and opened a second VM called "worker1" in the right pane of my tmux terminal multiplexer:


After going through the same procedure, I copy the join command from the steps above:

kubeadm join 192.168.122.97:6443 --token nljqps.vypo4u9y07lsw7s2 \
        --discovery-token-ca-cert-hash sha256:f820767cfac10cca95cb7649569671a53a2240e1b91fcd12ebf1ca30c095c2d6

And paste it into the worker1 window. kubeadm will once again take a moment to go through all the pre-flight checks and then join the worker to the Control Plane node.


Once that is done, we can congratulate ourselves on setting up a Kubernetes cluster from scratch! 🥳


Accessing our cluster with kubectl

The last step is to copy the admin credentials that kubectl will use to manage our cluster's resources through the Kubernetes API server running on our Control Plane node.

For the purposes of this tutorial, we will be accessing our cluster through the Control Plane node. We will copy the configuration to our user's home directory like so:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
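If you intend to run kubectl as a regular (non-root) user, the copied file also needs to be readable by that user, which the kubeadm init output itself suggests doing with chown:

sudo chown $(id -u):$(id -g) $HOME/.kube/config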

Once we do so, we can use the kubectl command to manage, create, edit, delete, and provision resources in our Kubernetes cluster. To start with, we can get the list of nodes that are currently part of our Kubernetes cluster:

kubectl get nodes

And we get a list consisting of our Control Plane node and our worker node:


Now we can create Pods, Services, Namespaces and all the good Kubernetes stuff!


Conclusion

In conclusion, the process of bootstrapping a Kubernetes cluster with cri-o as the container runtime may seem daunting at first, but with the proper understanding and knowledge of the individual components involved, it can be achieved with relative ease. By following the steps outlined in this tutorial, we have gained a deeper understanding of how Kubernetes operates and the requirements necessary to set up a functional cluster. Armed with this knowledge, we can now confidently experiment and explore the full potential of Kubernetes in our development and production environments.
