Throughout the world of Linux, and ultimately Kubernetes, many parts of the underlying infrastructure have stayed the same. One of those pieces, in place since Kubernetes' inception, is iptables. As with all technologies, however, new innovation is needed to make environments more efficient.
In this blog post, you’ll learn about what eBPF is doing for Kubernetes and how to implement it with Cilium.
In short, eBPF is a way to run sandboxed programs in the Linux kernel without having to change kernel code, and from a Kubernetes perspective, it can take over kube-proxy's responsibilities.
The topics around eBPF are vast and could probably fill a whole book on their own. Because of that, this section can't go through every piece of eBPF. Instead, let's focus on a few key parts as they relate to Kubernetes:
- Removal of kube-proxy
- Easier scaling
kube-proxy has served Kubernetes well, but there's a problem: it uses iptables. Although iptables has been part of Linux for a long time, it doesn't scale well from a Kubernetes perspective. iptables rules are stored in a list, and when a Pod establishes a new connection to a Kubernetes Service, the kernel has to traverse every single rule until it reaches the one that matches. That doesn't seem like much for a handful of rules, but if you have thousands (which you most likely will), it becomes a serious performance concern.
From a scalability perspective, connection handling gets worse as the number of Kubernetes Services grows. One of the biggest reasons is that iptables updates are not incremental, which means kube-proxy has to rewrite the entire table for every single update.
From a security perspective, Kubernetes NetworkPolicy no longer has to be enforced at the iptables layer. This speeds up policy management because it allows a finer-grained approach without having to juggle large numbers of iptables rules.
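As a sketch of what eBPF-enforced policy looks like in practice, Cilium extends the standard NetworkPolicy API with its own CiliumNetworkPolicy resource. The example below allows ingress to one set of Pods only from another; the labels and name are placeholders, not from any real cluster:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  # Placeholder name for illustration
  name: allow-frontend-to-backend
spec:
  # Pods this policy applies to
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    # Only Pods labeled app=frontend may connect
    - fromEndpoints:
        - matchLabels:
            app: frontend
```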
eBPF gives the ability to remove iptables by injecting programs directly into the Pod network path. It improves routing performance and scalability because connections no longer have to traverse a long list of iptables rules.
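The difference comes down to data structures: a linear rule list versus a hash table. The following is a minimal bash sketch (not real iptables or eBPF) illustrating why scanning a rule list scales with table size while a hash lookup does not:

```shell
# Stand-ins: an indexed array for the iptables rule list,
# an associative array for an eBPF-style hash map.
declare -A svc_map
rules=()
for i in $(seq 1 5000); do
  rules+=("service-$i")
  svc_map["service-$i"]=$i
done

# iptables-style lookup: scan every rule until the match is found.
scan_hits=0
for r in "${rules[@]}"; do
  scan_hits=$((scan_hits + 1))
  [ "$r" = "service-5000" ] && break
done

# eBPF-style lookup: a single hash probe, regardless of table size.
map_hit=${svc_map["service-5000"]}

echo "rules scanned: $scan_hits"    # worst case: the whole table
echo "map lookup result: $map_hit"  # constant time
```

With 5,000 "rules", the worst-case scan touches all 5,000 entries, while the map lookup is a single step. This is the same shape of win Cilium gets by backing Service routing with eBPF hash tables.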
There are a ton of different Container Network Interfaces (CNIs) for Kubernetes. With so many options, you can imagine there are many different sets of functionality. Some prioritize "getting started quickly" from an ease-of-use perspective, while others are more security-focused or more "advanced".
Cilium is a CNI built around eBPF. It focuses heavily on networking, observability, and security for Kubernetes, using eBPF to get the job done.
What it does from a networking perspective at the top level isn't any different from other CNIs. It allows you to have networking in your Kubernetes cluster, assign specific CIDRs for Pods, ensure that Pods can communicate, and all the rest of the networking pieces of a CNI. The biggest difference is that it does it all with an eBPF backend that uses hash tables, instead of relying on kube-proxy and iptables.
For the Cilium install, you will need a Kubernetes cluster. This blog post shows how to install Cilium on a cluster bootstrapped with kubeadm; however, the installation method is largely the same across Kubernetes clusters, aside from the kubeadm-specific command below.
If you're planning to run kubeadm, the following command is an example of what you should use. Even if you don't use all of the flags, ensure that you include the `--skip-phases=addon/kube-proxy` flag, as it's needed so kube-proxy doesn't get installed on the control plane.
```shell
sudo kubeadm init \
  --skip-phases=addon/kube-proxy \
  --control-plane-endpoint $publicIP \
  --apiserver-advertise-address $ip_address \
  --pod-network-cidr=$cidr \
  --upload-certs
```
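Before moving on to the Cilium install, you can verify that the kube-proxy addon really was skipped. Assuming the phase was skipped correctly, the kube-proxy DaemonSet should not exist:

```shell
# Expect "Error from server (NotFound)" when the addon was skipped
kubectl get daemonset kube-proxy -n kube-system
```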
Next, ensure that Helm is installed.
```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
Add the Cilium Helm repo.
```shell
helm repo add cilium https://helm.cilium.io/
```
Install Cilium with Helm. The Helm installation below will do the following:
- Put Cilium in the kube-system Namespace
- Set the kube-proxy replacement to strict mode
- Specify the host of your control plane
- Specify the port of the control plane
```shell
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=ip_address_of_control_plane \
  --set k8sServicePort=6443
```
After a few minutes, ensure that the Cilium Pods are running successfully.
```shell
kubectl get pods -n kube-system
```
The output should show the Cilium agent and operator Pods in a Running state.
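As an optional extra check, you can confirm from inside a Cilium agent Pod that the kube-proxy replacement is active. This assumes the agent DaemonSet uses the default name `cilium`:

```shell
# Expect a KubeProxyReplacement line reporting Strict/True
# (exact wording varies by Cilium version)
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement
```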
Congrats! You have successfully set up Cilium for eBPF and removed the need for iptables and kube-proxy.