Deepesha Burse

Cloud Native Networking Using eBPF

Cloud Native? What’s that?

Nowadays we expect software to be up 24/7, to release frequent updates to keep up with competitors, to scale up and down as required, and much more. The limitations of monolithic architectures prevent us from keeping up with these ever-growing client needs.

These requirements, combined with the availability of new platforms on which we run software, have directly resulted in the emergence of a new architectural style: cloud-native software. The focus shifts to making applications more stable and resilient than the infrastructure they run on. There is a growing awareness of the benefits of deploying loosely coupled microservices in containers, most of which are orchestrated with Kubernetes.

Cloud-native software is highly distributed, must operate in a constantly changing environment, and is itself constantly changing.

What is Cloud Native Networking?

Cloud-native architectures have specific network requirements, most of which are not met by traditional network infrastructure. Containers are changing not only how applications are developed but also how applications communicate with one another. The network is just as critical for these production deployments and must itself have cloud-native characteristics: intelligent automation, elastic scalability, and security. Because containers are so dynamic, this environment also demands much greater visibility and observability.

Cloud Native Networking allows containers to communicate with other containers or hosts to share resources and data. Typically, it is based on the standards set by the Container Network Interface (CNI). The Container Network Interface was designed as a simple contract between network plugins and the container runtime. Many projects, including Apache Mesos, Kubernetes, and rkt, have adopted CNI.

CNI has the following key characteristics:

  • CNI defines the desired input and output of CNI network plugins using a JSON schema (see the sketch below).
  • CNI allows multiple plugins to run for a container, connecting it to networks driven by different plugins.
  • CNI describes networks in JSON configuration files, which are instantiated as new namespaces when the CNI plugins are invoked.
  • CNI plugins support adding container network interfaces to networks and removing them from networks.

A project that implements the CNI specification is called a CNI Plugin.
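
To make the JSON contract concrete, here is a minimal sketch in Go that parses a hypothetical network configuration of the kind a runtime hands to a plugin. The field names (cniVersion, name, type, ipam) follow the CNI specification; the concrete values and the struct are illustrative assumptions, not taken from any real deployment.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// netConf mirrors a handful of the standard fields of a CNI network
// configuration: cniVersion, name, type, and an ipam section.
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
	IPAM       struct {
		Type   string `json:"type"`
		Subnet string `json:"subnet"`
	} `json:"ipam"`
}

func main() {
	// Hypothetical configuration, for illustration only.
	raw := `{
		"cniVersion": "0.4.0",
		"name": "example-net",
		"type": "bridge",
		"ipam": { "type": "host-local", "subnet": "10.22.0.0/16" }
	}`

	var conf netConf
	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
		panic(err)
	}
	fmt.Printf("network %q is handled by the %q plugin (IPAM: %s, subnet %s)\n",
		conf.Name, conf.Type, conf.IPAM.Type, conf.IPAM.Subnet)
}
```

In a real invocation, the container runtime passes this JSON to the plugin on stdin, together with environment variables such as CNI_COMMAND that tell the plugin whether to add or delete an interface.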

More on CNI Plugins

CNI plugins fall under three major categories: routed networks, VXLAN overlays, and additional features.


Categories of CNI Plugins

Routed Networks:
A typical implementation in this category is kube-router. It works by installing routes on your hosts that point to the containers running on each node and then propagating those routes throughout the cluster. If we require support for more advanced features (like network policy in Kubernetes), we can use Project Calico.
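
As a rough illustration of the routed approach, the sketch below installs a host route that sends traffic for a peer node's pod CIDR via that node's address, which is essentially what a routed-network plugin programs (through netlink rather than the `ip` command). The CIDR and node IP are hypothetical, and the snippet needs root privileges on Linux.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical values: a peer node's pod CIDR and that node's
	// reachable address on the cluster network.
	podCIDR := "10.244.2.0/24"
	nodeIP := "192.168.1.12"

	// Equivalent to running: ip route add 10.244.2.0/24 via 192.168.1.12
	out, err := exec.Command("ip", "route", "add", podCIDR, "via", nodeIP).CombinedOutput()
	if err != nil {
		fmt.Printf("route add failed: %v (%s)\n", err, out)
		return
	}
	fmt.Printf("installed route to %s via %s\n", podCIDR, nodeIP)
}
```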

VXLAN Overlays:
This approach also connects containers and lets them communicate with one another, but by encapsulating their traffic in a VXLAN overlay. The simplest implementation is Flannel; if we want a VXLAN-based CNI Plugin with more advanced features, such as using a gossip protocol to connect all the nodes and share information about them across the cluster, we can use Weave Net.
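
For the overlay approach, the following sketch creates a VXLAN device on top of an existing interface, roughly what a VXLAN backend sets up on each node (again via netlink in practice). The VNI, device name, and underlay interface are hypothetical; 4789 is the IANA-assigned VXLAN UDP port.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Equivalent to: ip link add vxlan0 type vxlan id 42 dev eth0 dstport 4789
	// The VNI (42), device name, and underlay interface (eth0) are assumptions.
	args := []string{"link", "add", "vxlan0", "type", "vxlan",
		"id", "42", "dev", "eth0", "dstport", "4789"}

	out, err := exec.Command("ip", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("vxlan setup failed: %v (%s)\n", err, out)
		return
	}
	fmt.Println("created vxlan0; container traffic can now be encapsulated over the underlay")
}
```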

Why Cilium?

As we can see, Cilium spans all three major categories of CNI Plugins. It offers a great deal of flexibility and features, such as Layer 7 network policy (the application layer, where microservices communicate with one another), policy decisions based on application-level traffic, and improved observability.
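
To illustrate what a Layer 7 rule looks like, here is a hedged sketch: a hypothetical CiliumNetworkPolicy that allows only HTTP GET requests to /public from "frontend" pods to "backend" pods, embedded in a small Go helper that writes the manifest to disk for kubectl. The label names and path are assumptions made up for the example.

```go
package main

import (
	"fmt"
	"os"
)

// A hypothetical CiliumNetworkPolicy: only "frontend" pods may issue
// HTTP GET /public requests to "backend" pods. The L4 part (port 80/TCP)
// is enforced in eBPF; the HTTP rule is enforced by Cilium's L7 proxy.
const l7Policy = `apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"
`

func main() {
	// Write the manifest so it can be applied with: kubectl apply -f l7-policy.yaml
	if err := os.WriteFile("l7-policy.yaml", []byte(l7Policy), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote l7-policy.yaml")
}
```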

Cilium implements the CNI specification using eBPF and XDP: XDP lets Cilium attach its programs as close to the physical interface as possible, and BPF programs enable highly efficient packet processing directly in the kernel. Cilium loads endpoint/IP mappings into BPF maps so that the BPF programs can access them quickly from the kernel.
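
The sketch below shows the general eBPF/XDP pattern using the github.com/cilium/ebpf library: load a precompiled BPF object and attach one of its programs to an interface at the XDP hook. The object file name (bpf_filter.o), program name (xdp_filter), and interface (eth0) are hypothetical; Cilium itself generates, compiles, and loads its programs internally rather than through a standalone loader like this.

```go
package main

import (
	"log"
	"net"
	"os"
	"os/signal"

	"github.com/cilium/ebpf"
	"github.com/cilium/ebpf/link"
)

func main() {
	// Load a precompiled BPF object file (hypothetical name); this is only
	// the general load-and-attach pattern, not Cilium's own loader.
	coll, err := ebpf.LoadCollection("bpf_filter.o")
	if err != nil {
		log.Fatalf("loading BPF collection: %v", err)
	}
	defer coll.Close()

	prog := coll.Programs["xdp_filter"] // hypothetical program name
	if prog == nil {
		log.Fatal("program xdp_filter not found in object file")
	}

	// Attach at the XDP hook, i.e. as close to the physical interface as
	// possible, so packets are processed before the kernel allocates an skb.
	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		log.Fatalf("looking up interface: %v", err)
	}
	xdpLink, err := link.AttachXDP(link.XDPOptions{
		Program:   prog,
		Interface: iface.Index,
	})
	if err != nil {
		log.Fatalf("attaching XDP program: %v", err)
	}
	defer xdpLink.Close()

	// Keep the process alive until interrupted, then detach cleanly.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	log.Println("XDP program attached to eth0; Ctrl+C to detach and exit")
	<-sig
}
```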

Cilium Component Overview

Without going into much detail, let us trace how Cilium works. Suppose, in our case, Kubernetes is the input to Cilium's policy repository. That input goes to the Cilium daemon, where it is compiled into BPF bytecode. The generated code is injected into the BPF programs running in the kernel, which connect our physical interface to our containers.


Source: https://docs.cilium.io/en/stable/concepts/overview/

References:

  • Talk on "Cloud Native Networking with eBPF" by Raymond Maika
  • Cloud Native Patterns by Cornelia Davis
  • Demystifying Cloud-Native Networking
  • Cloud-Native Network Functions
