Yogesh Sharma for AWS Community Builders


EKS Networking VPC CNI

In the rapidly evolving landscape of cloud-native applications, the demand for flexible and performant networking solutions has never been greater. Amazon Elastic Kubernetes Service (EKS) provides a powerful foundation for container orchestration, and when coupled with custom Container Networking Interface (CNI) configurations, it opens up a world of possibilities for fine-tuning your networking architecture. In this blog, we will walk you through the step-by-step process of enabling custom CNI networking in your Amazon EKS clusters, empowering you to harness the full potential of AWS networking capabilities. From expanding address space to achieving finer-grained segmentation, this guide will equip you with the knowledge to optimize your EKS networking for even the most demanding workloads.

Basic Pod networking

K8s network communication happens in several ways, depending on the sources and destinations:

  • As the containers in a Pod share the same network namespace and port space, they can communicate with each other using the localhost (127.0.0.1) address.
  • Each Pod has a corresponding interface (veth) in the root network namespace of the host, as well as its own interface inside its network namespace. Together these form a veth pair, which acts as a virtual network cable between the Pod's network namespace and the host networking stack, which holds the actual Ethernet interface. Pods that want to talk to each other use the cluster DNS to resolve a Service name to an IP address, and the ARP protocol to map that IP address to a Pod's Ethernet (MAC) address (see the sketch after this list).
  • If the target Pod is on another node, the cluster DNS still resolves the Service name to an IP address, but the local ARP request fails, so the packet is routed out of the host onto the network, where it finds a route to the target IP address.
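
To make the name-resolution step concrete, here is a minimal Python sketch of what a client process inside a Pod does when it talks to another Pod through a Service. It assumes it runs inside a Pod whose resolver points at the cluster DNS; the Service name my-service.default.svc.cluster.local and port 8080 are hypothetical placeholders.

```python
# Minimal sketch: resolving a Service name via the cluster DNS and opening a
# TCP connection to it, as a Pod-to-Pod client would. The Service FQDN and
# port below are hypothetical placeholders.
import socket

service_fqdn = "my-service.default.svc.cluster.local"  # hypothetical Service
port = 8080                                            # hypothetical port

# Inside a Pod, /etc/resolv.conf points at the cluster DNS (CoreDNS), so an
# ordinary resolver call returns the Service's ClusterIP.
addr_info = socket.getaddrinfo(service_fqdn, port, proto=socket.IPPROTO_TCP)
cluster_ip = addr_info[0][4][0]
print(f"{service_fqdn} -> {cluster_ip}")

# The connection itself is plain TCP; kube-proxy and the CNI take care of
# getting the packet to a backing Pod, whether it is on this node or another.
with socket.create_connection((cluster_ip, port), timeout=5) as conn:
    print("connected to", conn.getpeername())
```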

Understand EKS Networking

EKS is a managed service: the control plane runs in a separate VPC that is owned and operated by AWS. Let’s start with a basic EKS deployment, a private cluster with two EC2 instances in a node group. The cluster is configured to use two private VPC subnets, and the node group is deployed into the same two subnets.
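
If you want to confirm how an existing cluster is wired into the VPC, a minimal boto3 sketch such as the one below reads the cluster's VPC configuration; the cluster name my-private-cluster is a hypothetical placeholder.

```python
# Minimal boto3 sketch: inspect the subnets and endpoint settings of an
# existing EKS cluster. The cluster name is a hypothetical placeholder and
# credentials/region come from the normal AWS SDK configuration.
import boto3

eks = boto3.client("eks")

cluster = eks.describe_cluster(name="my-private-cluster")["cluster"]
vpc_config = cluster["resourcesVpcConfig"]

print("VPC:              ", vpc_config["vpcId"])
print("Cluster subnets:  ", vpc_config["subnetIds"])
print("Private endpoint: ", vpc_config["endpointPrivateAccess"])
print("Public endpoint:  ", vpc_config["endpointPublicAccess"])
```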

(Figure: EKS network)
EKS is deployed with the AWS VPC CNI as the default CNI for the cluster. The VPC CNI works in conjunction with the kubelet to request an IP address from the VPC, map it to one of the ENIs attached to the host, and assign it to the Pod. The number of EC2 ENIs, and therefore the number of IP addresses that can be assigned to Pods, is limited per EC2 instance type. For example, an m4.4xlarge node can have up to 8 ENIs, and each ENI can have up to 30 IP addresses, which means you can theoretically support up to 240 addresses per worker node.

The disadvantage of this approach is that, given the ephemeral nature of Pods/containers, the EKS cluster can quickly consume all of your available subnet addresses, preventing you from deploying new Pods and/or other AWS services such as databases (RDS). This is particularly problematic if you have small VPC or subnet (CIDR) ranges.
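
These per-instance-type limits are published in the EC2 documentation, but you can also read them from the EC2 API. Here is a minimal boto3 sketch; the instance types and region are just examples.

```python
# Minimal boto3 sketch: look up the ENI and per-ENI IPv4 limits that cap how
# many Pod addresses the VPC CNI can hand out on a given instance type.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for instance_type in ["m4.4xlarge", "m5.large"]:
    info = ec2.describe_instance_types(InstanceTypes=[instance_type])
    net = info["InstanceTypes"][0]["NetworkInfo"]
    enis = net["MaximumNetworkInterfaces"]
    ips_per_eni = net["Ipv4AddressesPerInterface"]
    print(
        f"{instance_type}: {enis} ENIs x {ips_per_eni} IPv4 addresses "
        f"= {enis * ips_per_eni} addresses"
    )
```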

Non-routable secondary addresses

The concept of non-routable addressing is to use, for Pod addresses in AWS, either a range that is already in use on premises (and therefore never routed to it from AWS) or, ideally, one of the non-RFC 1918 ranges that is not routed on premises at all, allowing a large range to be dedicated to Pods.
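
On the AWS side, this boils down to attaching a secondary CIDR to the VPC and creating Pod subnets from it. The sketch below uses boto3 with placeholder values: the VPC ID and Availability Zones are hypothetical, and the 100.64.0.0/16 CGNAT range is just one common choice of non-routable space.

```python
# Minimal boto3 sketch: attach a non-routable secondary CIDR to an existing
# VPC and carve per-AZ Pod subnets out of it. VPC ID, AZs, and the CGNAT
# range below are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2")

vpc_id = "vpc-0123456789abcdef0"   # placeholder VPC ID
secondary_cidr = "100.64.0.0/16"   # assumed non-routable (CGNAT) range

# 1. Associate the secondary CIDR with the VPC (in practice, wait until the
#    association state is "associated" before creating subnets).
ec2.associate_vpc_cidr_block(VpcId=vpc_id, CidrBlock=secondary_cidr)

# 2. Create one Pod subnet per Availability Zone from the new range.
pod_subnets = {
    "us-east-1a": "100.64.0.0/19",
    "us-east-1b": "100.64.32.0/19",
}
for az, cidr in pod_subnets.items():
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)
    print("created", subnet["Subnet"]["SubnetId"], "in", az)
```

With the VPC CNI's custom networking feature, these Pod subnets are then referenced from per-AZ ENIConfig objects so that Pod ENIs are allocated from the non-routable range while the nodes themselves keep their routable addresses.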

(Figure: secondary IP addressing)

Prefix addressing

By default, the number of addresses available to assign to Pods on an EC2 worker node is determined by the number of network interfaces attached to the node and the number of IP addresses each ENI supports. For example, an m5.large node can have up to 3 ENIs, and each ENI can have up to 10 IP addresses, so with the primary address on each ENI reserved, it can support 29 Pods based on the following calculation:

3 ENIs * (10 IP addresses - 1) + 2 (AWS CNI and kube-proxy Pods per node) = 29 Pods per node
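
The same arithmetic as a tiny Python sketch, assuming the per-instance-type ENI and IP limits are supplied by the caller (for example from the describe_instance_types call shown earlier):

```python
# Minimal sketch of the default (secondary-IP) max-Pods arithmetic. The "+ 2"
# accounts for the aws-node and kube-proxy Pods, which use host networking.
def max_pods_secondary_ip(eni_count: int, ips_per_eni: int) -> int:
    # The primary address of each ENI is not handed out to Pods, hence "- 1".
    return eni_count * (ips_per_eni - 1) + 2

print(max_pods_secondary_ip(3, 10))  # m5.large -> 29
```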

Version 1.9.0 or later of the Amazon VPC CNI supports prefix assignment mode, enabling you to run more Pods per node on AWS Nitro-based EC2 instance types. This is done by assigning /28 IPv4 address prefixes to each of the host ENIs as long as you have enough space in your VPC CIDR range:

3 ENIs * (9 prefixes per ENI * 16 IPs per prefix) + 2 = 434 Pods per node

However, please note that the Kubernetes scalability guide recommends a maximum of 110 Pods per node, and in most cases this will be the maximum enforced on the node. Prefix addressing can be used in conjunction with non-routable addresses, and pairing them helps, because prefix assignment only works if contiguous /28 blocks can be allocated from the subnet's CIDR range; a large, unfragmented secondary range makes that far more likely.
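
And the prefix-assignment version of the calculation, again as a sketch of the arithmetic only, with the 110-Pod recommendation applied as a cap:

```python
# Minimal sketch of the prefix-assignment max-Pods arithmetic, capped at the
# 110 Pods per node recommended by the Kubernetes scalability guide.
def max_pods_prefix_mode(eni_count: int, ips_per_eni: int, cap: int = 110) -> int:
    # Every secondary-IP slot on an ENI can instead hold a /28 prefix,
    # i.e. 16 addresses; the ENI's primary address is not used for prefixes.
    prefixes_per_eni = ips_per_eni - 1
    theoretical = eni_count * prefixes_per_eni * 16 + 2
    return min(theoretical, cap)

print(max_pods_prefix_mode(3, 10))   # m5.large -> 110 (theoretical value is 434)
print(3 * (10 - 1) * 16 + 2)         # uncapped figure from the formula above: 434
```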

With custom CNI networking, you've not only elevated your EKS clusters but also set the stage for a more efficient, scalable, and secure cloud-native environment. As you navigate this dynamic landscape, keep exploring new AWS features and best practices to stay at the forefront of cloud technology. Remember, the key to success lies in thoughtful planning, careful execution, and continuous monitoring. Regularly evaluate your networking architecture to ensure it aligns with your evolving application requirements. Leverage CloudWatch metrics and VPC Flow Logs for deep insights into your network's performance.
