Originally published at loft.sh

Kubernetes Network Policies: A Practitioner's Guide

by Levent Ogut


Providing security for our infrastructure and applications is a never-ending process. This article looks at security in Kubernetes clusters: the traffic entering and leaving the cluster, and the traffic within it. Some organizations design their security policies as if their own workloads could be malicious. In addition, in today's world, we all use third-party plugins, libraries, and pieces of code from external resources. While this increases productivity, it also brings many security concerns. Restricting the traffic entering and leaving our applications to only what is absolutely necessary is one of the best approaches there is.

Why We Need Network Policies

It is of paramount importance to secure the traffic in our clusters. By default, all pods can talk to all pods without restriction. The NetworkPolicy resource allows us to restrict the ingress and egress traffic to/from pods. For example, it can restrict the ingress traffic of a database pod to backend pods only, and the ingress traffic of a backend pod to frontend pods only. This way, only legitimate traffic is allowed to and from our applications. For instance, if the frontend pods can only connect to the backend application, an attacker who compromises the frontend can't directly access the database or any other pods.

The functionality of controlling traffic is typically achieved in networks by using firewalls (software or hardware). In Kubernetes, that functionality is implemented by network plugins and controlled by network policies. Note that network policies are not a replacement for firewalls.

[Figure: Network policy example]

Requirements for Implementing Network Policies

Kubernetes provides networking functionality through network plugins. Unless you have a network plugin that can implement network policies, you will not be able to use this functionality. Please note that even if the API server accepts a network policy configuration, the policy has no effect unless a controller understands and enforces it. Several network plugins support network policies and much more.

Network Plugins

There are two types of network plugins:

  • CNI
  • Kubenet

CNI-type plugins follow the Container Network Interface spec and are used by the community to create feature-rich plugins. Kubenet, on the other hand, utilizes the bridge and host-local CNI plugins and offers only basic features.

Several network plugins have been developed by various organizations, including but not limited to Calico, Cilium, and Kube-Router. A complete list can be found in the Cluster Networking documentation. These network plugins provide a NetworkPolicy implementation and more, such as advanced monitoring, L7 filtering, and integration with cloud networks.

While some network plugins use Netfilter/iptables in their underlying infrastructure, others use eBPF in the underlying data path. Netfilter/iptables is very mature and built into the kernel. eBPF, on the other hand, allows you to change functionality on the fly without a kernel upgrade. Not being dependent on the kernel version has led some big players to use eBPF-based network plugins at very large scale.

It is imperative to select the correct network plugin for your Kubernetes cluster(s). If you are using cloud providers for your Kubernetes setup (such as AWS, Azure, GCP), they might already have deployed a network plugin that supports network policies. Please check the cloud provider documentation for further details.

Writing & Applying Network Policies

Isolation

In a Kubernetes cluster, all pods are non-isolated by default, meaning all ingress and egress traffic is allowed. Once a network policy with a matching selector is applied, the pod becomes isolated: it rejects all traffic that is not permitted by the aggregate of the network policies that select it. The order of the policies is not important; their union is applied.
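As an illustrative sketch of this additive behavior (the policy names and labels here are hypothetical), consider two policies that select the same pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          component: frontend
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metrics
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          component: monitoring
    ports:
    - protocol: TCP
      port: 9090

A pod labeled app=api accepts the union of what the two policies allow: port 80 from frontend pods and port 9090 from monitoring pods. Applying the policies in a different order changes nothing.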

Network Policy Resource Fields

Fields to define when writing network policies:

  • podSelector

  podSelector selects a group of pods using a LabelSelector. If empty, it selects all pods in the namespace, so beware when using it.

  • policyTypes

  policyTypes lists the types of rules that the network policy includes. Valid values are Ingress, Egress, or both.

  ingress defines the rules applied to the ingress traffic of the selected pod(s). An empty rule matches all ingress traffic. If the field is absent while Ingress is listed in policyTypes, all ingress traffic to the selected pods is denied; if Ingress is not listed either, the policy doesn't affect ingress traffic.

  egress defines the rules applied to the egress traffic of the selected pod(s). An empty rule matches all egress traffic. If the field is absent while Egress is listed in policyTypes, all egress traffic from the selected pods is denied; if Egress is not listed either, the policy doesn't affect egress traffic.
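The difference between an empty rule and an absent field is worth seeing in YAML. As a minimal sketch (the policy name allow-all-ingress is our own), a single empty rule in the ingress list explicitly allows all ingress traffic to every pod in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - {}

Removing the ingress field entirely while keeping Ingress in policyTypes flips the meaning to deny all ingress traffic, as the Default Deny Ingress example below shows.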

Egress Rules

An array of rules applied to the traffic leaving the selected pod(s). Each rule is defined with the following fields:

  • to

  to lists the destinations that traffic is allowed to reach, as any combination of podSelector, namespaceSelector, and ipBlock entries. If empty or absent, the rule matches all destinations.

  • ports

  ports lists the destination ports (and protocols) that traffic is allowed to use. If empty or absent, the rule matches all ports.
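Since the walkthrough below focuses on ingress, here is a hedged sketch of an egress policy (the name allow-dns-egress-only is our own, and we assume cluster DNS listens on port 53): it isolates every pod in the namespace for egress and then allows only DNS lookups.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress-only
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Because the to entry uses an empty namespaceSelector, DNS queries may go to any namespace in the cluster; all other egress traffic from pods in this namespace is dropped.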

Ingress Rules

An array of rules applied to the traffic coming into the selected pod(s). Each rule is defined with the following fields:

  • from

  from lists the sources allowed to reach the selected pod(s), as any combination of podSelector, namespaceSelector, and ipBlock entries. If empty or absent, the rule matches all sources.

  • ports

  ports lists the ports (and protocols) on which traffic is allowed. If empty or absent, the rule matches all ports.

Walkthrough

Let's do a walkthrough of the network policy defined below.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy-walkthrough-db
spec:
  podSelector:
    matchLabels:
      component: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.2/32
    - namespaceSelector:
        matchLabels:
          team: dba
    - podSelector:
        matchLabels:
          component: backend
    ports:
    - protocol: TCP
      port: 5432

This policy applies to all pods that carry the label component=database. The policy affects only ingress traffic, as defined in policyTypes.

The three from entries of the ingress rule are evaluated with OR. Let's look at how Kubernetes interpreted the configuration using the describe subcommand:

$ kubectl describe networkpolicy network-policy-walkthrough-db
Name:         network-policy-walkthrough-db
Namespace:    default
Created on:   2021-08-30 18:06:48 +0200 CEST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     component=database
  Allowing ingress traffic:
    To Port: 5432/TCP
    From:
      IPBlock:
        CIDR: 192.168.1.2/32
        Except: 
    From:
      NamespaceSelector: team=dba
    From:
      PodSelector: component=backend
  Not affecting egress traffic
  Policy Types: Ingress

The host with IP 192.168.1.2, all pods in namespaces labeled team=dba, and all pods in the same namespace labeled component=backend are allowed to reach the selected database pods on TCP port 5432.
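For the namespaceSelector entry to match, the source namespace itself must carry the team=dba label. A minimal sketch of such a namespace (the name dba-tools is hypothetical):

apiVersion: v1
kind: Namespace
metadata:
  name: dba-tools
  labels:
    team: dba

Without this label on the Namespace object itself, pods running in it gain no access, regardless of their own labels.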

Examples

Default Deny Ingress

A deny-all ingress policy with an empty podSelector (selecting all pods in the namespace) is a good starting point for a fresh cluster; you can then explicitly allow the required traffic. Because the podSelector is empty, the policy will continue to match new pods as they are created.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress-policy
spec:
  podSelector: {}
  policyTypes:
  - Ingress
$ kubectl describe networkpolicies default-deny-ingress-policy
Name:         default-deny-ingress-policy
Namespace:    default
Created on:   2021-08-28 16:47:33 +0200 CEST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    <none> (Selected pods are isolated for ingress connectivity)
  Not affecting egress traffic
  Policy Types: Ingress

As you can see, Kubernetes interpreted our configuration as intended. All pods in the namespace are now isolated, no ingress traffic is allowed to the pods, and egress traffic is not affected.
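A stricter variant, sketched below (the name default-deny-all is our own), isolates pods in both directions; combined with explicit allow policies, such as the DNS egress sketch earlier, it gives the namespace a deny-by-default posture:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress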

Allow Access to a Group of Pods from Another Namespace

In this example, we will look at a network policy that allows debugging pods to connect to the application pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-debug
spec:
  podSelector:
    matchLabels:
      component: app
  ingress: 
  - from:
    - podSelector:
        matchLabels:
          component: debug
      namespaceSelector:
        matchLabels:
          space: monitoring
  policyTypes:
  - Ingress

Please note that here we have a single from entry in which podSelector and namespaceSelector are combined, so both conditions must match (they are ANDed). If we had put the namespaceSelector into its own entry, the meaning would change drastically, as the sketch below shows.
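For contrast, here is a sketch of the variant with the namespaceSelector as its own entry; note the extra dash, which turns the two selectors into separate, ORed peers:

  ingress:
  - from:
    - podSelector:
        matchLabels:
          component: debug
    - namespaceSelector:
        matchLabels:
          space: monitoring

This version would admit traffic from any pod labeled component=debug in the policy's own namespace, plus all pods in any namespace labeled space=monitoring, which is a far broader grant than intended. Returning to the combined version: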

Let's check how Kubernetes interpreted the policy:

$ kubectl describe networkpolicy allow-debug
Name:         allow-debug
Namespace:    default
Created on:   2021-08-30 22:36:48 +0200 CEST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     component=app
  Allowing ingress traffic:
    To Port: <any> (traffic allowed to all ports)
    From:
      NamespaceSelector: space=monitoring
      PodSelector: component=debug
  Not affecting egress traffic
  Policy Types: Ingress

Here we only allow ingress traffic from pods labeled component=debug that run in namespaces labeled space=monitoring.

Monitoring Network Policies

Monitoring network policies and their behavior is an essential part of the deployment. Kubernetes offers the kubectl describe networkpolicy <NETWORK_POLICY_NAME> command to show how it interpreted the network policy configuration. For detailed analysis, check out your network plugin's tools. Here we have a Kubernetes cluster with the Cilium network plugin; Cilium offers a CLI tool that lets us monitor the traffic.

Let's get the IP address of our pod:

$ kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
nginx-deployment-66b6c48dd5-frsv9   1/1     Running   0          24m   10.0.0.136   valhalla   <none>           <none>

Let's get the endpoint id (in Cilium) of the pod:

$ cilium endpoint list
ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])   IPv6   IPv4         STATUS
           ENFORCEMENT        ENFORCEMENT
5          Enabled            Disabled          50873      k8s:app=nginx                        10.0.0.136   ready
...

We will monitor all traffic that goes to and comes from the endpoint with id 5 using the cilium monitor command (the --related-to flag filters events related to a given endpoint):

$ cilium monitor --related-to 5
Press Ctrl-C to quit

level=info msg="Initializing dissection cache..." subsys=monitor
Policy verdict log: flow 0xf9da54c5 local EP ID 5, remote ID 1, dst port 80, proto 6, ingress true, action allow, match L3-Only, 10.0.0.147:39772 -> 10.0.0.136:80 tcp SYN
-> endpoint 5 flow 0xf9da54c5 identity 1->50873 state new ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -> 10.0.0.136:80 tcp SYN
-> stack flow 0xbbd5210b identity 50873->1 state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.136:80 -> 10.0.0.147:39772 tcp SYN, ACK
-> endpoint 5 flow 0xf9da54c5 identity 1->50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -> 10.0.0.136:80 tcp ACK
-> endpoint 5 flow 0xf9da54c5 identity 1->50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -> 10.0.0.136:80 tcp ACK
-> stack flow 0xbbd5210b identity 50873->1 state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.136:80 -> 10.0.0.147:39772 tcp ACK
-> endpoint 5 flow 0xf9da54c5 identity 1->50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -> 10.0.0.136:80 tcp ACK, FIN
-> stack flow 0xbbd5210b identity 50873->1 state reply ifindex 0 orig-ip 0.0.0.0: 10.0.0.136:80 -> 10.0.0.147:39772 tcp ACK, FIN
-> endpoint 5 flow 0xf9da54c5 identity 1->50873 state established ifindex lxc4eced79e6ca0 orig-ip 10.0.0.147: 10.0.0.147:39772 -> 10.0.0.136:80 tcp ACK

We can see the traffic destined for our NGINX pod. The policy verdict line is of particular interest:

Policy verdict log: flow 0xf9da54c5 local EP ID 5, remote ID 1, dst port 80, proto 6, ingress true, action allow, match L3-Only, 10.0.0.147:39772 -> 10.0.0.136:80 tcp SYN

Here we can see the policy evaluation: the incoming connection to TCP port 80 (proto 6) was allowed based on an L3-only match.

Conclusion

We have explored why and how network policies are used within a Kubernetes cluster. Allowing only the required traffic is a security best practice, and Kubernetes lets us implement it through declarative configuration of network policies. Since network policies depend heavily on the labels of pods and namespaces, it is straightforward to deploy rules that automatically cover newly created resources.

It is highly recommended to test the network policies before applying them.

Observing traffic sources, destinations, and flows is imperative; as the Kubernetes API does not expose traffic statistics, learning how to use the monitoring and troubleshooting tools of your network plugin becomes very important.

The folks at Cilium have also developed a great visual network policy editor; make sure to check it out.


