DEV Community

CiCube for CICube

Posted on • Originally published at cicube.io

Kubernetes Network Policies

Introduction

Kubernetes has become the dominant platform for container orchestration in modern cloud-native environments. One critical piece of functionality, however, is managing network traffic between pods. Kubernetes Network Policies provide a way to define rules that govern communication paths both within a cluster and with external entities. In this post, we will dive into how Network Policies work and offer guidance on building robust, secure Kubernetes deployments.

Understanding the Scope of Network Policies

Kubernetes Network Policies control network traffic at the IP address and port level. They regulate communication between pods and can restrict external access so that only authorized clients can reach your services. Because the policies are enforced by the cluster's network plugin (CNI), choosing a plugin that supports NetworkPolicy is critical.

Network Policies target pods directly using three types of identifiers: other pods, namespaces, and IP blocks. For instance, you can specify which pods may communicate with a given pod based on their labels, creating a controlled communication environment. Say I have a pod labeled "app=backend" that should only receive traffic from pods in the "production" namespace and from specific IP addresses within a certain range.

Here is how the NetworkPolicy could control that traffic:
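The following sketch shows one way this could look. The specifics here are illustrative assumptions: the policy name, the `environment=production` namespace label, and the `192.168.10.0/24` range are not from any real deployment.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-ingress          # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend               # the pod this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:         # namespaces labeled environment=production
        matchLabels:
          environment: production
    - ipBlock:                   # plus an illustrative external range
        cidr: 192.168.10.0/24
```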

Note that in a namespace with no policies, every pod can reach every other pod, which is why proper policies matter. If I need to restrict access in a production deployment, I define explicit ingress and egress rules on the relevant pods, allowing controlled interaction between services across the cluster. The rest of this section looks at the types of isolation that Kubernetes Network Policies can enforce on pods, namely ingress and egress isolation. By "isolation" I mean blocking communication directed to or from certain pods.

By default, pods are non-isolated: they accept all ingress and egress traffic. Once a NetworkPolicy that specifies ingress or egress rules selects a pod, those rules take effect. For example, if I apply a policy to a pod that only allows connections from a particular namespace or IP block, that pod becomes isolated from all other traffic not specified in the policy.

Consider an egress rule that permits traffic only to an external IP range, say 10.0.0.0/24, combined with an ingress rule that allows incoming connections only from the 'frontend' namespace. The result is a sandboxed pod: Pod A cannot communicate with Pod B unless a policy explicitly allows it. Such configurations reduce connectivity to a clear, intended communication path, tightening security within my Kubernetes cluster.
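That setup could be sketched as follows; the policy name, the `app=pod-a` label, and the assumption that the frontend namespace is labeled `name=frontend` are all illustrative (labels must actually be applied to the Namespace object):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sandbox-pod-a            # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: pod-a                 # illustrative label for Pod A
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:         # only namespaces labeled name=frontend
        matchLabels:
          name: frontend
  egress:
  - to:
    - ipBlock:                   # only the external range is reachable
        cidr: 10.0.0.0/24
```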

Creating a Kubernetes NetworkPolicy

In the following example, I will show how to create a Kubernetes NetworkPolicy. By controlling the ingress and egress traffic, we can continue to have a more secure environment. The following is a snippet of a sample NetworkPolicy in YAML that uses selectors to logically control the network traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.16.0.0/16
        except:
        - 172.16.1.0/24
    - namespaceSelector:
        matchLabels:
          project: frontend
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 5432
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 8080

In the example above, the podSelector selects all pods labeled role=db, and policyTypes declares that this policy controls both ingress and egress.

  • For ingress, it allows TCP traffic on port 5432 from three sources:

    1. The IP block 172.16.0.0/16, excluding 172.16.1.0/24.
    2. All pods in namespaces labeled project=frontend, as matched by the namespaceSelector.
    3. Pods labeled role=frontend in the policy's own namespace.
  • For egress, it allows traffic to the CIDR range 10.0.0.0/24 on port 8080.

These settings also demonstrate the additive nature of policies: if this policy is one of several that select a pod, the effective allowed connections are the union of all matching ingress and egress rules.

Default Behavior of Kubernetes Traffic Policies

In this section, I'll discuss the default behavior of Kubernetes traffic policies when no NetworkPolicy is in place. By default, all pods in a namespace accept all ingress and egress traffic, which can become a security risk if unwanted traffic is let through. To counter this, we can create 'default deny' policies that block all ingress and egress traffic unless it is explicitly allowed.

Implementing Default Deny Policies

Here is a YAML configuration that could be used to apply the default deny policy for ingress traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress

This policy denies all ingress traffic to all pods within the namespace unless specified by another policy. Similarly, for denying all egress traffic, the following YAML can be used:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
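The two policies above can also be combined into a single default-deny policy covering both directions:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}                # selects every pod in the namespace
  policyTypes:                   # declaring both types with no rules
  - Ingress                      # denies all ingress...
  - Egress                       # ...and all egress by default
```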

These policies give us a secure deny-by-default posture, and we carve out exceptions by allowing selected traffic with more granular NetworkPolicies. If you instead want a default allow-all policy, you can create one along these lines:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
  namespace: default
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress

This ensures that all pods in the namespace accept incoming traffic, even if other policies would otherwise isolate them. Setting strong baseline policies is a strategic foundation of any Kubernetes network security strategy.

Advanced Policy Features

One of the powerful features of Network Policies is the ability to target a range of ports. Using the endPort field (stable since Kubernetes v1.25), you can specify a port range in a NetworkPolicy rule. Here is an example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
      ports:
      - protocol: TCP
        port: 32000
        endPort: 32768

This policy allows outbound TCP traffic from pods labeled role=db to any IP in the range 10.0.0.0/24, provided the destination port falls within 32000-32768.

Furthermore, Kubernetes lets a single rule target multiple namespaces via a namespaceSelector with matchExpressions, allowing greater flexibility. Note that the following example assumes your namespaces carry a custom "namespace" label, which must be applied to the Namespace objects themselves:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-namespaces
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchExpressions:
          - key: namespace
            operator: In
            values: ["frontend", "backend"]
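Since Kubernetes v1.21, every namespace automatically receives the immutable label kubernetes.io/metadata.name set to the namespace's name, so the same rule can be written without maintaining custom labels (the policy name here is a hypothetical variant):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-namespaces-by-name        # hypothetical variant name
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchExpressions:
          - key: kubernetes.io/metadata.name   # added automatically by Kubernetes
            operator: In
            values: ["frontend", "backend"]
```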

Current Limitations

Despite this flexibility, Kubernetes Network Policies have limitations that developers should be aware of. One significant gap is the lack of service-level targeting: you cannot allow or deny traffic to a Service by name. Instead, you must rely on workarounds such as consistent pod labeling or IP ranges.

Another limitation is that there is no cluster-wide default policy mechanism that applies across all namespaces and pods; default deny or allow policies must be created manually in each namespace.

Recent Enhancements

Recent versions of Kubernetes have added several important features, such as label-based targeting for namespaces. This increases policy flexibility and makes it possible to write much more specific network rules, effectively improving security in your clusters.

Best Practices

Network Policies push you to think beyond basic use cases. Overlapping policies and dynamic changes can cause timing issues: a pod may start before the policies it depends on are in effect. To make deployments resilient, use init containers to verify network connectivity requirements before the application containers start. Policies should also be continuously monitored and verified, so that your security expectations match reality across pod life cycles.
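As a sketch of that init-container pattern (the pod name, image, target host, and port are all illustrative assumptions), a pod can block until a required endpoint is reachable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-connectivity-check   # hypothetical pod
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36               # illustrative image
    # Retry until the database endpoint is reachable; the main
    # containers will not start until this loop succeeds.
    command: ['sh', '-c',
      'until nc -z db.default.svc.cluster.local 5432; do sleep 2; done']
  containers:
  - name: app
    image: myapp:latest               # illustrative application image
```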

Conclusion

Kubernetes Network Policies are a core building block for a secure and efficient networking setup within a cluster. They give both developers and operators better control over network interactions, stronger security, and more efficient use of resources. Once mastered, they help overcome challenges such as applying rules in step with pod lifecycle events, and they significantly improve the robustness of a Kubernetes deployment. Keep exploring new techniques to take full advantage of these capabilities in your infrastructure.
