Understanding Node Affinity in Kubernetes

Welcome back to our Kubernetes series! In this installment, we delve into an essential scheduling concept: Node Affinity. Node Affinity tells the Kubernetes scheduler to place pods on nodes that carry specific labels, ensuring particular workloads run on designated nodes.

Recap: Taints and Tolerations

Previously, we explored Taints and Tolerations: a taint lets a node repel pods, and a toleration lets a pod be scheduled onto a tainted node anyway. However, this mechanism only keeps pods away from nodes; it cannot guarantee that a pod lands on a specific node, and expressing multiple conditions quickly becomes awkward.
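As a quick refresher, here is a minimal sketch of that mechanism; the node name node1 and the gpu=true key/value are hypothetical:

# Taint a node so that only pods tolerating the taint may be scheduled on it
kubectl taint nodes node1 gpu=true:NoSchedule

A pod then opts in with a toleration:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: app
      image: nginx
  # The toleration allows this pod onto the tainted node, but nothing
  # forces the scheduler to pick that node; that gap is what Node
  # Affinity fills.
  tolerations:
    - key: "gpu"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"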

Introducing Node Affinity

Node Affinity addresses these limitations by enabling Kubernetes to schedule pods onto nodes that match specified labels. Let's break down how Node Affinity works:

Matching Pods with Node Labels

Consider a scenario with three nodes labeled based on their disk types (the kubectl commands to apply these labels are sketched after the list):

  • Node 1: disk=HDD
  • Node 2: disk=SSD
  • Node 3: disk=SSD
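
A hedged sketch of applying those labels; the node names node1 through node3 are assumptions:

kubectl label nodes node1 disk=HDD
kubectl label nodes node2 disk=SSD
kubectl label nodes node3 disk=SSD
kubectl get nodes --show-labels   # verify the labels took effect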

We want to ensure:

  • HDD-intensive workloads run only on Node 1.
  • SSD-intensive workloads are scheduled on Node 2 or Node 3.

Affinity Rules

Node Affinity offers two rule types, requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, to define scheduling behavior:

  • requiredDuringSchedulingIgnoredDuringExecution: A hard rule. The pod is scheduled only onto a node whose labels match; if no node matches, the pod remains Pending.
  • preferredDuringSchedulingIgnoredDuringExecution: A soft rule. The scheduler prefers a node with matching labels but falls back to other nodes if no match is found.

In both cases, the IgnoredDuringExecution suffix means the rule is evaluated only at scheduling time; if a node's labels change later, pods already running on it are not evicted.

Example: YAML Configuration

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx
  affinity:
    nodeAffinity:
      # Hard requirement: schedule this pod only onto nodes whose
      # disk label is either SSD or HDD.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disk
                operator: In
                values:
                  - SSD
                  - HDD

In this example, the pod example-pod can only be scheduled onto nodes labeled with disk=SSD or disk=HDD.
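
For comparison, here is a hedged sketch of the soft variant: a pod that prefers SSD nodes but can still be scheduled elsewhere if no SSD node is available. The pod name preferred-pod and the weight value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: preferred-pod
spec:
  containers:
    - name: app
      image: nginx
  affinity:
    nodeAffinity:
      # Soft preference: the scheduler favors matching nodes (weight 1-100)
      # but falls back to any schedulable node if none match.
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disk
                operator: In
                values:
                  - SSD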

Practical Demo

We applied this configuration and observed how Kubernetes scheduled pods onto matching nodes. Even when node labels were modified after scheduling, existing pods kept running where they were, which is exactly the IgnoredDuringExecution behavior described above: affinity rules are evaluated only at scheduling time.
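
To reproduce this yourself, the steps look roughly like the following; the file name pod.yaml and the node name node2 are assumptions:

kubectl apply -f pod.yaml
kubectl get pods -o wide                         # see which node the pod landed on
kubectl label nodes node2 disk=HDD --overwrite   # change the label after scheduling
kubectl get pods -o wide                         # the running pod stays where it is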

Key Differences from Taints and Tolerations

While Taints and Tolerations let nodes repel pods that do not explicitly tolerate them, Node Affinity lets pods require or prefer nodes that match given criteria. The two are complementary: affinity draws a pod toward certain nodes but does not stop other pods from landing there, while a taint keeps other pods away but does not attract tolerating pods. Understanding this distinction is crucial for workload optimization and resource allocation in Kubernetes clusters.
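
A common pattern for truly dedicated nodes combines both mechanisms: taint the node to keep unrelated pods off it, and add affinity so the workload is drawn to it. The taint key dedicated=ssd-workloads below is hypothetical:

# Keep pods without a matching toleration off node2
kubectl taint nodes node2 dedicated=ssd-workloads:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: dedicated-pod
spec:
  containers:
    - name: app
      image: nginx
  # Toleration: allows this pod onto the tainted node
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "ssd-workloads"
      effect: "NoSchedule"
  # Affinity: requires this pod to land on an SSD-labeled node
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disk
                operator: In
                values:
                  - SSD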

Conclusion

Node Affinity enhances Kubernetes scheduling capabilities by allowing fine-grained control over pod placement based on node attributes. Understanding and effectively utilizing Node Affinity can significantly improve workload performance and cluster efficiency.

Stay tuned for our next installment, where we'll explore Kubernetes resource requests and limits—a critical aspect of optimizing resource utilization in your Kubernetes deployments.

For further reference, check out the accompanying YouTube video.
