Here we go with another video and fresh labs to work through. But before we get going, let's discuss the differences between Node Affinity and Taints and Tolerations, because I've got to keep it real: I was (more than) a bit confused.
The key difference between Node Affinity and Taints and Tolerations lies in how they control pod placement on nodes:
- **Node Affinity**: specifies where pods *can* be scheduled based on node labels. It uses "required" or "preferred" rules to influence pod placement.
- **Taints and Tolerations**: used to prevent pods from being scheduled on certain nodes unless they explicitly "tolerate" the node's taint. Taints are applied to nodes, and tolerations are applied to pods to allow them to be scheduled on tainted nodes.
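To make the contrast concrete, here's a minimal sketch of the taint-and-toleration side. The node name `example-node` and the `gpu=true` taint are illustrative only, not part of this lab:

```bash
# Repel pods from the node: only pods tolerating gpu=true:NoSchedule may land here
kubectl taint nodes example-node gpu=true:NoSchedule
```

A pod opts in with a matching toleration in its spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```

Note that a toleration only *permits* scheduling on the tainted node; it doesn't *attract* the pod there. That pull is exactly what Node Affinity provides, which is why the two mechanisms are often combined.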
Exercises
1. Create a Pod with Node Affinity
- Create a pod with `nginx` as the image.
- Add a Node Affinity rule with the property `requiredDuringSchedulingIgnoredDuringExecution`, setting the condition `disktype=ssd`.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
```
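Save the manifest and apply it. I'm assuming the file is named `pod.yml` here, since that's the name we reuse in exercise 4:

```bash
kubectl apply -f pod.yml
```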
2. Check Pod Status
- Check the status of the pod to understand why it’s not being scheduled.
Command:

```bash
kubectl get pods -o wide
```

Output:

```
NAME    READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
nginx   0/1     Pending   0          3m55s   <none>   <none>   <none>           <none>
```
Explanation:
The pod is stuck in `Pending` because we haven't labeled any node with `disktype=ssd`, so the required node affinity rule cannot be satisfied.
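If you want the scheduler's own explanation, describe the pod and check the Events section. The exact wording varies by Kubernetes version, but it will be something along these lines:

```bash
kubectl describe pod nginx
# Events (illustrative):
#   Warning  FailedScheduling  ...  0/3 nodes are available:
#   3 node(s) didn't match Pod's node affinity/selector.
```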
3. Add Node Label
- Add the label `disktype=ssd` to the node `kind-cka-cluster-worker`.
- Verify the pod status again to ensure it has been scheduled on that node.
Command to Add the Label:
```bash
kubectl label nodes kind-cka-cluster-worker disktype=ssd
```
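An optional quick sanity check is to filter nodes by that label selector; only the newly labeled node should come back:

```bash
kubectl get nodes -l disktype=ssd
```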
Recheck Pod Status:
To verify the change, run:

```bash
kubectl get pods -o wide
```
You should see the `nginx` pod in the `Running` state, with the `NODE` column showing `kind-cka-cluster-worker`.
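For reference, the output should look roughly like this (the age and IP will differ on your cluster; the pod IP shown assumes kind's default `10.244.0.0/16` pod network):

```
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          5m    10.244.1.2   kind-cka-cluster-worker   <none>           <none>
```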
4. Create a Second Pod with Node Affinity
- Create a new pod configuration with `redis` as the image.
- Add a Node Affinity rule with the property `requiredDuringSchedulingIgnoredDuringExecution`, setting `disktype` as the condition without specifying a value.
To accomplish this, copy the existing manifest and use it as a starting point:

```bash
cat pod.yml > redis.yml
```

Then edit the `redis.yml` file:
- Change the `name` to `redis`.
- Update the `image` to `redis`.
- Modify the `nodeAffinity` rule by removing the `values` section and changing the operator to `Exists`.
The final configuration should look like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: Exists
```
Do not deploy this pod yet—we will first ensure that the cluster is prepared for this configuration.
Add the Label `disktype` (with no value) to the Second Worker Node:
To add a label with just the key `disktype` and no value to the node `kind-cka-cluster-worker2`, run the following command:

```bash
kubectl label nodes kind-cka-cluster-worker2 disktype=
```
Verify the Node Labels:
Check if the label has been successfully added to the node:

```bash
kubectl get nodes --show-labels
```

Look for the `disktype` label under `LABELS` for `kind-cka-cluster-worker2`.
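Alternatively, filter with a label selector that matches on the key alone; this mirrors the semantics of the `Exists` operator we used in the manifest:

```bash
kubectl get nodes -l disktype
```

Note that this lists both workers, because `kind-cka-cluster-worker` also carries a `disktype` key (with the value `ssd`).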
Deploy the Redis Pod:
Ensure that the `redis.yml` file is configured with the `nodeAffinity` rule that requires the `disktype` key to exist.
Deploy the pod:

```bash
kubectl apply -f redis.yml
```
Check Pod Scheduling:
Verify that the `redis` pod is now scheduled on `kind-cka-cluster-worker2`:

```bash
kubectl get pods -o wide
```

The output should show `kind-cka-cluster-worker2` in the `NODE` column for the `redis` pod.
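The end state should look roughly like this (ages and IPs are illustrative):

```
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE                       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          15m   10.244.1.2   kind-cka-cluster-worker    <none>           <none>
redis   1/1     Running   0          1m    10.244.2.2   kind-cka-cluster-worker2   <none>           <none>
```

One caveat worth knowing: because `kind-cka-cluster-worker` also satisfies the `Exists` rule (its `disktype=ssd` label has the key), the scheduler is technically free to place `redis` on either worker. If it lands on the first worker, that is still correct behavior for this affinity rule.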
Tags and Mentions
- @piyushsachdeva
- Day 15: Video Tutorial