When you deploy a pod, perform a cluster update, or simply drain a node, you can end up with pods stuck in a "Pending" state.
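If you first need to spot those pods across the cluster, a field selector does the job (a minimal sketch; the namespaces and pod names will of course be yours):
$ kubectl get pods -A --field-selector=status.phase=Pending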
If you describe the pod, you may see an event with the following error:
2 node(s) had volume node affinity conflict.
This error means that the node(s) considered for your pod don't match the node affinity of a volume attached to the pod.
For example, say you have deployed your services in AWS and set up high availability for your volumes and services, using the AZs us-east-1a and us-east-1b. If you block scheduling on all the nodes in us-east-1a while some volumes are required to be in us-east-1a, you will hit this issue: the pods that need those volumes have no matching node left.
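As an illustration, this is how all the nodes of one AZ typically end up blocked, by cordoning them (a sketch, assuming your nodes carry the standard topology.kubernetes.io/zone label):
$ kubectl get nodes -l topology.kubernetes.io/zone=us-east-1a -o name | xargs -n1 kubectl cordon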
How to find the volume affinity
To find the volume affinity, you need to find the PersistentVolume linked to your pod and check its definition.
You can start from the Pod to find the associated PersistentVolumeClaim:
$ kubectl describe pod my-pod
Name:         my-pod
Namespace:    default
(...)
Volumes:
  volume:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   volume-my-prod
    ReadOnly:    false
(...)
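If you only want the claim name(s), a jsonpath query gives it in one line (same hypothetical pod name as above):
$ kubectl get pod my-pod -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}'
volume-my-prod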
Then, from the PersistentVolumeClaim, find the PersistentVolume:
$ kubectl get pvc
NAME             STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
volume-my-prod   Bound    pvc-toto   200Gi      RWO            ebs-gp2        41d
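Or grab the bound volume name directly from the claim (again, just a sketch):
$ kubectl get pvc volume-my-prod -o jsonpath='{.spec.volumeName}'
pvc-toto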
And then, describe the PersistentVolume:
$ kubectl get pv pvc-toto -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-toto
spec:
  (...)
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.ebs.csi.aws.com/zone
          operator: In
          values:
          - us-east-1a
  (...)
In this case, we can see that the volume must be in the us-east-1a AZ, so the pod that uses it must be scheduled there too.
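To check whether any schedulable node is left in that AZ, you can list your nodes with their zone label (a sketch, assuming the standard topology.kubernetes.io/zone label is set on the nodes):
$ kubectl get nodes -L topology.kubernetes.io/zone
A node that has been cordoned in that zone will show SchedulingDisabled in the STATUS column.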
Now that you have this information, you should be able to understand why no suitable node is available for your pod and resolve the issue.
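For instance, if the nodes of us-east-1a were simply cordoned, uncordoning them lets the pod be scheduled again (hypothetical node name):
$ kubectl uncordon my-node-us-east-1a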
I hope it will help you! 🍺