Dmitrii Kotov
Kubernetes Interview Questions: Kubernetes Architecture: Node

What are Nodes in Kubernetes?

Nodes are the machines where Kubernetes runs your workloads: containers are placed into Pods, and Pods run on nodes. A node may be a virtual or physical machine, depending on the cluster.

Who manages each node in a Kubernetes cluster?

Each node is managed by the control plane and contains the services necessary to run Pods.

What are the key components on a node in Kubernetes?

The components on a node include the kubelet, a container runtime, and the kube-proxy.

What are the two main ways to add Nodes to the API server in Kubernetes?

The two main ways are: 1) The kubelet on a node self-registers to the control plane, and 2) You (or another human user) manually add a Node object.
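
For the manual path, a minimal Node object can be sketched as below. This is a sketch, not a complete registration procedure; the node name and label are illustrative:

```shell
# Manually create a Node object in the API server
# (name and labels are illustrative).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Node
metadata:
  name: my-node
  labels:
    topology.kubernetes.io/zone: zone-a
EOF
```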

What happens after a Node object is created or a kubelet self-registers in Kubernetes?

After a Node object is created, or a kubelet on a node self-registers, the control plane checks whether the new Node object is valid.

When is a node considered eligible to run a Pod in Kubernetes?

A node is eligible to run a Pod if it is healthy, meaning all necessary services are running.

What happens if a node is not healthy in Kubernetes?

If the node is not healthy, it is ignored for any cluster activity until it becomes healthy. Kubernetes keeps the object for the invalid Node and continues checking to see whether it becomes healthy.

How can you stop health checking on an invalid Node in Kubernetes?

You, or a controller, must explicitly delete the Node object to stop that health checking.

What is the requirement for the name of a Node object in Kubernetes?

The name of a Node object must be a valid DNS subdomain name.

What issues can arise if a Node instance is modified without changing its name?

Kubernetes assumes that two Node objects with the same name represent the same instance with the same state. If the underlying machine is modified (for example, re-provisioned with a different configuration) without changing its name, that assumption breaks and can lead to inconsistencies.

What should be done if a Node needs to be replaced or updated significantly in Kubernetes?

If the Node needs to be replaced or updated significantly, the existing Node object needs to be removed from the API server first and re-added after the update.

What is the preferred pattern for the registration of Nodes in Kubernetes?

The preferred pattern, used by most distros, is self-registration of Nodes.

What is a good practice when Node configuration needs to be updated in Kubernetes?

It is a good practice to re-register the node with the API server when Node configuration needs to be updated.

What issues can arise if the Node configuration is changed on kubelet restart while Pods are already scheduled on the Node?

Pods already scheduled on the Node may misbehave or cause issues if the Node configuration is changed on kubelet restart.

Why is Node re-registration important in the context of updating Node configuration?

Node re-registration ensures all Pods will be drained and properly re-scheduled, maintaining the integrity of the cluster’s operations.

How can you create and modify Node objects in Kubernetes?

You can create and modify Node objects using kubectl.

What should you set the kubelet flag to when creating Node objects manually?

When you want to create Node objects manually, set the kubelet flag --register-node=false.

What does marking a node as unschedulable do?

Marking a node as unschedulable prevents the scheduler from placing new pods onto that Node but does not affect existing Pods on the Node. This is useful as a preparatory step before a node reboot or other maintenance.

How do you mark a Node as unschedulable in Kubernetes?

To mark a Node unschedulable, you run the command kubectl cordon $NODENAME.
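
A minimal sketch of cordoning a node before maintenance (the node name is illustrative):

```shell
# Mark the node unschedulable; existing Pods keep running.
kubectl cordon my-node

# The node now reports SchedulingDisabled in its STATUS column.
kubectl get node my-node
```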

What is a special case where Pods will still run on an unschedulable Node?

Pods that are part of a DaemonSet tolerate being run on an unschedulable Node. DaemonSets typically provide node-local services that should run on the Node even if it is being drained of workload applications.

What information does a Node's status contain in Kubernetes?

A Node's status contains the following information: Addresses, Conditions, Capacity and Allocatable, and Info.

How can you view a Node's status and other details in Kubernetes?

You can view a Node's status and other details using the command kubectl describe node <insert-node-name-here>.
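
As a sketch, the same status information can also be pulled selectively with jsonpath (the node name is illustrative):

```shell
# Full status, conditions, capacity, and recent events for one node
kubectl describe node my-node

# Or extract just the condition types from .status
kubectl get node my-node -o jsonpath='{.status.conditions[*].type}'
```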

What is the purpose of heartbeats sent by Kubernetes nodes?

Heartbeats sent by Kubernetes nodes help the cluster determine the availability of each node, and to take action when failures are detected.

What are the two forms of heartbeats for nodes in Kubernetes?

The two forms of heartbeats for nodes are: 1) Updates to the .status of a Node, and 2) Lease objects within the kube-node-lease namespace.
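
The Lease-based heartbeats can be observed directly; each node owns one Lease object that its kubelet renews periodically:

```shell
# List the per-node Lease objects used as lightweight heartbeats
kubectl get leases -n kube-node-lease
```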

What is the role of the node controller in Kubernetes?

The node controller is a Kubernetes control plane component that manages various aspects of nodes.

What is the first role of the node controller in a node's life?

The first role is assigning a CIDR block to the node when it is registered, if CIDR assignment is turned on.

What happens if the node controller finds that the VM for an unhealthy node is not available?

If the VM for an unhealthy node is not available, the node controller deletes the node from its list of nodes.

What is one of the responsibilities of the node controller regarding node health?

The node controller is responsible for updating the Ready condition in the Node's .status field if a node becomes unreachable, setting it to Unknown.

What action does the node controller take if a node remains unreachable?

If a node remains unreachable, the node controller triggers API-initiated eviction for all of the Pods on the unreachable node.

How long does the node controller wait before submitting the first eviction request for an unreachable node?

By default, the node controller waits 5 minutes between marking the node as Unknown and submitting the first eviction request.

How often does the node controller check the state of each node by default?

By default, the node controller checks the state of each node every 5 seconds.

Can the period for checking the state of each node by the node controller be configured?

Yes, this period can be configured using the --node-monitor-period flag on the kube-controller-manager component.
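
A sketch of the relevant kube-controller-manager flags (the values shown are the defaults; all other required flags are omitted):

```shell
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s
```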

What is the default eviction rate limit set by the node controller?

The node controller limits the eviction rate to --node-eviction-rate (default 0.1) per second, meaning it won't evict pods from more than 1 node per 10 seconds.

How does the node eviction behavior change when a node in an availability zone becomes unhealthy?

When a node in an availability zone becomes unhealthy, the node controller checks the percentage of unhealthy nodes and may reduce the eviction rate if a certain threshold of unhealthy nodes is reached.

What happens to the eviction rate if the fraction of unhealthy nodes is at least --unhealthy-zone-threshold?

If the fraction of unhealthy nodes is at least --unhealthy-zone-threshold (default 0.55), then the eviction rate is reduced.

What is the node eviction policy in small clusters?

In small clusters (less than or equal to --large-cluster-size-threshold nodes, default 50), evictions are stopped.

How is the eviction rate adjusted in larger clusters with unhealthy nodes?

In larger clusters with unhealthy nodes, the eviction rate is reduced to --secondary-node-eviction-rate (default 0.01) per second.
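
The eviction-rate behavior described above is governed by a small set of kube-controller-manager flags. A sketch with the default values (other required flags omitted):

```shell
kube-controller-manager \
  --node-eviction-rate=0.1 \
  --secondary-node-eviction-rate=0.01 \
  --unhealthy-zone-threshold=0.55 \
  --large-cluster-size-threshold=50
```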

Does the node controller consider per-zone unavailability if the cluster does not span multiple cloud provider availability zones?

If the cluster does not span multiple cloud provider availability zones, then the eviction mechanism does not take per-zone unavailability into account.

What happens to the eviction rate if all nodes in a zone are unhealthy?

If all nodes in a zone are unhealthy, the node controller evicts at the normal rate of --node-eviction-rate.

What is the node controller's policy for evictions in cases where all zones are completely unhealthy?

If all zones are completely unhealthy, the node controller assumes a connectivity issue and does not perform any evictions.

What type of information about resource capacity do Node objects track in Kubernetes?

Node objects track information about the Node's resource capacity, such as the amount of memory available and the number of CPUs.

How do Nodes that self-register report their capacity?

Nodes that self-register report their capacity during the registration process.

What does the Kubernetes scheduler ensure regarding resources on a Node?

The Kubernetes scheduler ensures that there are enough resources for all the Pods on a Node.

How does the scheduler determine if a Node has enough resources?

The scheduler checks that the sum of the requests of containers on the node is no greater than the node's capacity.

What is excluded from the sum of requests when the scheduler checks a Node's capacity?

The sum of requests excludes any containers started directly by the container runtime and any processes running outside of the kubelet's control.

Where can you find information about reserving resources for non-Pod processes?

Information about explicitly reserving resources for non-Pod processes can be found in the section "reserve resources for system daemons."

What does the kubelet do during a node system shutdown in Kubernetes?

The kubelet attempts to detect node system shutdown and terminates pods running on the node.

How does the kubelet handle pods during a node shutdown?

During a node shutdown, the kubelet ensures that pods follow the normal pod termination process and does not accept new Pods.

What does the Graceful node shutdown feature depend on?

The Graceful node shutdown feature depends on systemd, using systemd inhibitor locks to delay the node shutdown.

Is the GracefulNodeShutdown feature gate enabled by default in Kubernetes?

Yes, the GracefulNodeShutdown feature gate is enabled by default in Kubernetes.

How is the node marked during a shutdown, and how does the kube-scheduler respond?

Once systemd detects a node shutdown, the kubelet sets a NotReady condition on the Node with the reason "node is shutting down". The kube-scheduler honors this condition and does not schedule any new Pods onto the affected node.

How are pods terminated by the kubelet during a graceful shutdown?

During a graceful shutdown, the kubelet terminates pods in two phases: first, it terminates regular pods running on the node, and then it terminates critical pods.
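
The grace periods for these two phases are set in the kubelet configuration file. A hedged sketch (the values and file path are illustrative; `shutdownGracePeriodCriticalPods` is the tail end of the total `shutdownGracePeriod`):

```shell
# Fragment of a KubeletConfiguration file enabling graceful node shutdown.
cat <<'EOF' > /tmp/kubelet-shutdown-fragment.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
EOF
```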

What happens to the Node and Pods if a node termination is cancelled?

If node termination is cancelled, the Node returns to the Ready state, but Pods that started the termination process will not be restored and need to be re-scheduled.

How are pods marked and shown in kubectl when evicted during a graceful node shutdown?

Pods evicted during a graceful node shutdown are marked as shutdown, with their status shown as Terminated in kubectl get pods, and kubectl describe pod indicating that the pod was terminated due to imminent node shutdown.

What are the two phases of pod shutdown in the Graceful Node Shutdown feature?

The Graceful Node Shutdown feature shuts down pods in two phases: non-critical pods, followed by critical pods.

What happens to pods that are part of a StatefulSet during an undetected node shutdown?

During an undetected node shutdown, pods that are part of a StatefulSet get stuck in terminating status on the shutdown node and cannot move to a new running node.

Why can't the StatefulSet create a new pod with the same name during an undetected node shutdown?

The StatefulSet cannot create a new pod with the same name because the kubelet on the shutdown node is not available to delete the pods.

What happens to the volumes used by pods during an undetected node shutdown?

If there are volumes used by the pods, the VolumeAttachments will not be deleted from the original shutdown node, so the volumes used by these pods cannot be attached to a new running node.

What are the two phases of pod termination during a non-graceful shutdown?

The two phases are: 1) Force delete the Pods that do not have matching out-of-service tolerations, and 2) Immediately perform detach volume operations for such pods.
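
The non-graceful shutdown handling is triggered by manually tainting the node once you have confirmed it is actually shut down (the node name is illustrative):

```shell
# Apply the out-of-service taint so the node's Pods are force-deleted
# and their volumes detached.
kubectl taint nodes my-node \
  node.kubernetes.io/out-of-service=nodeshutdown:NoExecute
```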

What is a potential warning regarding the use of swap memory in Kubernetes?

When the memory swap feature is turned on, there is a risk that Kubernetes data such as the content of Secret objects written to tmpfs could be swapped to disk.

How can a user configure how a node uses swap memory?

A user can configure the node's use of swap memory by setting memorySwap.swapBehavior, for example, to UnlimitedSwap or LimitedSwap.
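
A sketch of the corresponding kubelet configuration fragment (values and file path are illustrative; `failSwapOn` must be false for the kubelet to start on a node with swap enabled):

```shell
cat <<'EOF' > /tmp/kubelet-swap-fragment.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
memorySwap:
  swapBehavior: LimitedSwap
EOF
```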

What is the support status of swap with different cgroup versions?

Swap is supported only with cgroup v2, and cgroup v1 is not supported.

What does the process of safely draining a node involve?

Safely draining a node means using kubectl drain to evict all pods from the node before performing maintenance, while respecting any PodDisruptionBudgets you have defined.

What does kubectl drain do?

kubectl drain safely evicts all pods from a node, respecting the PodDisruptionBudgets specified, in preparation for maintenance like kernel upgrades or hardware maintenance.

How do you drain a node with DaemonSet pods?

When draining a node with DaemonSet pods, use kubectl drain --ignore-daemonsets <node name> as the DaemonSet controller immediately replaces missing Pods with new ones.

What should you do with the node during maintenance and after it's completed?

During maintenance, power down the node or delete its VM. After maintenance, if the node remains in the cluster, use kubectl uncordon <node name> to resume scheduling new pods onto the node.
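
The full maintenance workflow can be sketched as follows (the node name is illustrative):

```shell
# Evict pods, respecting PodDisruptionBudgets; skip DaemonSet pods
kubectl drain my-node --ignore-daemonsets

# ...perform the kernel upgrade, reboot, or hardware maintenance...

# Allow the scheduler to place new pods on the node again
kubectl uncordon my-node
```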

Can you drain multiple nodes in parallel?

Yes, you can run multiple kubectl drain commands for different nodes in parallel, and they will still respect the PodDisruptionBudget specified.

What alternative is there to using kubectl drain for evictions?

As an alternative to kubectl drain, you can programmatically cause evictions using the eviction API for finer control over the pod eviction process.

What are the different types of Addresses found in a Node's status?

The types of Addresses include HostName, ExternalIP, and InternalIP, which vary depending on the cloud provider or bare metal configuration.
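
As a sketch, a specific address type can be read from the status with jsonpath (the node name is illustrative):

```shell
kubectl get node my-node \
  -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
```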

How are taints related to node conditions in Kubernetes?

When problems occur on nodes, Kubernetes automatically creates taints matching the conditions affecting the node, like node.kubernetes.io/unreachable or node.kubernetes.io/not-ready.
