Samuel Ogunmola

KUBERNETES FROM ZERO TO HERO (PART 2) - KUBERNETES ARCHITECTURE πŸ’₯πŸ”₯

In the last part of this series, we learnt that Kubernetes is a powerful open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. It provides a set of APIs and tools that can be used to deploy, scale, and manage containerized applications across a cluster of servers. Today we are going to learn how the different parts of Kubernetes work together. We are going to look at the architecture of Kubernetes and how its components work together to get the job done. Are you with me? Let's get started right away πŸ‘.

At a high level, the architecture of Kubernetes consists of a set of master nodes and worker nodes.


The master nodes, which together make up the control plane, are responsible for managing the worker nodes and the workloads that run on them. The worker nodes are responsible for running the containers of an application.

Here is a more detailed breakdown of the components of the Kubernetes architecture ✨⭐️:


The worker node consists of three components or, to put it a better way, three processes. They are:

  • Container Runtime
  • Kubelet, and
  • Kube-proxy

while the master node has four processes, and they are:

  • API Server
  • Scheduler
  • Controller-Manager
  • etcd

Now that we know these, let's look at each of these components in detail.

βœ¨πŸŽ–οΈ. WORKER NODE PROCESSES

  • Container Runtime: The container runtime is the software that actually runs the containers of an application. It is responsible for creating and managing the containers, as well as communicating with the operating system to allocate resources and perform other low-level tasks.
    Kubernetes supports a number of container runtimes through its Container Runtime Interface (CRI), including containerd, CRI-O, and Docker Engine (via the cri-dockerd adapter). Docker was historically the most widely used runtime, but since Kubernetes removed its built-in Docker integration (dockershim) in v1.24, most clusters run containerd or CRI-O directly.
    containerd is an open-source container runtime that is designed to be lightweight and modular. It provides a stable and consistent runtime for containers and is used by many cloud providers and managed Kubernetes offerings.
    CRI-O is an open-source container runtime built specifically for Kubernetes. It implements the CRI on top of OCI-compatible low-level runtimes such as runc, providing a lightweight and stable runtime for Kubernetes workloads. You can check which runtime your nodes are using with the short sketch below.
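
To see which runtime a cluster is actually using, here is a minimal sketch with the official Kubernetes Python client (an assumption for this example: the kubernetes package is installed and a kubeconfig is available, just like for kubectl). It simply asks the API server what each node reports as its runtime:

```python
# A sketch, assuming the official Kubernetes Python client ("pip install kubernetes")
# and a kubeconfig at the default location (~/.kube/config).
from kubernetes import client, config

config.load_kube_config()   # authenticate the same way kubectl does
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node reports its runtime, e.g. "containerd://1.6.8" or "cri-o://1.25.1"
    print(node.metadata.name, node.status.node_info.container_runtime_version)
```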

  • Kubelet: The kubelet is a daemon that runs on each worker node and is responsible for managing the pods and containers on that node. It communicates with the Kubernetes API server to receive instructions and report the status of the pods and containers.
    The kubelet is responsible for several key tasks, including the following (a short example of a pod the kubelet runs comes right after this list):

  • Running the pods assigned to its node: The kubelet watches the API server for pods that have been scheduled onto its node and makes sure their containers are created and running. (Keeping the desired number of replicas of a pod running across the cluster is handled by controllers on the master node, not by the kubelet.)
  • Communicating the status of the pods and containers to the API server: The kubelet reports the status of the pods and containers to the API server, including their resource usage and health status.
  • Mounting volumes and secrets: The kubelet is responsible for mounting volumes and secrets onto the pods as specified in the pod configuration.
  • Managing the containers of a pod: The kubelet is responsible for starting, stopping, and restarting the containers of a pod as needed. It also monitors the health of the containers and restarts them if necessary.
  • Managing the network namespace of a pod: The kubelet is responsible for setting up the network namespace of a pod and configuring the network interfaces, routes, and firewall rules as needed.
  • Kube-proxy: The kube-proxy is a daemon that runs on each worker node and implements the network proxy and load-balancing functions for that node. It forwards traffic to the appropriate pods based on the destination IP and port; in simple words, it does the load-balancing work behind Services.
    The kube-proxy works in conjunction with the API server to keep the cluster in its desired state. It watches for changes to the Services and Endpoints in the cluster and updates the forwarding rules on the node accordingly, so that traffic sent to a Service is spread across the right pods and routed correctly within the cluster.
    The kube-proxy supports several modes of operation: userspace, iptables, and ipvs. The default mode is iptables, which uses the Linux kernel's built-in packet-filtering system to forward traffic. The older userspace mode proxies connections through a userspace process, while the ipvs mode uses the kernel's IP Virtual Server and offers extra load-balancing algorithms such as round robin and least connections. A short Service sketch follows the kubelet example below.
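
To make the kubelet's tasks concrete, here is a sketch of a pod created through the API server with the Kubernetes Python client (assumed installed, with a working kubeconfig). The pod name, image, and secret name are made up for the example, and the secret is assumed to already exist in the default namespace. Once the scheduler places this pod on a node, that node's kubelet pulls the image, mounts the secret volume, starts the container, and keeps reporting its status:

```python
# A sketch: names, the image, and the "demo-secret" secret are example values.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kubelet-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx:1.25",
                # The kubelet mounts this volume into the container's filesystem.
                volume_mounts=[client.V1VolumeMount(
                    name="creds", mount_path="/etc/creds", read_only=True)],
            )
        ],
        # The kubelet fetches the secret and exposes it as files in the volume.
        volumes=[client.V1Volume(
            name="creds",
            secret=client.V1SecretVolumeSource(secret_name="demo-secret"))],
        restart_policy="Always",  # the kubelet restarts the container if it dies
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# The kubelet on the assigned node reports the pod's status back to the API
# server; right after creation this will usually still say "Pending".
print(v1.read_namespaced_pod("kubelet-demo", "default").status.phase)
```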
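
And here is kube-proxy's side of the story: a ClusterIP Service created with the same client (the Service name, the app=web label, and the ports are made-up examples). Every node's kube-proxy watches Services and their Endpoints and programs the iptables or IPVS rules that forward the Service's virtual IP to the matching pod IPs:

```python
# A sketch: the Service name, selector label, and ports are example values.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-svc"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",
        selector={"app": "web"},  # pods this Service load-balances across
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
created = v1.create_namespaced_service(namespace="default", body=svc)

# This virtual IP does not belong to any machine; the iptables/IPVS rules that
# kube-proxy writes on every node forward traffic for it to the selected pods.
print(created.spec.cluster_ip)
```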

Now let's look at the master node processes.

βœ¨πŸ† MASTER NODE PROCESSES

  • API Server: The API server is the central point of communication between the master nodes and the worker nodes. It exposes a RESTful API that can be used to manage the resources of the cluster, such as pods, services, and deployments. The API server is responsible for several key tasks, including:
  • Receiving and processing API requests: The API server receives API requests from clients, such as kubectl or Kubernetes client libraries, and processes them to create, update, or delete resources in the cluster.
  • Storing the desired state of the cluster: The API server stores the desired state of the cluster in etcd, a distributed key-value store. It ensures that the actual state of the cluster matches the desired state by reconciling any discrepancies and making the necessary changes.
  • Validating and authorizing API requests: The API server validates and authorizes API requests to ensure that only authorized clients can make changes to the cluster. It uses a combination of built-in and pluggable authentication and authorization modules to enforce access controls.
  • Serving the API: The API server exposes a number of endpoints that allow clients to interact with the resources of the cluster. It serves the API over HTTPS and can be configured to use TLS certificates for secure communication. (A short example of talking to the API server appears after this list.)
  • Scheduler: The scheduler is a component that is responsible for scheduling pods to run on the worker nodes of a cluster. It takes into account the available resources of the worker nodes and the resource requirements of the pods to determine the best place to run them.
    The scheduler works in conjunction with the API server and the kubelets to ensure that the desired state of the cluster is maintained. It receives pod scheduling requests from the API server and assigns them to a worker node based on the available resources and the resource requirements of the pod. It also communicates with the kubelets to ensure that the pods are actually running on the assigned worker nodes.
    The scheduler can be configured to use various scheduling algorithms and policies to make scheduling decisions. For example, you can specify constraints on where a pod can be scheduled, such as on a particular node or in a particular region. You can also specify resource requirements for a pod, such as the amount of CPU and memory it needs, and the scheduler will try to find a worker node that can accommodate those requirements.
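
To show how the API server is used in practice, here is a minimal sketch with the Kubernetes Python client (assumed installed, with a working kubeconfig). The call below is just an authenticated HTTPS request to the API server, hitting the same endpoint kubectl uses (GET /api/v1/namespaces/default/pods):

```python
# A sketch, assuming the kubernetes Python client and a kubeconfig.
from kubernetes import client, config

config.load_kube_config()   # server address and credentials come from ~/.kube/config
v1 = client.CoreV1Api()

# Equivalent to "kubectl get pods -n default": a GET against the API server.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase, pod.spec.node_name)
```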
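
And here is the kind of information the scheduler works with: a pod that requests CPU and memory and constrains itself to nodes carrying a hypothetical disktype=ssd label. The scheduler looks for a node that satisfies both and binds the pod to it:

```python
# A sketch: the pod name, image, and the disktype=ssd node label are example values.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sched-demo"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="app",
                image="nginx:1.25",
                # The scheduler only picks nodes with at least this much free capacity.
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "128Mi"},
                    limits={"cpu": "500m", "memory": "256Mi"},
                ),
            )
        ],
        node_selector={"disktype": "ssd"},  # constraint: only labelled nodes qualify
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# Once the scheduler has bound the pod, the chosen node shows up here
# (it may still be None if you read it before scheduling has happened).
print(v1.read_namespaced_pod("sched-demo", "default").spec.node_name)
```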

  • Controller-Manager: In Kubernetes, the controller manager is a daemon that runs on the master nodes and is responsible for managing the controllers in the cluster. Controllers are responsible for maintaining the desired state of the cluster by reconciling any discrepancies between the actual state and the desired state.
    The controller manager is responsible for starting and stopping the controllers, as well as monitoring their health and restarting them if necessary. It also communicates with the API server to receive updates about the resources in the cluster and takes action to ensure that the desired state is maintained.
    There are several types of controllers in Kubernetes, including the following (a short Deployment example follows this list):

  • ReplicationController (and its modern successor, the ReplicaSet): Ensures that the desired number of replicas of a pod are running at any given time.
  • Deployment: Provides a way to update the pods in a rolling fashion, ensuring that the desired number of replicas are available at all times.
  • StatefulSet: Manages the deployment and scaling of a set of pods that have persistent storage.
  • DaemonSet: Ensures that a copy of a pod is running on all or a subset of the worker nodes.
  • etcd: etcd is a distributed key-value store that is used to store the configuration data of a Kubernetes cluster. It holds the desired state of the cluster and is used by the API server to ensure that the actual state of the cluster matches the desired state.
    etcd is a highly available, distributed system that can be deployed on a cluster of machines. It stores its data in a replicated log, which allows it to maintain consistency across the cluster and recover from failures, and it provides features that support distributed systems, such as leader election and distributed locks.
    In Kubernetes, etcd stores a wide range of data, including the desired state of the pods, services, and deployments in the cluster. It also stores the configuration of the master nodes and the worker nodes, as well as the network and security policies of the cluster.
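
As promised above, here is a sketch of a Deployment created with the Kubernetes Python client (the Deployment name, the app=web label, and the image are made up for the example). The Deployment and ReplicaSet controllers running inside the controller manager keep comparing the desired replica count with what actually exists and create or replace pods until the two match:

```python
# A sketch: the Deployment name, label, and image are example values.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-deploy"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the desired state the controllers reconcile towards
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")],
            ),
        ),
    ),
)

# The controller manager's Deployment/ReplicaSet controllers will now create
# 3 pods and replace any that die (for example, after a node failure).
apps.create_namespaced_deployment(namespace="default", body=deployment)
```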
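
etcd itself is normally not accessed directly. Instead, clients watch objects through the API server, which serves those watch streams from the state it keeps in etcd. Here is a small sketch of such a watch with the same Python client (the namespace and the 30-second timeout are arbitrary choices for the example):

```python
# A sketch, assuming the kubernetes Python client and a kubeconfig.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
# Every create/update/delete of a pod in the namespace - ultimately a change to
# the data stored in etcd - arrives here as an event via the API server.
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```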

Overall, the architecture of Kubernetes consists of a set of master nodes and worker nodes that are responsible for managing and running the containers of an application. The API server, etcd, scheduler, controller manager, kubelet, and kube-proxy are key components that work together to ensure that the desired state of the cluster is maintained and that the containers of an application are running as intended. Understanding the architecture of Kubernetes is essential for effectively deploying and managing applications in a cluster πŸ˜‹.

πŸ’Ž Note: This is part 2 of the "Kubernetes: From Zero to Hero" series. If you want to become a DevOps engineer, you should join our online community here for more content like this. Merry Christmas πŸŽ…
