Kubernetes is a container orchestration tool that has revolutionized the way we deploy and manage containerized applications. It is an open-source platform that automates container deployment, scaling, and management. Its architecture is designed to make container orchestration easier, faster, and more efficient.
Container orchestration refers to the process of managing complex, multi-container deployments through server administration and management code. It automates tasks such as clustering server resources, container deployment and management, service discovery and access, load handling, and fault recovery.
Docker Swarm is another popular container orchestration tool, but Kubernetes is the de facto standard.
With plain container virtualization, deployment and operation management consume a lot of resources, making efficient container management difficult.
By using container orchestration tools, clustered servers can be managed in a centralized manner, greatly reducing management resources.
With container orchestration tools, automated deployment and scaling, rollouts/rollbacks, and container recovery operations can be implemented.
▣ Example: Building container images using only container virtualization and deploying them to multiple servers
- Kubernetes is the de facto standard container orchestration tool, open-sourced by Google in 2014.
- Kubernetes supports most of the functions required for container-based service operation, such as container deployment in a microservices (MSA) architecture and service fault recovery.
- Kubernetes has high scalability as it supports most of the functions and components required for operation in a cloud environment and can be easily integrated with other cloud operating tools.
- It has high reliability and stability as it is developed and maintained by an open-source project involving companies such as Google and Red Hat.
- Most of the components that make up the container-orchestration ecosystem are developed and updated around Kubernetes.
- Currently, Kubernetes is an open-source project managed by the Cloud Native Computing Foundation (CNCF) (URL: https://landscape.cncf.io/)
So what are the components included in the Kubernetes Cluster?
💡 A cluster refers to a logical binding of several servers configured to be used as if they were one server
A Kubernetes cluster is composed of the Master node and its Worker nodes.
- The Master node runs the core components that make up the Kubernetes control plane
- The core components include the API Server, Controller Manager, Scheduler, and etcd
- The Master node is used for administrative tasks only
- It manages the entire Kubernetes cluster: assigning scheduled tasks to the worker nodes, monitoring the health of the system, and scaling applications up and down
- Essentially, it's the brain of the Kubernetes cluster
Let’s look into the Components included in the Master Node
→ kube-apiserver: The API Server provides the APIs used to control resources within the Kubernetes cluster. It is the component that clients such as kubectl reach, so it must be accessible from outside the Kubernetes cluster
→ kube-scheduler: The scheduler is responsible for assigning Kubernetes resources that require node allocation. It examines the status of the worker nodes that make up the cluster and selects the optimal node to handle each request.
→ kube-controller-manager: It watches the state of the Kubernetes cluster and makes changes to bring the cluster back to the desired state.
→ etcd: A distributed key-value store used to store configuration data for the Kubernetes cluster, such as node status and service status.
💡 It also keeps both the cluster's current state and its desired state; if Kubernetes finds any difference between the two, it works to bring the current state in line with the desired state
There are several add-ons for the control plane that activate extra features. These add-on components include:
- Metrics-server — Collects resource-usage metrics, such as CPU and memory usage, from the nodes inside the K8s cluster
- CoreDNS — A DNS server used within the cluster
- Dashboard — Provides a GUI web-based dashboard for managing the cluster
- Worker nodes run the applications and workloads of the Kubernetes cluster (a worker node was previously known as a "minion").
- They are responsible for running containers and handling the container runtime environment.
- The core components include the kubelet, the container runtime, and kube-proxy
Looking into the main components of the worker node:
→ kubelet: It is the main Kubernetes component that communicates with the API server of the Master node to register nodes in a Kubernetes cluster. It manages the lifecycle of pods and also monitors the status of nodes and pods.
💡 The kubelet communicates with the Docker daemon (or another container runtime) via its API to create and manage containers. When anything changes in a pod on a node, the kubelet reports the change to the API server, which in turn saves it to etcd
→ kube-proxy: A network proxy that runs on every worker node and generates and manages the network rules for each node. It acts much like a reverse proxy, forwarding requests to the appropriate service or application inside the K8s private network
→ Container Runtime: It runs on all worker nodes and is responsible for running and managing containers (commonly Docker or containerd)
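To make the division of labor concrete, here is a minimal sketch of a Pod manifest; the name, labels, and image are placeholders. The scheduler picks a worker node for this pod, the kubelet on that node asks the container runtime to start the container, and kube-proxy sets up the networking:

```yaml
# Hypothetical minimal Pod manifest (names and image are examples only)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # placeholder pod name
  labels:
    app: hello           # placeholder label
spec:
  containers:
    - name: hello
      image: nginx:1.25  # placeholder container image
      ports:
        - containerPort: 80
```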
So basically the workflow looks like the following:
The engineer/developer creates a manifest file that describes the desired state of the application or workload to be deployed in the cluster. It contains details about the containers, volumes, networking, and other resources that the application requires.
💡 The manifest file uses the YAML format, as it is human-readable and easy to understand
The manifest file is applied using the kubectl command-line tool, which communicates with the K8s API server.
The API server receives the manifest file and validates it for correctness and compliance with the Kubernetes API schema. If the manifest file is valid, it stores the desired state of the application in etcd.
The scheduler component of the Master node continuously monitors the state of the Kubernetes cluster and the available resources on each worker node. When a new pod needs to be scheduled, the scheduler queries the API server to obtain the current state of the cluster along with the current state of the worker nodes and the desired state.
The scheduler selects a suitable worker node according to the scheduling policy and then instructs the API server to create the pod on that node. The API server then updates the desired state of the cluster in etcd to include the new pod and its current status.
The kubelet component of the worker node receives the instructions to run the new pod from the API server. It communicates with the container runtime to create the container for the pod, based on the specifications provided in the manifest file
After the container is created, the kubelet communicates with kube-proxy to set up network routing rules and load balancing for the pod, so it can communicate with other pods and services inside the cluster
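The workflow above can be traced with a small example. This is a hedged sketch of a Deployment manifest (the name, labels, and image are illustrative); applying it with `kubectl apply -f deployment.yaml` sends the desired state to the API server, which stores it in etcd, after which the scheduler places the resulting pods on suitable worker nodes:

```yaml
# Hypothetical Deployment manifest (all names are placeholders)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app             # placeholder deployment name
spec:
  replicas: 3               # desired state: three identical pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder container image
          ports:
            - containerPort: 80
```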
Let’s look into some of the core features that Kubernetes offers:
- Self-healing: Kubernetes can automatically restart or replace containers that fail to run properly
- Automated Rollouts & Rollbacks: Kubernetes changes the actual state of a deployment to the desired state at a controlled rate, and can roll back to a previous version if something goes wrong
- Service Discovery & Load Balancing: Kubernetes can expose containers within the cluster to the outside world using DNS names or their own IP addresses. For services with high network traffic, Kubernetes can load balance and distribute traffic across containers to ensure stable service operation
- Storage Orchestration: Kubernetes can mount local storage servers as well as storage services from public cloud providers, making external storage resources easy to consume and ensuring data persistence
- Secret and Configuration Management: Kubernetes can safely store and manage sensitive information such as passwords, SSH keys, and OAuth tokens. When container configuration changes, Kubernetes can deploy and apply the update without rebuilding the container image
- Automatic Bin Packing: You tell Kubernetes how much CPU and memory each container needs, and it places containers onto the most appropriate cluster nodes to make the best use of resources.
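As a small illustration of secret management, here is a hedged sketch of a Secret object (the name and value are examples only). Values are base64-encoded, and pods reference the secret instead of baking credentials into the container image:

```yaml
# Hypothetical Secret holding a database password (example values only)
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # placeholder secret name
type: Opaque
data:
  password: cGFzc3dvcmQ=   # base64 for "password" (example only)
```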
Today we looked at Kubernetes, the most popular container orchestration tool today. We also looked at the components inside a Kubernetes cluster and how they work together. Finally, we discussed the important features that Kubernetes brings to the table. To see how to set up a working Kubernetes cluster without using any managed cloud services (often referred to as "vanilla" Kubernetes), follow this link