Sheina_techiue
What is Kubernetes?

**So What is Kubernetes?**

Modern software is increasingly run as fleets of containers, sometimes called microservices. A complete application may comprise many containers, all needing to work together in specific ways. Kubernetes is software that turns a collection of physical or virtual hosts (servers) into a platform that:

Hosts containerized workloads, providing them with compute, storage, and network resources, and
Automatically manages large numbers of containerized applications — keeping them healthy and available by adapting to changes and challenges
Kubernetes is a powerful open-source orchestration tool, designed to help you manage microservices and containerized applications across a distributed cluster of computing nodes. Kubernetes aims to hide the complexity of managing containers through the use of several key capabilities, such as REST APIs and declarative templates that can manage the entire lifecycle.

How does Kubernetes work?
When developers create a multi-container application, they plan out how all the parts fit and work together, how many of each component should run, and roughly what should happen when challenges (e.g., lots of users logging in at once) are encountered.
They store their containerized application components in a container registry (local or remote) and capture this thinking in one or several text files comprising configuration. To start the application, they “apply” the configuration to Kubernetes.
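As a concrete sketch, a minimal Deployment manifest for a hypothetical three-replica web application might look like the following (the image name, registry, and port here are illustrative, not from any real project):

```yaml
# deployment.yaml — desired state: three copies of a web container
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # how many identical pods to keep running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # pulled from a container registry
          ports:
            - containerPort: 8080
```

Applying it with `kubectl apply -f deployment.yaml` hands this desired state to Kubernetes, which then works to make reality match it.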
Kubernetes' job is to evaluate and implement this configuration and maintain it until told otherwise. It:
Analyzes the configuration, aligning its requirements with those of all the other application configurations running on the system
Finds resources appropriate for running the new containers (e.g., some containers might need resources like GPUs that aren’t present on every host)
Grabs container images from the registry, starts up the new containers, and helps them connect to one another and to system resources (e.g., persistent storage), so the application works as a whole

Then Kubernetes monitors everything, and when real events diverge from desired states, Kubernetes tries to fix things and adapt. For example, if a container crashes, Kubernetes restarts it. If an underlying server fails, Kubernetes finds resources elsewhere to run the containers that node was hosting. If traffic to an application suddenly spikes, Kubernetes can scale out containers to handle the additional load, in conformance with rules and limits stated in the configuration.

**Why use Kubernetes?**

One of the benefits of Kubernetes is that it makes building and running complex applications much simpler. Here's a handful of the many Kubernetes features:
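The "scale out on load" behavior can itself be declared in configuration. A sketch using the standard HorizontalPodAutoscaler object (the target Deployment name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web               # the workload to scale (hypothetical name)
  minReplicas: 3
  maxReplicas: 10           # a limit Kubernetes will not exceed
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU passes 70%
```

With this applied, scaling on a traffic spike happens without any human intervention, within the stated min/max bounds.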

Standard services, like local DNS and basic load balancing, that most applications need and that are easy to use.
Standard behaviors (e.g., restart this container if it dies) that are easy to invoke, and do most of the work of keeping applications running, available, and performant.
A standard set of abstract “objects” (called things like “pods,” “replicasets,” and “deployments”) that wrap around containers and make it easy to build configurations around collections of containers.
A standard API that applications can call to easily enable more sophisticated behaviors, making it much easier to create applications that manage other applications.
The simple answer to “what is Kubernetes used for” is that it saves developers and operators a great deal of time and effort, and lets them focus on building features for their applications, instead of figuring out and implementing ways to keep their applications running well, at scale.

By keeping applications running despite challenges (failed servers, crashed containers, traffic spikes, and so on), Kubernetes also reduces business impacts, reduces the need for fire drills to bring broken applications back online, and protects against other liabilities, like the costs of failing to comply with Service Level Agreements (SLAs).

Where can I run Kubernetes?
Kubernetes runs almost anywhere, on a wide range of Linux operating systems (worker nodes can also run on Windows Server). A single Kubernetes cluster can span hundreds of bare-metal or virtual machines in a datacenter, private cloud, or any public cloud. Kubernetes can also run on developer desktops, edge servers, microservers like Raspberry Pis, or very small mobile and IoT devices and appliances.

With some forethought (and the right product and architectural choices), Kubernetes can even provide a functionally consistent platform across all these infrastructures. This means that applications and configurations composed and initially tested on a desktop Kubernetes can move seamlessly and quickly to more formal testing, large-scale production, edge, or IoT deployments. In principle, this means that enterprises and organizations can build "hybrid" and "multi-cloud" environments across a range of platforms, quickly and economically solving capacity problems without lock-in.

What is a Kubernetes cluster?
The K8s architecture is straightforward. Only the control plane, which exposes an API and is in charge of scheduling and replicating groups of containers known as Pods, interacts directly with the nodes hosting your application. Kubectl is a command-line interface for interacting with the API to exchange desired application states or obtain detailed information on the present state of the infrastructure.

1. Kubernetes Control Plane Components
Below are the main components found on the control plane node:

etcd server

A simple, distributed key-value store which is used to store the Kubernetes cluster data (such as the number of pods, their state, namespace, etc.), API objects, and service discovery details. It should only be accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node to trigger the update of information in the node’s storage.

kube-apiserver

The Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers, and others), serving as a frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.

kube-controller-manager

Runs a number of distinct controller processes in the background (for example, the replication controller controls the number of replicas in a pod; the endpoints controller populates endpoint objects like services and pods) to regulate the shared state of the cluster and perform routine tasks.

When a change in a service configuration occurs (for example, replacing the image from which the pods are running or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.
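For example, changing only the image tag in a Deployment's pod template and re-applying the manifest is enough: the controller spots the difference and performs a rolling update. A sketch of the relevant fragment (image names and tags are hypothetical):

```yaml
# Re-applying the manifest with a new image tag is the whole "deploy" step;
# the controller notices the change and rolls pods over to the new version.
spec:
  template:
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.1   # was web:1.0
```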
cloud-controller-manager

Responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, when a controller needs to check if a node was terminated or set up routes, load balancers or volumes in the cloud infrastructure, all that is handled by the cloud-controller-manager.

kube-scheduler

Helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization. It reads the service's operational requirements and schedules each pod on the best-fit node.

For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.
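The 1GB / 2-core example above corresponds to resource requests declared in the pod spec, which the scheduler compares against each node's unallocated capacity (pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: heavy-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      resources:
        requests:
          memory: "1Gi"   # only nodes with at least 1 GiB free are considered
          cpu: "2"        # and at least 2 unallocated CPU cores
```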

kubectl
kubectl is a command-line tool that interacts with the kube-apiserver and sends commands to the control plane. Each command is converted into an API call.

2. Kubernetes Nodes

A node is a Kubernetes worker machine managed by the control plane, which can run one or more pods. The Kubernetes control plane automatically handles the scheduling of pods between nodes in the cluster. Automatic scheduling in the control plane takes into account the resources available on each node, and other constraints, such as affinity and taints, which define the desired running environment for different types of pods.

Below are the main components found on a Kubernetes worker node:

kubelet — the main service on a node, which manages the container runtime (such as containerd or CRI-O). The kubelet regularly takes in new or modified pod specifications (primarily through the kube-apiserver) and ensures that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.
kube-proxy — a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.
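The service exposure that kube-proxy implements is declared with a Service object. A sketch that forwards traffic to pods labeled `app: web` via a port opened on every node (all names and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web            # forwards to any healthy pod carrying this label
  ports:
    - port: 80          # the service's cluster-internal port
      targetPort: 8080  # the container port traffic is forwarded to
      nodePort: 30080   # opened on every node by kube-proxy
```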
3. Kubernetes Pods

A pod is the smallest unit of management in a Kubernetes cluster. It represents one or more containers that constitute a functional component of an application. Pods encapsulate containers, storage resources, unique network IPs, and other configuration defining how the containers should run.
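A minimal multi-container pod sketch — an application container plus a hypothetical log-shipping sidecar sharing the pod's network identity and a scratch volume (all names and images here are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  volumes:
    - name: logs
      emptyDir: {}              # scratch storage shared by both containers
  containers:
    - name: web
      image: registry.example.com/web:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # the app writes its logs here
    - name: log-shipper
      image: registry.example.com/shipper:1.0
      volumeMounts:
        - name: logs
          mountPath: /logs          # the sidecar reads the same files
```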

What is the difference between Kubernetes and Docker?
Docker is not really a Kubernetes alternative, but newcomers to the space often ask what the difference between them is. The primary difference is that Docker is a container runtime, while Kubernetes is a platform for running and managing containers across multiple container runtimes.

Docker is one of many container runtimes supported by Kubernetes. You can think of Kubernetes as an “operating system” and Docker containers as one type of application that can run on the operating system.

Docker is hugely popular and was a major driver for the adoption of containerized architecture. Docker solved the classic “works on my computer” problem, and is extremely useful for developers, but is not sufficient to manage large-scale containerized applications.

If you need to handle the deployment of a large number of containers, networking, security, and resource provisioning become important concerns. Standalone Docker was not designed to address these concerns, and this is where Kubernetes comes in.

The primary strength of Kubernetes is its modularity and generality. Nearly every kind of application that you might want to deploy can fit within Kubernetes, and no matter what kind of adjustments or tuning you need to make to your system, they’re generally possible.
