
archi-jain


Kubernetes: All you need to know to get started (Part 1)

What Is Kubernetes? Kubernetes Benefits and Operating Principles

Let's start our discussion by answering the question: what is Kubernetes? At its core, Kubernetes is a container orchestrator. That means it is Kubernetes' job to start and stop container-based applications based on the requirements of the systems administrator or developer. One of the key facets of Kubernetes is workload placement. If I need to deploy a container-based application into a cluster, how do I deploy it? Which servers does it physically live on? Does it need to be co-resident with other services or containers inside the cluster?
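If it does, that requirement can be written into the code that describes the application. As a minimal sketch (the labels, names, and image below are hypothetical examples, not from this post), a Pod spec can carry placement hints such as node selection and co-residency rules:

```yaml
# Minimal sketch of placement hints in a Pod spec.
# All labels, names, and the image are hypothetical examples.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  # Only consider nodes that carry this label.
  nodeSelector:
    disktype: ssd
  # Ask the scheduler to co-locate this Pod with Pods labeled app=cache.
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: cache
          topologyKey: kubernetes.io/hostname
  containers:
    - name: web
      image: nginx:1.25
```

Here the `nodeSelector` restricts which nodes are eligible, and the `podAffinity` rule asks the scheduler to place this Pod on the same node as Pods labeled `app: cache`.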

Kubernetes can manage all of that for us. And speaking of Kubernetes managing things for us, it also provides an infrastructure abstraction. As a developer, if I need to deploy an application into production, I really don't want to care about which server it lives on, or have to go configure a load balancer to send traffic to my application. That's all handled for me under the hood by Kubernetes.

One of the other core ideas behind Kubernetes is the concept of desired state. We can define what our applications or services look like in code, and it's Kubernetes' job to make sure that our system meets that defined desired state. Perhaps our system is composed of a collection of web application containers and database containers, maybe some middleware or even a caching tier. We write the code that describes what our system looks like, hand it off to Kubernetes for deployment, and it's Kubernetes' job to make that happen, keeping our system in that defined desired state.

Let's look at some of the key benefits of using Kubernetes. First off is speed of deployment. Kubernetes gives us the ability to deploy container-based applications very quickly, which lets us get code from a developer's workstation into production fast and absorb change quickly. That speed of deployment allows you to iterate rapidly and ship new versions of code, enabling new capabilities for your organization's applications.

Next up is Kubernetes' ability to recover very quickly. When we define our system in code along with its desired state, say a collection of web app containers, and something causes the system to fall out of that desired state, perhaps a container crashed or a server failed, it's Kubernetes' job to bring the application back to the defined desired state by deploying new containers, making sure the full set of web app containers is up again, or whatever resource it is that defines our system's desired state.

Finally, Kubernetes lets us hide infrastructure complexity in the cluster. Things like storage, networking configuration, and workload placement are core functions of Kubernetes, so developers don't have to worry about them when deploying container-based applications into Kubernetes.

Now that we know what Kubernetes is and what its benefits are, let's look at some of the basic operating principles behind it. First up is desired state, or declarative configuration. This is where we define our application's, or really our deployment's, state in code. We define what we want to deploy, and Kubernetes makes it happen for us. It pulls the specified container images, starts them up, and possibly even allocates load balancers and public IPs if that's what's defined in our deployment's code. We write the code describing the deployment; Kubernetes does the work to bring it online in the desired state.

Next, controllers, or control loops, have the responsibility of constantly monitoring the running state of the system to make sure it is in that desired state, and if it's not, a controller will try to bring the system back into the desired state. For example, if we've defined that we want three web app containers online, it's Kubernetes' job, more specifically a controller's job, to ensure that three web app containers are online. A controller will start the three replicas, and later, if one of them fails or goes offline, the controller will create a new web app container to replace the one that failed. There are many different types of controllers available in Kubernetes for various scenarios, and we'll cover them throughout this series. The key concept here is that controllers are what make changes to the system to ensure the desired state.

Another core principle is the Kubernetes API. The Kubernetes API provides a collection of objects that we can use to build and define, in code, the systems we want to deploy. The objects defined in our code describe the desired state of the applications or systems we want to run in Kubernetes.
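To make these principles concrete, here is a minimal sketch of a Deployment object (the name and image are hypothetical examples) that declares a desired state of three web app replicas:

```yaml
# Minimal sketch of a Deployment declaring a desired state of three
# web app replicas. The name and image are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired state: three web app containers online
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

We hand this manifest to the cluster, for example with `kubectl apply -f`, and the Deployment's control loop keeps three replicas running, replacing any that fail.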

The Kubernetes API is implemented and made available via the API Server. The API Server is the central communication hub for information in a Kubernetes cluster. It's where we, as administrators and developers, interact with Kubernetes to deploy and manage workloads, and it's also where the components of a Kubernetes cluster interact with each other to understand the current state of the system and to make changes to that state when needed to ensure the desired state.
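As a simplified illustration (trimmed, and not output from any particular cluster), an object read back from the API Server carries both the desired state we declared under `spec` and the state the controllers have most recently observed under `status`:

```yaml
# Trimmed sketch of a Deployment as the API Server might return it:
# spec carries the declared desired state, status the observed state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3            # what we asked for
status:
  replicas: 3
  readyReplicas: 2       # what controllers currently observe
  unavailableReplicas: 1
```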
