
Why You Should Learn Kubernetes

Introduction

Kubernetes has been one of the fastest-growing open-source projects since Google released it in 2014. It has become the de facto platform for building and deploying cloud-native applications in the public cloud.

As an increasing number of companies shift to cloud-native applications to modernize their systems into smaller, microservice-based components, they require a robust technology that can manage scalability, reliability, and automated deployment. This enables them to operate with greater agility as the demand for their application increases.

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.

Let’s take a look at why Kubernetes is so useful today.

In IT, there are three common ways to deploy applications (predating the advent of Function as a Service, known as FaaS):

Figure: 1.0 "Evolution of application deployment"

Traditional deployment

In the early days of computing, organizations ran their applications on their own physical servers (an on-premises setup). With this approach, you cannot define resource boundaries between applications on a physical server, which can cause resource allocation issues.

For example, suppose multiple applications run on the same physical server. One application can take up most of the resources, and as a result the other applications underperform (see Figure: 1.1). A solution is to host the other applications on a different physical server, but this approach might not scale: you end up buying more and more physical servers to accommodate the load as it increases, and spreading applications across servers often leaves resources on each physical server underutilized. This is where the virtualized deployment era comes in.

Figure: 1.1 "No resource boundaries between applications"

Virtualized deployment

As a solution to the limits of traditional deployment, virtualized deployment was introduced. It lets us isolate applications in separate Virtual Machines (VMs), so that one application cannot take up another's resources even when they share a single server (see Figure: 1.2). Multiple VMs can run on a single physical server, which reduces hardware costs and lets us maintain fewer physical servers than the traditional deployment approach.

But here’s the thing: with virtualized deployment there are still underutilized resources. Each VM runs its own full operating system, which consumes resources on the physical server. What if we could remove that extra operating system layer and free those resources for our applications? This is where containerized deployment comes in.

Figure: 1.2 "Virtualized Deployment to define resource boundaries"

Container deployment

Container deployment is similar to virtualized deployment in that it isolates your applications from one another, this time in "containers". Containers are lightweight compared to VMs (see Figure: 1.3) because they don’t carry a whole operating system layer. Like VMs, containers have their own filesystem, memory, and CPU allocation, but all containers share the host operating system's kernel, which is typically Linux.

Figure: 1.3 "Container has its own filesystem"

To run containers, we need a container runtime on top of our physical server, which is typically Docker. (Docker is just one container technology; many other container technologies exist on the market, see link.)

Since containers are lightweight, we can easily add and remove applications without rebooting the server.
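As a quick illustration, here is roughly what that lifecycle looks like with the Docker CLI. This is a sketch that assumes Docker is installed; the image (`nginx`) and container name (`web`) are just examples:

```shell
# Start a containerized app in the background, in seconds
docker run -d --name web -p 8080:80 nginx

# Remove it just as quickly -- no server reboot required
docker stop web && docker rm web
```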

This is where the microservices pattern became popular. With microservices (see Figure: 1.4), we can host each microservice in its own container. We can have hundreds or even thousands of running containers on a single physical server. Just imagine that 🤯.

Figure: 1.4 "Applications are decomposed into smaller services, known as Microservices"

Containers bring many benefits. With containers, we can be more agile, scale our applications on demand, and achieve portability, so we don't end up with dependency issues when deploying to production, because a container is packaged with all the dependencies the application needs to run.

But that great power comes with great responsibility 💪🏻

How do we manage our containers? I mentioned earlier that containers enable us to deploy hundreds or even thousands of containers on a physical server. How do we start and stop each individual container? How do we ensure each individual container is healthy? How do we coordinate them? How do you handle scaling, deployment, load balancing, service discovery, and even rolling updates? These are all things you need to keep in mind when running containers in production.

Of course, we could write scripts to handle these tasks, but that amounts to inventing a new technology, and that's rarely ideal. For most companies it isn't worth developing their own tooling; they want a proven, working technology from the market that is capable of doing all this. This is where container orchestration comes in.

What is Container Orchestration?

Container orchestration is the automated management of containers. It solves the problems that arise when managing containers at scale: it handles scaling, automated deployments, and health checks for each individual container, and it keeps containers alive even under uncertain circumstances.
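In Kubernetes, these responsibilities are expressed declaratively. As a minimal sketch (the name `hello-web` and the image are hypothetical), this Deployment manifest asks the orchestrator to keep three healthy replicas running and to restart any container that fails its health check:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical app name
spec:
  replicas: 3                # the orchestrator keeps 3 copies running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.27  # any containerized application image
          ports:
            - containerPort: 80
          livenessProbe:     # health check: restart the container if it fails
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
```

You never start or stop the containers yourself; you declare the desired state and Kubernetes continuously reconciles reality toward it.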

Container orchestration works like a conductor in a musical orchestra, coordinating each individual instrument to perform great music (see Figure: 1.5).

Figure: 1.5 "Container Orchestration is like an Orchestra in Music"

There are many container orchestration tools to choose from, but Kubernetes is one of the most popular.

(Docker Swarm can also manage containers, but it doesn’t scale as well and has limited features compared to Kubernetes. See link: Docker Swarm vs. Kubernetes.)

Benefits of Learning Kubernetes

Industry demand

Big tech companies like Google, Amazon, and Microsoft are heavily invested in this technology, which creates huge job opportunities for engineers who learn it.

Also, with more and more companies embracing the Kubernetes model to deploy their cloud-native applications, there is huge community support in the market, which helps you become productive with the technology faster.

In the 2024 Stack Overflow Developer Survey, Kubernetes ranked among the top 5 most-used tools by professional developers.

Figure: 1.6 "Kubernetes ranked in the top 5 tools most used by developers"

Portability

One of the best selling points of Kubernetes is its use of containers to run applications (thanks to containerization tools). Since containers are portable and carry their required dependencies with them, you can run your applications in a wide variety of environments. This approach reduces the number of issues that show up in production because of missing configuration.

Gone are the days of saying...

“Your application is running in DEV but not in PROD. That’s because there’s a different configuration in the environment.”

Scalability & Flexibility

One of the common issues we face with on-premises setups is scaling our applications, for example horizontal scaling. Horizontal scaling in an on-premises setup is a nightmare for some, because it typically means buying a new physical server and setting up all the configuration required to spin it up. This approach is not very productive, and you’ll waste a lot of time.

With Kubernetes, you can easily scale your applications up and down depending on demand, whether through horizontal or vertical scaling (see: Horizontal vs. Vertical Scaling).

It is flexible because it adapts to your needs: when demand is low it scales down, and when demand is high it scales your resources up.
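As a hedged sketch of what that looks like in practice, assuming a Deployment named `hello-web` (a hypothetical name) already exists in your cluster:

```shell
# Manually scale the Deployment to 10 replicas on demand
kubectl scale deployment hello-web --replicas=10

# Or let Kubernetes scale between 2 and 10 replicas based on CPU usage
kubectl autoscale deployment hello-web --min=2 --max=10 --cpu-percent=80
```

The second command creates a HorizontalPodAutoscaler, so scaling decisions happen automatically from then on.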

High Availability & Stability

Kubernetes is self-healing by nature: it monitors our applications to check whether they are healthy. If an instance is unhealthy, Kubernetes replaces it with a new one so the application becomes functional again (see the figure below). With this feature, we can rely on the technology to keep our application reliable under uncertain circumstances.

Figure: 1.7 "If a container isn't healthy, Kubernetes replaces it with a new one"

If a recent deployment didn’t work as expected, Kubernetes can also seamlessly roll it back.
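Assuming the same hypothetical Deployment name, a rollback with kubectl might look like this:

```shell
# Inspect the revision history of the Deployment
kubectl rollout history deployment hello-web

# Roll back to the previous working revision
kubectl rollout undo deployment hello-web

# Watch the rollback complete
kubectl rollout status deployment hello-web
```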

Community & Support

Cloud providers such as AWS, Azure, and GCP have created managed Kubernetes services that let customers use Kubernetes with ease, without worrying too much about the underlying infrastructure. This is a huge advantage for anyone who wants to try Kubernetes immediately 👏🏻.

There are also lots of community-developed tools that make your Kubernetes setup more versatile, such as integrating it into a CI/CD pipeline or following the GitOps approach. Tools like ArgoCD, Kustomize, and Helm can make you more productive as you embrace the Kubernetes model in your deployments.

Where should you start?

Start with the basics. I will share below the resources that helped me get started on my journey of learning Kubernetes.

  1. First, get a solid understanding of how containers work. Start using a container technology like Docker and containerize your application.

    Familiarize yourself with creating a Dockerfile from scratch; learning the important Dockerfile instructions like FROM, COPY, ENTRYPOINT, and CMD will help.

    Get your hands dirty with the Docker CLI: commands like docker build, docker tag, and docker push.
    Learn how to troubleshoot containers using docker logs, and dig deeper with docker exec to inspect a container's filesystem.

  2. Once you know containers, learn how to work with YAML. You’ll rely heavily on YAML once you start working with Kubernetes, so a basic understanding of how YAML files work is a huge advantage.

  3. Next, set up a Kubernetes cluster on your local machine. Docker Desktop makes this easy: just enable Kubernetes in its settings.

    Deploy lots of applications and get your hands dirty. Understanding Kubernetes objects like Pods, Deployments, and Services is a good start to getting your applications up and running.

    You’ll heavily use Kubernetes CLI commands like kubectl apply -f <filename.yaml>, kubectl get pods, kubectl describe, and kubectl exec to interact with your applications.

  4. Apply what you learn out there. If your team or organization hasn’t adopted a container orchestration tool like Kubernetes yet, introduce one. If you’re already using it, collaborate with the people who use Kubernetes heavily in their day-to-day jobs and volunteer to help them with those tasks.

  5. This is optional, but get certified! Having a certification in a particular technology can enhance your professional credibility and recognition within the industry, and it gives you an edge over those who don’t have one. It also serves as a personal accomplishment in your journey as a Software Engineer and can help you stay up to date with the latest trends in technology.

    Certifications like CKAD, CKA, and CKS are great options to validate and demonstrate your credibility and competitiveness in the Kubernetes field.

    These certifications are offered by the Cloud Native Computing Foundation. They have a cost, and you will need to review and practice a lot to pass them.

    I just got my certification in CKAD last year and it was one of the best decisions that I have made.

    Having a certification is a win-win scenario for Software Engineers!
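To tie steps 1 and 3 together, here is a minimal sketch of a Dockerfile using the instructions mentioned above. The base image, file names, and start command are hypothetical, chosen here for a Node.js app:

```dockerfile
# Minimal Dockerfile sketch for a hypothetical Node.js app
FROM node:20-alpine
WORKDIR /app
# Copy the dependency manifest first so Docker can cache the install layer
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source
COPY . .
# Default command when the container starts
CMD ["node", "server.js"]
```

From there, you could build and push the image with docker build and docker push, and point a Kubernetes Deployment at the resulting image.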

Call to Action

Ready to dive into the Kubernetes world? Start your learning journey today with these beginner-friendly resources and courses.

Let me end this blog post with a quote from one of the most famous Kubernetes advocates.

“Kubernetes has become the de facto standard for container orchestration. Its ability to abstract the underlying infrastructure makes it easier for developers to focus on writing code rather than managing infrastructure.”

— Brendan Burns, Co-Founder of Kubernetes and Distinguished Engineer at Microsoft.

If you enjoy this blog, please give me a heart reaction ♥️. Comment below for some additional tips as well 😊

“Don’t be a mediocre Software Engineer”

Follow me on Twitter: https://twitter.com/llaudevc/

Top comments (9)

Anchal Rohit

Very well written. Kubernetes is definitely becoming the way for all deployments, though it does have its learning curve if you bring it in at a later stage, once you are already working with a set of microservices deployed in a conventional way. And the managed services do get a bit expensive.

Vince Llauderes

Indeed! 💯 Kubernetes all the way

Vince Llauderes

Yes, I agree 🙂. Managed Kubernetes services are a bit expensive when not used properly. You need to get the most out of the Kubernetes environment, for example by using a multi-tenancy design when hosting your system.

Markus

One question still remains: why would one opt for Kubernetes and not other cloud services, like AWS Lambdas, for creating microservice architectures?

Kaye Alvarado

K8s ftw! 💯

Apple Pangantihon

Vincent Llauderes, your work is commendable. Although I'm not in IT, you communicate the topic clearly and simply. Keep writing and sharing your knowledge to inspire a wider audience.

Pedro Oliveira

Guys, looking for better pricing, what do you prefer: dedicated machines for container deployments, or serverless deployments?

Vince Llauderes

Hi, Pedro. When it comes to pricing, it would be better to use serverless deployments, so that you won't pay up front for the cost of setting up a new dedicated machine, like manpower, a cooling system, etc.

With serverless, you'll be able to utilize cloud benefits such as the pay-as-you-go model, and you won't have the headache of maintaining a server.

jean de dieu

Very well, thanks!