Marceli-Wac

Kubernetes - Explained Like You're Five

Kubernetes is, by definition, a container orchestration tool.

That's really all there is to it, so let's break it down! Before that, however, I'm going to give a quick overview of Docker, mainly because it's a related topic and will be beneficial for those who want to get into Kubernetes but never really learned Docker.

As a disclaimer, this post was written as a reply to a question by Hassan Sani. It is not the most comprehensive explanation, but rather a quick description for those who want an explained-like-you're-five answer to the original question.

What is Docker?

Docker is a piece of software that fundamentally "packages" applications into runnable programs. Of course there's a little more to it than that, but let's assume Docker does just this. As an example, think of an application you can write in your favourite language. That application serves some purpose, and you as a developer probably know how to get from that source code to a running program. This can be done by means of compilation (e.g. C++, Java) or interpretation (e.g. JavaScript, PHP). Regardless of the language specifics, there is a connection between the raw code and its product in action.
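As a toy illustration of that connection - the file names and commands below are made up for the example, not taken from any particular project:

```bash
# Two made-up routes from source code to a running program:
g++ todo.cpp -o todo && ./todo   # compiled: build a native binary, then execute it
node todo.js                     # interpreted: a runtime executes the source directly
```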

Now that you have your program, you would probably like to spread the love and allow others to benefit from the long hours you've put into its development. So let's say you want to share that great To-do app you've just written with your friend. Maybe he has the experience to build it from source, maybe not. Either way, it might not even matter, because if he uses a different operating system (OS), chances are you are relying on libraries installed locally on your system, so the code won't run on his machine anyway.

That's where Docker comes in. Remember when I said it takes your program in its source-code form and transforms it into a runnable?

But my friend can't run my todo.exe on his Mac!

I know, I know! With Docker, running any application on any OS is possible, because the "packages" created by Docker are not just executables; instead they are images of an entire OS with your application pre-loaded.

That seems like a lot of unnecessary stuff coming with my application!

It might, but that's not the whole story! Nowadays, minimal Linux distributions like Alpine are widely available and take up only around 5 MB.
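As a rough sketch of what that "packaging" looks like in practice - assuming, purely for illustration, that the To-do app is a Node.js application listening on port 3000 - a Dockerfile might look like this:

```dockerfile
# Hypothetical Dockerfile for the To-do app; the base image, port and start
# command are assumptions for this example and depend on your actual stack.

# Start from a small Alpine-based image with Node.js pre-installed.
FROM node:18-alpine

# Directory inside the image where the application will live.
WORKDIR /app

# Copy the dependency manifests first so the install step can be cached,
# then install the dependencies inside the image.
COPY package*.json ./
RUN npm install

# Copy the rest of the source code into the image.
COPY . .

# Document the port the app listens on and define the start command.
EXPOSE 3000
CMD ["node", "index.js"]
```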

To use it, all you have to do is follow a few simple steps (see a decent guide below) and transform your application code into a Docker image. Share it with your friend, and now all he has to do is find himself a Docker runtime to run it! Easy peasy!
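Those "few simple steps" boil down to something along these lines - the image and file names here are made up for the example:

```bash
# Build the image from the Dockerfile sketched above, then hand it over as a file.
docker build -t todo-app .            # package the source code into an image
docker save todo-app -o todo-app.tar  # export the image so it can be shared

# On your friend's machine, the only prerequisite is a Docker runtime:
docker load -i todo-app.tar           # import the shared image
docker run -p 3000:3000 todo-app      # start it and expose port 3000 on localhost
```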

but...

Alright, so you're saying that your friend does not want to download extra packages just to try out your To-do app. Fair enough, I've got you covered. It turns out that Kubernetes is exactly what can help you achieve that!

The power of Kubernetes

Let's recap quickly where we are. You have a To-do application that you've graciously written to help your friends get through their day, and because they use some fancy OS that you don't have access to, you've packaged your application into a Docker image. Now all they have to do is run the Docker image and profit. Except they can't do that, because all that docker-engine mumbo jumbo seems like a lot of effort and they don't want to clutter their OS with unnecessary packages.

Okay. As I've said, Kubernetes has got you covered. Imagine that the To-do application you have written can run as a standalone web application. All this means is that you can start it somewhere on a server and make it available at some endpoint (I'm looking at you, localhost:3000). Maybe you are a CLI ninja who enjoys setting up AWS EC2 servers, security groups, load balancers and so on, or maybe all you want to do is expose that Docker image you've just built without all the faff surrounding the setup.

I'll quote again - "Kubernetes is a container orchestration tool" - and we're finally at the stage where we get to demystify the orchestration part of it. Running publicly available software of any kind, size and shape always comes with several considerations. These might include scaling (responding to increased use, or load, of your application), delivering (exposing your service to other users) or securing (restricting access to, or otherwise protecting) your application, but also covering more pragmatic scenarios. What happens when your application crashes? Would it restart itself? What about existing sessions? And what if you've made a mistake and there's a gigantic memory leak that now prevents you from even accessing the server?

Kubernetes deals with all of these questions and considerations in an elegant manner. It orchestrates your containers, and it does that by tying your packaged software to the infrastructure, in the same way Docker ties your code to its actionable form - the runnable software.
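To make that a little more concrete, here is a sketch of the kind of day-to-day commands this boils down to once the To-do app is running on a cluster; the deployment name `todo-app` is an assumption made up for this example:

```bash
# Hypothetical kubectl commands against a deployment called "todo-app".
# (Crashed copies are restarted automatically by the cluster; no command needed.)
kubectl get pods                                  # list the running copies and their health
kubectl scale deployment todo-app --replicas=5    # respond to increased load by adding copies
kubectl rollout restart deployment todo-app       # roll out a restart of every copy
kubectl logs deployment/todo-app                  # inspect the application's output
```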

I'm not going to go into great depth explaining the concepts and structure of Kubernetes, because there are loads and loads of great resources that will do it better than I can, but the bottom line is: you should RTFM (which you will find here). I'll also attach one of the Dev.to tutorial series I've found to be both helpful and reasonably in-depth.

The things you should know

  1. Kubernetes operates on a cluster. This means there is usually more than a single (virtual) machine; this provides redundancy and scalability options, and hosts the core Kubernetes services (kube-apiserver, kube-controller-manager and kube-scheduler). There are of course more services you can use to tweak your cluster to do what you like, such as KubeDNS - the DNS service responsible for name resolution within the cluster.
  2. A cluster comprises nodes. The least you can have is a single master node. Master is the keyword here, since Kubernetes distinguishes between two types of nodes: masters and non-masters. The bottom line of the difference is that master nodes govern "what goes where" within the cluster. This means that if you deploy (I will get to this) your application on the cluster, the Kubernetes master will decide what actual resources should be provisioned, and what objects and structures should be created and where. The non-master nodes serve as an expansion to the architecture but do not run services such as kube-apiserver (instead they run kubelet and kube-proxy, which allow them to "listen" to the master nodes).
  3. kube-apiserver is a service that provides an interface to the cluster. This means that to deploy your application, you provide the cluster with a configuration file that describes exactly what you want. For example, you might specify which Docker image you would like to run, which ports should be open for communication, under what domain it will be available, and how many replicas (copies of your application running in parallel) you would like (see the sketch after this list). The complete list of possible configuration options is extensive, and there are multiple resource types you can use to make the service available, each with their own benefits, drawbacks and quirks.
  4. Most commonly, the kubectl CLI utility is used to communicate with the cluster. Yes, it turns out you will need some of those CLI-ninja skills after all, but on a positive note - it's not complicated at all, and the configuration files come in multiple supported formats, including JSON and YAML, so the process can be really painless.
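To tie points 3 and 4 together, here is a minimal sketch of such a configuration file for the To-do app; the image name, port and replica count are assumptions made up for this example:

```yaml
# Hypothetical manifest for the To-do app; names, ports and image are placeholders.
apiVersion: apps/v1
kind: Deployment            # describes how many copies of the app to run
metadata:
  name: todo-app
spec:
  replicas: 3               # run three copies of the container in parallel
  selector:
    matchLabels:
      app: todo-app
  template:
    metadata:
      labels:
        app: todo-app
    spec:
      containers:
        - name: todo-app
          image: myregistry/todo-app:1.0   # the Docker image built earlier (assumed name)
          ports:
            - containerPort: 3000          # the port the app listens on
---
apiVersion: v1
kind: Service               # exposes the replicas under one stable address
metadata:
  name: todo-app
spec:
  selector:
    app: todo-app           # route traffic to pods carrying this label
  ports:
    - port: 80              # port exposed by the service
      targetPort: 3000      # container port to forward traffic to
```

Handing this file to the cluster is then a single command, for example `kubectl apply -f todo-app.yaml`, after which the master decides where those replicas actually end up running.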

There is a more complete list of concepts available on the Kubernetes website. I suggest you give them a quick look if you want to get a better grasp of who does what and what goes where.

Should I install it on my machine or go for the cloud?

Finally, before you dive into installing Kubernetes clusters on your bare-metal servers, look for tools that will help you with this task. I have found kops (Kubernetes Operations) to be an extremely useful tool. The best introduction to Kubernetes on AWS I have seen is, by far, the following YouTube series by Jeffrey Taylor.
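As a very rough sketch of what that looks like - the bucket name, domain and zone below are placeholders, and the exact flags can vary between kops versions, so treat this as an outline rather than a recipe:

```bash
# Hypothetical kops workflow on AWS; all names below are made-up placeholders.
export KOPS_STATE_STORE=s3://my-kops-state-bucket   # S3 bucket where kops keeps cluster state

# Describe and create the cluster in one step.
kops create cluster --name=todo.k8s.example.com --zones=eu-west-1a --yes

# Check that the control plane and nodes have come up correctly.
kops validate cluster --name=todo.k8s.example.com
```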

Top comments (6)

Mapogo Lekgwathi

Yeah but what if my todo.exe isn't a Web application?

Fernandino Silva

It's not uncommon for people who want to use k8s to not actually NEED it - k8s is just the latest trend and some people use it just because of that. In your case you might consider other orchestrators, like Nomad by HashiCorp. The article is very, very good though.

Marceli-Wac

The article above used a To-do app as an example for simplicity of explanation.

I would imagine that any workload run on Kubernetes adheres at least in some way to a "connected" service architecture. While this could be something as simple as a program that takes some input X and produces output Y, using Kubernetes at the very least might provide you with the benefit of horizontal scalability.

As mentioned in one of the replies by Fernandino Silva, not every workload requires Kubernetes. Running a standalone desktop application probably wouldn't need a whole cluster behind it. Its back-end infrastructure, on the other hand, could very well be deployed via Kubernetes, especially in the case where the front-end app is just one of many clients.

Sapna T

Thanks for drawing attention to Kubernetes.
I hope you might also like this great blog on Kubernetes' benefits in business --
Read & share: addwebsolution.com/blog/how-kubern...

Pavel Ravits

I understand that there are currently tools being built that act as a layer on top of Kubernetes - or am I wrong, and do you have to do this kind of deployment with Kubernetes knowledge?

Marceli-Wac

There definitely are tools to manage Kubernetes clusters (Rancher being one of them). Another important service you could look into is the Web UI (also known as the Dashboard, see the GitHub repository), which runs on the Kubernetes cluster itself and provides a nice web-based UI for monitoring and managing the k8s cluster.

On top of that, there are other services that work similarly to Kubernetes, such as AWS ECS (Elastic Container Service), which allow you to deploy containerised applications without the need to configure the clusters or servers they would run on. Essentially, ECS abstracts the whole infrastructure layer away, but as we all know, this comes with several trade-offs. Personally, I've found it significantly cheaper to run an entire cluster on AWS using kops as opposed to ECS, but for me the extra complexity of configuring the clusters was not a problem. For those who wish to just deploy their application, ECS might be a very viable option.