
Michael Levan

The Future Of Kubernetes

Wondering what will happen to a technology once you learn it, live it, and use it daily is something that pops into every engineer's mind. Constant questions like "will this technology stick around?" or "when should I start learning the new thing?" can throw us for a loop.

With Kubernetes, the question of “is this technology going to stick around” doesn’t come up all that much, and for good reason.

In this blog post, you’ll learn about what I believe the future of Kubernetes is and how you should think about it for the next several years.

Kubernetes Will Be Irrelevant

When thinking about the direction that a platform can go, I like to think about it with the end in mind. As we all know in technology, there’s a beginning, a middle, and an end.

As for the beginning of Kubernetes, I would say that we’re still in it. Organizations are still trying to adopt Kubernetes, and once adopted, engineers are still trying to make it work in the way that they’re hoping for in terms of how they need to run workloads.

The middle of Kubernetes will be like all technology middle grounds. It'll be adopted, widely used in production, and just as much of a day-to-day operation as using the UI of a public cloud. Most of the kinks will be worked out and adoption rates will be steady, along with success rates.

At the end, Kubernetes won't be the end of orchestration. It'll just be another platform that eventually gets replaced by something else. Think about it like this: Managed Kubernetes Services like Azure Kubernetes Service and AWS EKS already have the tools to put any label on Kubernetes that they want. With Azure Container Apps and EKS on Fargate, they could replace "Kubernetes" and call it something else. How? Because at the end of the day, an orchestration platform is an orchestration platform. It doesn't matter what name it has. The beginning of the end already exists. All that's needed is the name of the platform and a scalable approach.

I do believe that "the end of Kubernetes" will take a little longer to arrive than it does for a typical technology. The reason is that, as popular as Kubernetes is, it's not the easiest platform to manage and deploy. Products and platforms aimed at making Kubernetes easier have only started coming out in the past year or so, while Kubernetes itself hit the shelves for users in 2015, so there's a sizable gap there. It almost feels like Kubernetes just came out, but at the same time, it's been around for a while.

I decided to write this section first because thinking with the end in mind is crucial to understanding how the rest of Kubernetes' future, before that end, will play out. It's also crucial to understanding what you'll need before the end of Kubernetes arrives.

The platform may become irrelevant, but the doors it opened for us won't. The key takeaway is that Custom Resource Definitions (CRDs) and the ability to extend the API are what will carry forward into "the future" of orchestration platforms.

Environment Instead Of Platform

As of right now, Kubernetes is thought of as a mystery box. Depending on their experience level, engineers sort of know what's happening inside of it, but not really. Many engineers take to the cloud and deploy Kubernetes workloads without understanding what's happening inside of, for example, the control plane, because they don't have to manage it.

They don't know about the underlying components, which can in turn lead to a negative experience later on, something we'll talk about in the upcoming sections.

Because of that negative experience, Kubernetes will be thought about less like a platform and more like a datacenter. It’ll be thought of as “the datacenter of the cloud”.

With all of its moving parts, security needs, networking needs, and deployment needs, environments will be built out around the successful deployment of Kubernetes and its workloads.

What’s Happening Underneath The Hood

When it comes to Kubernetes, there’s a lot of abstraction. If you think about Managed Kubernetes Services like AKS and GKE, you don’t have to worry about the control plane. That means you don’t have to think about:

  • Scaling the control plane
  • The API server
  • The scheduler
  • The controller manager
  • etcd

These are arguably the most important parts of Kubernetes.

With Managed Kubernetes Services, you don't even have a proper way to see how all of those components are working.
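
Even a managed control plane isn't completely opaque, though. Below is a minimal sketch (my own, assuming a recent client-go and a valid kubeconfig at ~/.kube/config) that asks the API server's /readyz endpoint to report on its individual health checks, which is one of the few windows into those components that most managed services still leave open.

```go
// readyz.go - a hedged example, not from the original post.
// Probes the API server's /readyz endpoint with client-go so you can see
// the health of individual control plane checks (etcd, informers, etc.).
package main

import (
	"context"
	"fmt"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// /readyz?verbose lists each readiness check individually, even on a
	// managed cluster (RBAC permitting).
	raw, err := clientset.Discovery().RESTClient().
		Get().
		AbsPath("/readyz").
		Param("verbose", "true").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw))
}
```

If you just want a quick look, kubectl get --raw='/readyz?verbose' does the same thing from the command line.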

The question I constantly ask myself is "how does someone troubleshoot if something goes wrong?" For example, let's say you're having a problem with your Kubernetes cluster, and it turns out that the Container Network Interface (CNI) is having issues on the control plane, and therefore the worker nodes aren't able to connect.

If you don't know how the CNI works, and that worker nodes and additional control plane nodes won't connect to the primary control plane without a working CNI, how can you troubleshoot the problem? How could you know whether it's something you did or something the cloud provider did?
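
To make that scenario concrete, here's a minimal sketch (again my own illustration, assuming client-go and a valid kubeconfig) that lists every node's conditions. A broken CNI typically shows up as a Ready condition stuck at False with a "network plugin is not ready" style message, or as a NetworkUnavailable condition set to True.

```go
// node_conditions.go - a hedged example, not from the original post.
// Lists each node's conditions so networking problems are visible.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Print every condition; look for Ready=False or NetworkUnavailable=True.
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t%s\n", node.Name, cond.Type, cond.Status, cond.Message)
		}
	}
}
```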

Understanding what's going on underneath the hood is crucial for any engineer working with Kubernetes. Otherwise, you're flying blind.

Hybrid Solutions

At the time of writing this, there are four primary hybrid solutions:

  • Azure Stack HCI
  • Google Anthos
  • AWS Outposts
  • EKS Anywhere

They're all more or less doing the same thing at a high level - giving engineers the ability to run on-prem workloads as if they were running in the cloud.

You may be thinking to yourself, "why wouldn't I just run the workloads in the cloud?", and the answer to that question could be one of a million. Some common scenarios: organizations have legacy apps that they aren't ready to move to the cloud yet, they have latency concerns, they have security concerns, the lift and shift would be too great for the reward, or they simply want a combination of both on-prem and the cloud (the hybrid model).

The next question becomes "why would they use these services then?", and I feel the answer is the same reason people want to use OpenStack. It's like having a cloud, but on-prem.

For example, let’s say you already have containerized applications that are running in the cloud. Perhaps you already know EKS or AKS, and you want to containerize legacy apps and orchestrate them, but you don’t want to run them in the cloud. You could run them on-prem, with Kubernetes, the same way you run containerized apps in the cloud. It’ll look and feel the same. The only difference is that it’ll be on-prem instead of in the cloud. Engineering teams would be interacting with the apps and the interface the same exact way.

There will be organizations, at least for a very long time, that'll always have on-prem workloads. But just because those workloads are on-prem doesn't mean they can't get the benefits of cloud-native.

Serverless Kubernetes

Before Managed Kubernetes Services like AKS, EKS, and GKE, engineers had to create, manage, and update control planes and worker nodes. With AKS, EKS, and GKE, the control plane's day-to-day operations are now handled by the cloud provider. With those services, an engineer still has to worry about the worker nodes.

The next iteration of this would be that worker nodes aren’t managed from a day-to-day perspective by engineers either. That’s where Serverless Kubernetes comes into play.

There are a few services out there that do exactly this:

  • GKE AutoPilot
  • AWS Fargate
  • Azure Container Apps

"Serverless Kubernetes", as in removing the need to run day-to-day operations for the control plane AND the worker nodes, is as close as we can get to not needing the Kubernetes platform at all. At this point, the only things engineers have to think about are controllers, Custom Resource Definitions, and the Kubernetes API itself.

With Serverless Kubernetes, engineers are truly only worrying about managing their infrastructure with an API.

However, there is a catch here. Even though you’re not performing the day-to-day operations of the underlying infrastructure, things can still go wrong and you should still know about it. If the Kubernetes API, scheduler, or other components in the Kubernetes cluster are degrading for whatever reason, you need to know for troubleshooting purposes.

If you aren't managing it, there isn't much you can do about it, but at least knowing keeps you from going down a rabbit hole that's out of your control, and it lets you tell management what's happening.

It’ll still be a waiting game for it to be fixed, but at least you’ll have a root cause analysis for the organization.

As of right now, I believe Serverless Kubernetes is still too new and not used in production all that much, which makes how it'll be used an overall future prediction. I believe the primary concerns for organizations will be scale and latency. If the environment is completely out of your hands, you're relying on the underlying automation of these services to scale for you. A few folks at Microsoft, for example, have said that Azure Container Apps is great for testing right now but not for production, as things like namespace segregation don't exist yet. If issues like that get fixed, Serverless Kubernetes might be a solid contender.

Creating Kubernetes Resources

When Infrastructure-as-Code and Configuration Management started to become more of a reality for engineering teams, the idea was never for developers to work with them unless they really had to. The idea was to keep them more for Sysadmins and infrastructure engineers who needed to create, update, manage, and delete environments.

Then, this idea started to change a bit. We began to see tools like Pulumi and the AWS CDK emerge, which allow you to perform Infrastructure-as-Code with a general-purpose programming language like Go or Python.

Now, engineers can choose whatever language they want to create, update, manage, and delete environments.

I believe the same will occur for creating and managing Kubernetes resources.

As of right now, the primary method is with YAML. However, that’s far from the only option available.

When you're interacting with Kubernetes, all you're doing is "talking to" an API. Because of that, any language that can "talk to" an API can create resources in Kubernetes.

For example, here’s a link to some Go code that I wrote to create a Kubernetes Deployment and Service for an Nginx app.
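
In case you don't want to click through, here's a minimal sketch of my own (not the linked code) showing what that looks like with client-go; the deployment name, namespace, and image are placeholders.

```go
// create_deployment.go - a hedged, illustrative example.
// Creates an Nginx Deployment through the Kubernetes API using client-go,
// the same object a YAML manifest would describe.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The Go struct below describes the same Deployment you'd write in YAML.
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "nginx"},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{
					Labels: map[string]string{"app": "nginx"},
				},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:latest",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}

	result, err := clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created deployment %q\n", result.GetName())
}
```

Whether this code or a YAML manifest hits the API server, the request is the same; the language is just a matter of what your team already knows.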

The know-how to do this is already there, but not a lot of engineers are doing it. I believe this will become more relevant as people want to move away from YAML.

Dive Deeper

I’ve touched on this a bit throughout the blog post so far, but I’d like to call it out more directly.

There's far too much abstraction in today's world, and abstraction had a different purpose when it was originally introduced into technology.

Automation and repeatability were supposed to come after the knowledge gained from manual effort, as a solution to that manual effort.

Now, platforms, systems, and overall environments are automated and abstracted away without the engineer even truly understanding what’s happening inside of the environment.

Without the underlying knowledge, there’s no possible way for an engineer to properly troubleshoot, re-architect, or scale out environments.

I’ve heard time and time again “why do I need to know this if it’s abstracted away” or “why do I need to know the underlying components of Kubernetes if I’m not working with them”.

The reason why is because scaling out large environments, troubleshooting, and figuring out solutions to problems isn’t always going to be as easy as clicking a few buttons or running some Terraform code.

There’s an ongoing joke in today’s engineering world that if folks can’t find the answer on StackOverflow in 10 minutes, they say it cannot be done.

Engineers cannot continue to work in this fashion or it will destroy environments.

Kubernetes is no different. Engineers must dive as deep as they can into what’s quickly becoming the datacenter of the cloud, just like Sysadmins had to dive deep into system components.

Without truly going deep into a topic, you can never really know that topic at all.

If you’re curious about the prerequisites to Kubernetes and how to dive deep into them, check out this blog post.

Kubernetes Adoption and Incubators

For the past year or so, it seems like new Kubernetes projects, products, and incubators are popping up to solve certain problems in the Kubernetes landscape.

Some are focused on RBAC, others are focused on logging, some are focused on making Kubernetes easier, and everything in-between.

The CNCF has been working with a lot of these startups, and it's clear that the CNCF is putting a lot of time not only into Kubernetes in general, but also into the direction that Kubernetes is going via these products.

At some point, once more of these startups pop up, there will be a "tool belt" or some kind of "best practices for your environment" made up of the products that make it out of the startup phase and into the enterprise phase. Perhaps this will all be under the umbrella of the CNCF or of a "parent company" of sorts for all of the products.

Virtual Machines

This prediction is probably the most “hmm, I’m not sure about this” prediction. However, I do believe it makes sense because of legacy applications.

Think about HashiCorp Nomad - it’s an orchestrator and does the same thing as Kubernetes, except Kubernetes only supports containerized apps. Nomad supports any type of app.

For Kubernetes to stay competitive, this will have to be thought about and ultimately figured out.

The good news is, there are solutions that help with this.

KubeVirt provides a platform on which engineers can build apps for both containers and virtual machines in the same environment. Essentially, it allows you to run virtual machines on Kubernetes.

This goes back to the hybrid approach from an earlier section. Some organizations aren’t going to want to move everything to the cloud, and to take it a step further, some organizations aren’t going to want to containerize certain legacy applications. However, what they will want is the “power” behind Kubernetes to manage legacy applications.

This is where tools like KubeVirt come into play, and I'm certain that others will enter the market as well once Kubernetes becomes used more in production. If Kubernetes is truly a way to manage infrastructure with an API, that means we should be able to manage all infrastructure with it.
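
To show that a VM really can be just another API object, here's a minimal sketch of my own using client-go's dynamic client to create a KubeVirt VirtualMachine. It assumes KubeVirt is already installed in the cluster, and the spec fields are paraphrased from KubeVirt's getting-started examples, so check them against the KubeVirt API reference before relying on them.

```go
// kubevirt_vm.go - a hedged, illustrative example (assumes KubeVirt is installed).
// Creates a KubeVirt VirtualMachine custom resource with the dynamic client,
// treating the VM like any other Kubernetes API object.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	// The dynamic client works with any resource the API server knows about,
	// including CRDs like KubeVirt's VirtualMachine.
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	vmGVR := schema.GroupVersionResource{
		Group:    "kubevirt.io",
		Version:  "v1",
		Resource: "virtualmachines",
	}

	// Spec fields below mirror KubeVirt's minimal demo VM and are illustrative.
	vm := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "kubevirt.io/v1",
		"kind":       "VirtualMachine",
		"metadata":   map[string]interface{}{"name": "demo-vm"},
		"spec": map[string]interface{}{
			"running": true,
			"template": map[string]interface{}{
				"spec": map[string]interface{}{
					"domain": map[string]interface{}{
						"devices": map[string]interface{}{
							"disks": []interface{}{
								map[string]interface{}{
									"name": "containerdisk",
									"disk": map[string]interface{}{"bus": "virtio"},
								},
							},
						},
						"resources": map[string]interface{}{
							"requests": map[string]interface{}{"memory": "64Mi"},
						},
					},
					"volumes": []interface{}{
						map[string]interface{}{
							"name": "containerdisk",
							"containerDisk": map[string]interface{}{
								"image": "quay.io/kubevirt/cirros-container-disk-demo",
							},
						},
					},
				},
			},
		},
	}}

	created, err := dyn.Resource(vmGVR).Namespace("default").
		Create(context.TODO(), vm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created VirtualMachine %q\n", created.GetName())
}
```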

Cluster Management

Last but certainly not least, there’s cluster management. With everything from:

  • Hybrid-cloud
  • Multi-cloud
  • Clusters on-prem
  • Clusters in OpenStack
  • All of the different products and solutions

there’s a lot of infrastructure flying around all over the place.

Cluster management tools like Rancher and Azure Arc already exist, but they'll become more relevant as organizations begin to run multiple Kubernetes clusters. As of right now, fewer than 10% of organizations are running more than 50 Kubernetes clusters. Because that number is so low, you have to imagine that a lot of organizations are running a few Kubernetes clusters at most, and that doesn't mean those clusters are all in production.

Once Kubernetes becomes the "environment" instead of the "platform", more and more clusters will be deployed, and cluster management tools won't be optional, they'll be needed. Those cluster management tools will also give you the ability to easily manage the internals of your clusters, for example, managing RBAC across all of your clusters instead of per cluster.

Top comments (4)

Praful Patel

This is really a great prediction about the future of Kubernetes and what to focus on as a cloud engineer if someone really wants to learn.

adriens

You made me discover the AWS Outposts Family: thanks a lot for that.

devopstales

Hybrid container and VM solutions, something like Rancher Harvester, but I think the future is not KubeVirt (a VM in a container) but alternative engines like Kata Containers or Firecracker.

Michael Levan

Only time can tell! :)