With each new iteration of technology, or rather, of HOW you interact with technology, come new terms, focuses, and implementations. One of those “iterations” (or maybe not so new) is Platform Engineering.
In this blog post, you’ll learn why I believe Platform Engineering on Kubernetes will be something all engineers focus on for years to come.
The term “new” comes in all shapes and sizes. The ironic thing is that “new” is typically never “new”. It’s just an iteration of what already existed.
When thinking about Platform Engineering, the concept isn’t new. The only thing that’s “new” is the term “Platform Engineering”. The truth is, Platform Engineering has been around for a very long time. Engineers (including myself) have been building self-service capabilities for developers, internal engineers, and external engineers for years. It could’ve been something as simple as a CLI, a wrapper, or a script that made someone's life easier when interacting with a particular platform.
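As a concrete (and entirely hypothetical) example of that kind of self-service wrapper, here’s a minimal Python sketch that builds the `kubectl` command a developer would otherwise write by hand. The function name and the team-to-namespace convention are invented for illustration; they’re not from any real platform:

```python
# Hypothetical self-service wrapper: a developer supplies app-level inputs
# and never has to remember the kubectl flags or the namespace convention.

def build_deploy_command(team: str, app: str, image: str) -> list[str]:
    """Build the kubectl command for deploying an app.

    The "team-<name>" namespace convention is an assumed example policy.
    """
    namespace = f"team-{team}"
    return [
        "kubectl", "create", "deployment", app,
        f"--image={image}",
        f"--namespace={namespace}",
    ]

if __name__ == "__main__":
    # Print the command instead of running it, so the sketch stays
    # runnable without a cluster.
    print(" ".join(build_deploy_command("payments", "checkout", "checkout:1.2.3")))
```

Even something this small counts as Platform Engineering: it encodes a policy once so every engineer doesn’t have to re-learn it.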
The only difference is that now it has a name, and that’s a good thing. If something is being looked at as “new and emerging” but has actually been around for a long time, that means the idea behind it and the execution make sense, people are using it, and it can become more than just a fad.
The platform that sits underneath the stack can be anything. It could be Kubernetes or VMs. It could even be bare-metal servers.
However, with the capabilities (Argo, Flux, Istio, etc.) coming out around Kubernetes, it’s a safe bet that Kubernetes will be the underlying platform.
The enablement of this comes from:
- The ability to manage containerized applications.
- The ability to manage VMs (KubeVirt).
- The ability to create resources outside of Kubernetes with Kubernetes (Crossplane).
Here’s the thing about Kubernetes: it wasn’t anything new. There were ways to manage containers for years. A lot of organizations even created their own orchestrators and schedulers. However, Kubernetes did bring us something new.
How we could extend the management.
The key to the success of Kubernetes was the extensibility of its API. Want to manage something in Kubernetes that doesn’t exist as a built-in resource, and manage it with a Kubernetes Manifest? Create a CRD. Want Kubernetes to act on that new resource automatically? Build an Operator, which pairs a CRD with a controller that does the work.
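To make the CRD idea concrete, here’s a minimal sketch of a CRD manifest. The group, kind, and field names (`example.platform.io`, `Backup`, `schedule`) are made up for illustration; the `apiextensions.k8s.io/v1` structure is Kubernetes’ real CRD API:

```yaml
# Hypothetical CRD: the group, kind, and spec fields are invented
# for illustration, not from a real project.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.platform.io
spec:
  group: example.platform.io
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
```

Once this is applied, `kubectl get backups` works like any built-in resource, and an Operator’s controller can watch `Backup` objects and do the actual work.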
Kubernetes gave us the ability to extend its capabilities in a way we hadn’t seen before, and that’s why it’ll be the underlying platform of choice.
When thinking about overall capabilities, it’s all about the available tools. ArgoCD, KubeCost, Istio, and the hundreds of other tools in the cloud-native world.
As we’ve been seeing for a few years now, Kubernetes is still the underlying platform, but it’s not what everyone is talking about. Everyone is talking about the tools (platform capabilities) that exist and can be used on Kubernetes.
Kubernetes is the “boring” piece now (which is a good thing) and the tools being built to run on Kubernetes are the “new hot thing”.
The reason platform capabilities/tools are worth focusing on, and why they make sense for Platform Engineering overall, is that:
- These tools will continue to be created and maintained.
- New tools will keep popping up.
- Most importantly, engineers will need a way to interact with these tools without becoming experts in the tools.
The third point above is the most important.
Engineers need a way to use the tools without being experts in them. They need the tools’ capabilities, but they don’t need to “use the tools” directly. That’s where having a platform interface comes into play. You’re using the tools at an abstracted level, which gives you all of their capabilities without having to know the tool exists or become an expert in it.
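Here’s a sketch of what that abstracted level can look like, assuming Argo CD is the tool underneath. The `Application` fields follow Argo CD’s schema, but the `platform_deploy` function, the repo URL, and the defaults are hypothetical examples:

```python
# Sketch of a platform interface that hides a tool (here, Argo CD)
# behind one simple function.

def platform_deploy(app_name: str, repo_url: str, path: str,
                    namespace: str) -> dict:
    """Return an Argo CD Application manifest as a plain dict.

    The developer supplies only app-level inputs; they never have to
    know that Argo CD is doing the work underneath.
    """
    return {
        "apiVersion": "argoproj.io/v1alpha1",
        "kind": "Application",
        "metadata": {"name": app_name, "namespace": "argocd"},
        "spec": {
            "project": "default",
            "source": {
                "repoURL": repo_url,
                "path": path,
                "targetRevision": "HEAD",
            },
            "destination": {
                "server": "https://kubernetes.default.svc",
                "namespace": namespace,
            },
        },
    }

if __name__ == "__main__":
    manifest = platform_deploy("checkout", "https://example.com/repo.git",
                               "apps/checkout", "payments")
    print(manifest["kind"])
```

If the platform team later swaps Argo CD for Flux, only this function changes; the engineers calling it never notice.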
These tools are going to continue to be created, and they need a way to be used at a level where engineers don’t have to become experts in them. Why? Because with all the options available, it’s impossible to expect engineers to know them all inside and out.
As mentioned in this blog post, Kubernetes didn’t bring us anything new from a functionality perspective. Scheduling containers was something engineers had access to for years before Kubernetes. The key difference is HOW Kubernetes did it.
The “how” behind it, which is the extendable API, is why Kubernetes became so popular.
We can now do anything from managing VMs in Kubernetes to creating resources outside of containers (like an Azure vNet or an AWS S3 bucket).
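For example, here’s a hedged sketch of what that looks like with Crossplane. The `Bucket` kind shown assumes Upbound’s provider-aws; the exact API group and version depend on which provider (and version) is installed in your cluster, so treat the specifics as assumptions:

```yaml
# Sketch of a Crossplane managed resource. The API group assumes
# Upbound's provider-aws; names are illustrative.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: platform-example-bucket
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default
```

Applying this manifest makes the Crossplane controller create the actual S3 bucket in AWS and keep it reconciled, the same way a Deployment keeps its Pods reconciled.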
Having a platform under the hood that is fully flexible and extensible for anything we need it to do is the true key to iteration.