My journey in this space started in 2015. At small meet-ups and local conferences, I heard whispers about containers and this thing called Kubernetes. It was an abstraction simple enough to grasp in a sitting. Kubernetes - okay, you’ve got a pod which contains an app, and the pod has some labels attached to it, and then you’ve got a service. The service is a networking abstraction: it routes traffic to your pods via label selectors. If you match the service’s label selectors to your pod’s labels, traffic goes to the right place. There are controllers in Kubernetes that ensure you’re running what you meant to run, and a networking overlay that facilitates communication between pods within the cluster. Also, become a YAML expert. Great. Now what? Well, actually a lot.
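That label-selector wiring looks roughly like the minimal sketch below - the names and container image are placeholders, not from any real deployment:

```yaml
# A pod carrying a label the service will match on.
apiVersion: v1
kind: Pod
metadata:
  name: hello            # placeholder name
  labels:
    app: hello           # the label the service selects
spec:
  containers:
    - name: web
      image: nginx:1.25  # placeholder image
      ports:
        - containerPort: 80
---
# A service whose selector matches the pod's labels,
# so traffic sent to the service reaches the pod.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello           # must match the pod's labels above
  ports:
    - port: 80
      targetPort: 80
```

If the selector and the labels drift apart, the service simply has no endpoints - which is exactly the "match them up and traffic goes to the right place" rule in practice.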
Interest in Kubernetes spread like wildfire, and the project became even more accessible with the creation of the Cloud Native Computing Foundation (CNCF). An influx of ideas turned into code and contributions. We had to think about storage, role-based access control, sidecars, and the interoperability of all kinds of abstractions to make things work in a diverse set of environments. Interoperability was a key word often emphasized by Brian Grant (original lead architect of Kubernetes at Google), who advocated for a world where people and projects could work together rather than break off into disparate branches.
The fire continued to blaze onward. We created SIGs - Special Interest Groups - to gather people weekly or bi-weekly to discuss specific areas of interest. I co-created and co-led SIG-Apps. My interest was in making it easy to build, install, and manage applications in Kubernetes, and in the tooling we needed on top of it. I contributed to Helm and Draft in particular around this time, as there was a surge of tools in the space. More and more people gathered, discussed, demoed, and proposed. More processes and automation bots appeared.
But it turns out you can’t just keep churning at that pace without hitting bottlenecks. To ensure that we could continue to trust the Kubernetes codebase without hindering progress, the community focused on extensibility mechanisms: aggregated API servers and Custom Resource Definitions (CRDs). Shoutout to the good folks at Google and Red Hat for making this happen. I think a large part of the success of Kubernetes comes from its emphasis on communication, a belief that there is power in a diverse community, and the conviction that figuring out how to work together is worth it. To me, the extensibility features of Kubernetes are a product of these fundamental values.
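To give a feel for what that extensibility means in practice: a CRD teaches the API server a brand-new resource type without touching core code. A minimal sketch (the `example.com` group, `Backup` kind, and `schedule` field are all made up for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: backups.example.com
spec:
  group: example.com        # made-up API group
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string  # e.g. a cron expression
```

Once this is applied, `kubectl` can create and list `Backup` objects like any built-in resource, and a custom controller can reconcile them - core Kubernetes never has to know what a "backup" is.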
And it is only because of that focus on extensibility and interoperability that today we can run WebAssembly workloads in Kubernetes so seamlessly. SpinKube is an open source stack of projects for running WebAssembly applications. A core piece of the stack is a containerd shim. I remember when containerd was donated to the CNCF in 2017. That took work and collaboration from several companies, most notably Docker, to make happen. SpinKube also depends on CRDs and operators. I recall seeing one of the early demos of scaffolding an operator and a CRD in a SIG meeting from Phillip Wittrock, who went on to work on Kubebuilder in a Kubernetes SIG. Kubebuilder is a key piece of SpinKube’s Spin operator development. As I reflect on the last decade, I appreciate every contribution even more deeply.
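The result is that a Wasm workload ends up looking like any other custom resource. A sketch modeled on SpinKube’s public examples - the image is a placeholder, and the exact API version and field names may differ between SpinKube releases:

```yaml
apiVersion: core.spinkube.dev/v1alpha1   # SpinKube's API group (may change across releases)
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: "ghcr.io/example/hello-spin:v0.1.0"  # placeholder Wasm app image
  replicas: 2
  executor: containerd-shim-spin   # the containerd shim from the stack described above
```

The Spin operator watches these objects and creates the underlying deployments and services - CRDs, an operator, and a containerd shim working together, each one a product of the extensibility work described above.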
Today, we celebrate the 10th anniversary of Kubernetes. When I look around, I’m really proud to have had the privilege to participate in this space. I’m especially thankful for the focus on collaboration and community and for the technology that remains aflame a decade later.
Michelle Dhanani
- Co-founder, Kubernetes SIG Apps
- Co-chair KubeCon/CloudNativeCon 2016-2017
- Member, Kubernetes Steering Committee 2018-2019
- Developer Representative, CNCF Governing Board 2017-2021
- Member, CNCF Technical Oversight Committee 2019-2021
- Emeritus Maintainer, Helm and Draft
- Maintainer, Spin and SpinKube