I recently talked to a Head of Engineering at a 700-person e-commerce company who just started her new role. While hiring and planning for the fourth quarter are high on her ever-growing to-do list, she is also looking into ways to reduce her organization's management overhead and cloud spend.
To that end, she is considering adopting Kubernetes in production for her organization. Some may be surprised to see “Kubernetes” and “cost savings” in the same sentence, but she and many other engineering leaders are seriously evaluating Kubernetes for this use case. A 2021 CNCF survey reported that 96% of organizations are either using or evaluating Kubernetes as a means to reduce cloud spending.
With Kubernetes adoption on the rise across organizations, it’s evident that it’s an extremely popular and useful open-source project. But, as with most projects, there are advantages and disadvantages you should take into consideration.
Over the last year, I have spoken with dozens of engineering leaders to learn more about their experiences with the popular container orchestration software.
Here is what I have learned.
Before we dive into the pros and cons of using Kubernetes, we should briefly explain why Kubernetes is so popular nowadays.
Kubernetes, also known as K8s, recently turned eight years old. In those eight years, it has changed the modern engineering landscape. Kubernetes is an open-source industry standard for delivering containerized applications with an inherent microservices architecture.
Before Kubernetes, everything was bespoke. Engineers managed distributed systems using an assortment of one-off cloud consoles, bash scripts, and Python scripts. If you have wrangled distributed systems at any point in the past decade, you have likely encountered the quirks and frustrations that come along with doing so.
Kubernetes brought standardization to distributed systems, something that was desperately needed. Developers finally had a way to describe deployment logic, configuration management, networking, RBAC policies, and ACL rules that was portable across on-prem setups and any cloud provider.
Thanks to Kubernetes, you can declaratively specify what your infrastructure should look like in a set of YAML files. You can then package those files up and move them between clusters, providing some desperately needed portability.
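As a taste of what "declarative" means in practice, here is a minimal Deployment manifest. The names and image are placeholders; the point is that you describe the desired end state rather than the steps to reach it:

```yaml
# deployment.yaml -- a minimal, illustrative example; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state: three running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` asks Kubernetes to converge on three running replicas, and the same file works against any conformant cluster, whether on-prem or in any cloud.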
The most obvious argument against using Kubernetes is that organizations often don’t need the high availability that Kubernetes provides and don’t have multiple containers to deploy. While these are valid, they are generally surface-level concerns.
The most common systemic argument against using Kubernetes is that it’s overly complex to get up and running. For starters, the learning curve is steep. There is also an overwhelming amount of content covering a variety of topics, often leaving developers puzzled as to where to start when learning Kubernetes.
My Kubernetes journey started at PlanetScale, where I was working on early versions of their database as a service that deployed on Kubernetes. I found myself constantly spinning up clusters, getting familiar with common debugging workflows, and testing locally with minikube and kind. I carried the pager for our production clusters and learned how to debug issues from Kubernetes experts. In addition to Ops, I had to directly interface with the K8s API when developing, which provided some useful context and depth to my understanding of the technology.
Where does this leave the aspiring student of Kubernetes? Everyone learns in different ways, but I’ve found that there are few substitutes for hands-on experience here. This is a genuine concern, and a lack of access to real experience is sometimes (but not always) a deal breaker.
The bright side is that every engineering leader I spoke with emphasized that you will consistently use only about ten resources within the Kubernetes API. Over time, you’ll slowly learn how they all work and fit together. Here are some resources to explore if you aren’t sure where to begin:
- Get minikube up and running on your machine
- Convert a docker-compose file to a running Kubernetes service
- Watch Kunal Kushwaha’s Kubernetes tutorial for beginners
- Read an article about how to write YAML
- Find specific articles in this list of written resources
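As a taste of the second item on that list, a single service in a docker-compose file maps fairly directly onto a Deployment plus a Service. This sketch is illustrative; the image and names are placeholders:

```yaml
# Rough Kubernetes equivalent of a docker-compose service:
#   services:
#     api:
#       image: my-api:latest
#       ports: ["8080:8080"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: my-api:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service           # gives the pods a stable in-cluster address
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8080
      targetPort: 8080
```

Working through a translation like this by hand is one of the fastest ways to internalize how the handful of core Kubernetes resources fit together.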
At the end of the day, if you cannot find engineers who have experience running Kubernetes in production, setting it up and managing it yourself is not recommended.
Another common argument engineering leaders made against adopting Kubernetes is that smaller engineering teams have a hard time deploying and managing clusters. Most of the organizations they have seen adopt Kubernetes successfully have entire DevOps teams dedicated to dealing with the resulting complexity. Smaller engineering teams do not have this luxury, and the experience shortage discussed earlier becomes increasingly relevant here.
For Kubernetes to not be a hassle for your engineering team, you should at least have a team dedicated to managing clusters, or at a bare minimum, a dedicated engineer with years of experience with Kubernetes.
However, with budgets decreasing for companies worldwide, the opportunity cost for hiring a person or team to manage Kubernetes may be prohibitively high, leading many to stick to the more comfortable world of VMs.
Despite these drawbacks, Kubernetes has a lot going for it. For starters, it is open source, has a strong community, and has attracted excellent contributions from vendors across the cloud ecosystem. On top of that, Kubernetes allows you to run your application on multiple cloud providers or a combination of on-premises and cloud, letting you avoid vendor lock-in.
Every engineering leader I spoke with agreed that Kubernetes is an extremely powerful tool, and developers at companies of all sizes can immediately reap the benefits of using it for their projects.
Kubernetes has built-in self-healing for all of your running containers and ships with readiness and liveness checks. When containers crash or end up in a bad state, Kubernetes often restores things automatically, or surfaces the problem through well-worn debugging workflows.
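Readiness and liveness checks are configured per container. In this hedged sketch, the `/healthz` and `/ready` endpoints and the ports are assumptions standing in for whatever health endpoints your application exposes:

```yaml
# Illustrative probe configuration; endpoints and ports are placeholders
containers:
  - name: web
    image: nginx:1.25
    livenessProbe:         # kubelet restarts the container if this fails repeatedly
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:        # traffic is withheld until this check passes
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5
```

The distinction matters: a failing liveness probe triggers a restart, while a failing readiness probe simply removes the pod from service endpoints until it recovers.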
If you are looking to save money while running infrastructure at scale, Kubernetes can make sense for your organization. Kubernetes has auto-scaling capabilities, allowing organizations to scale the resources they are using up and down in real time.
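The most common form of this is the HorizontalPodAutoscaler, which adjusts replica counts based on observed load. A minimal sketch, assuming a Deployment named `web` and CPU-based scaling:

```yaml
# Illustrative HorizontalPodAutoscaler; the target Deployment name is a placeholder
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

Paired with a cluster autoscaler that adds and removes nodes, this is how organizations avoid paying for peak capacity around the clock.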
Additionally, if you are moving from VMs to containers, you will reap the lower maintenance costs of containers, as consistent deployments and portability reduce friction within your organization.
Thanks to containerization, the plug-and-play nature of open-source software is more powerful than it has ever been. Adding software to your Kubernetes cluster can be as simple as copying some configuration and running a few commands in your terminal. Open-source alternatives to managed software can net you meaningful cost savings in the long run, and colocated services gain networking benefits as well.
Moving from disparate managed services to running open-source software on Kubernetes is a lucrative cost-saving opportunity, and it may be one of the biggest benefits of Kubernetes as your business begins to mature.
Ultimately, we believe that Kubernetes is oftentimes worth the investment (in terms of engineering resources) for most organizations.
If you have the right engineers, enough time, and the resources to effectively run and maintain Kubernetes, then your organization is likely at a point where Kubernetes makes sense. These are not trivial prerequisites, but if you can afford to hire a larger engineering team, you are likely at a point where your users depend on your product operating at peak performance around the clock.
However, if you are looking to deploy your application into production and are short on either time or engineering resources, it might be better to steer clear of Kubernetes for the time being.
In particular, if you are unfamiliar with Kubernetes and short on time, hacking away at it is a bad idea: a misconfigured cluster can expose sensitive information and invite data exfiltration and other attacks.
However, if you’re considering deploying open-source applications onto Kubernetes, it has never been easier to do so than with Plural.
It requires minimal understanding of Kubernetes to deploy and manage your resources, which is unique for the ecosystem.
This post was co-written by Abhi Vaidyanatha, our Head of Community.