
Carlos M. for Scalyr

Originally published at scalyr.com

Kubernetes: The Next VMware?

It's been almost 10 years since VMware started selling ESX version 4.0. That release set VMware on the path to dominating more than 75% of the virtualization market in 2017. Gartner considers this market mature, since most of its revenue comes from maintenance rather than new licensing. Many companies have consolidated their workloads with virtualization, but there are new problems to solve.

Delivering, testing, deploying, and scaling applications are among these challenges. Teams that implement microservices also need to automate as much as possible to keep them manageable. Kubernetes, Marathon, Swarm, and Nomad compose a new breed of tools that respond to these needs through orchestration. Whether you host on-premises or in the cloud, consider these tools to help your business deliver code to production more quickly.

Companies evolving towards data-driven decision-making often implement machine learning and business intelligence tools, looking for an edge in their markets. As information technology professionals, it's our responsibility to make sure our businesses select tools that

  • perform in a reliable way;
  • allow quick deployment of new features;
  • scale properly in response to user demand; and
  • deploy new software in a safe and reproducible way.

In this article, I explain why I think Kubernetes is a market leader in the orchestration space and how it might steal VMware's thunder in the not-so-distant future.

Kubernetes Is Open-Source

The Kubernetes adoption rate is considerably higher than that of other tools, partly because it's distributed by the CNCF under Apache License 2.0. Choosing an open-source tool comes with the following benefits:

  • You will have to train your staff in implementation and maintenance, but you will incur zero expenses in commercial licenses. OpEx-oriented managers love this.
  • Developer contributions accelerate its evolution. Open-source tools often ship new features on a three-to-four-month cycle, a pace most commercial software doesn't match.
  • Documentation, tutorials, and how-tos are abundant all over the internet. This creates a healthy and diverse ecosystem of knowledge built collectively.
  • Providers and software vendors offer their own distributions free of charge. This lets organizations adopt a level of complexity that matches their IT staff's skills.

Containers Are More Efficient Than Consolidation

When we consolidate workloads into virtual machines, we're still emulating sets of devices that we mostly don't use. Duplicating operating system files across guests is another prevalent inefficiency. Using container runtimes such as Docker cuts that overhead and dedicates more CPU cycles to your business applications.

Dockerizing applications has proven effective in this race to optimize hardware usage. Kubernetes provides a way to manage Docker instances at scale instead of tucking them inside virtual machines. If your workloads are Linux-exclusive, you also have the option to run Kubernetes on bare-metal servers. It's not the most popular implementation, but it's a way to maximize hardware efficiency.

Containers Allow Teams to Focus on Delivery, Not Management

Priorities have changed when it comes to the application lifecycle. Configuration management is a holdover from the era of consolidation: keeping fleets of long-lived virtual machines in line required constant effort. Spending that energy is largely unnecessary in the era of containers, because a Docker instance's lifetime is so short that some instances run for just a few hours.

That's why you should make configuration part of your continuous deployment. Test it automatically in your pipeline and promote your configured artifacts to production once they pass all your compliance tests. This helps your organization focus its efforts on delivering more stable software with more features, and on catching errors earlier in the development process.
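As a sketch of that idea, with names invented purely for illustration, application configuration can live beside the application definition as a Kubernetes ConfigMap that your pipeline versions, tests, and applies like any other artifact:

  # app-config.yaml -- configuration kept in version control with the app
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: web-frontend-config   # hypothetical name for illustration
  data:
    LOG_LEVEL: "info"           # values the pipeline can validate before release
    CACHE_TTL_SECONDS: "300"

Because the file is plain text, the same compliance tests that gate your code can gate its configuration.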

Kubernetes Pods Let Teams Build With the Same Lego Pieces

Kubernetes applications run in the form of Pods. A Pod may contain one or more containers, and it lets you group containers that are closely related. You use a Pod template to specify the storage, binaries, and network resources its containers use, and you define those templates in YAML text files.
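For a concrete picture, here is a minimal Pod template; the names and image are hypothetical, but the shape is what a real definition looks like:

  # pod.yaml -- one Pod grouping an application container with its storage
  apiVersion: v1
  kind: Pod
  metadata:
    name: web-frontend          # hypothetical Pod name
    labels:
      app: web-frontend
  spec:
    containers:
      - name: nginx
        image: nginx:1.15       # the container image the Pod runs
        ports:
          - containerPort: 80   # port the container listens on
        volumeMounts:
          - name: static-content
            mountPath: /usr/share/nginx/html
    volumes:
      - name: static-content    # storage shared with the container
        emptyDir: {}

The whole definition fits in one file that can be applied with kubectl apply -f pod.yaml.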

Share these YAML definitions with your developers so they can run the same Pods with minikube on their desktops. You can also use them in your CI/CD pipeline. That's how Pods enable you to provide consistency across your development, testing, integration, and production environments.

Standardizing how your teams create and test software will help you minimize the changes you need to implement at the end of your software development cycle. Operators will also have more confidence when deploying these applications to their production environments. For VMware users, this should sound similar to vApps packaging.

Helm Lets You Save Time by Reusing Community-Packaged Applications

Helm packages in Kubernetes are roughly the parallel of RPMs on Red Hat or .deb packages on Debian/Ubuntu. A package, called a chart, combines a YAML definition with templates that describe the resources to deploy. A server (Tiller) and a client (Helm) interact to deploy releases of your charts to a Kubernetes cluster. Charts require semantic versioning; couple this with version control and you get rollback capabilities.
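As a rough sketch, with the chart name and versions made up for illustration, a Helm 2-era chart starts with a Chart.yaml definition, and its semantic version is what makes rollbacks traceable:

  # Chart.yaml -- the definition Helm and Tiller use to track releases
  apiVersion: v1                # chart format used in the Helm 2 era
  name: web-frontend            # hypothetical chart name
  version: 1.2.0                # semantic version; bump it on every change
  appVersion: "2.4"             # version of the packaged application
  description: Packages the web frontend and its Kubernetes templates

The templates for Deployments, Services, and other resources live in the chart's templates/ directory, and helm rollback <release> <revision> returns a release to an earlier chart version.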

Most teams have similar needs: "Deploy this application, keep this database up and running, cache content with this app, and load-balance this set of connections." Free and open-source software (FOSS) is usually part of these teams' processes. Helm packages exist so you avoid rework when implementing FOSS as Kubernetes applications.
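As a hedged example of that reuse, with chart versions that are only illustrative, a Helm 2-era chart can declare community-packaged dependencies in a requirements.yaml file instead of reimplementing them:

  # requirements.yaml -- pull in community-maintained charts as dependencies
  dependencies:
    - name: postgresql          # "keep this database up and running"
      version: "3.18.3"         # illustrative chart version
      repository: "https://kubernetes-charts.storage.googleapis.com/"
    - name: redis               # "cache content with this app"
      version: "6.4.3"          # illustrative chart version
      repository: "https://kubernetes-charts.storage.googleapis.com/"

Running helm dependency update fetches these charts, so your own chart only has to describe what is unique to your application.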

You're now probably thinking, "How do I make Pods work with Helm packages?" There's a whole set of best practices to consider when you start modeling your infrastructure with Helm packages. Make sure to review them before making architecture deployment decisions.

Kubernetes Deployments—Safe Software Delivery Included

Kubernetes Deployment manifests describe how Pods are kept running under a set of rules. You define these rules in YAML files, which let you describe an environment with these basic abstractions:

  • Containers describe what applications run in your deployment.
  • Environment variables set values that are required to run your code.
  • Specs allow you to define how many replicas of each Pod should run at a given time.

Kubernetes Services, on the other hand, define the ports that are available to external entities. That's how you expose your application to the rest of the world.
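Putting those pieces together, here is a minimal sketch in which the names, image, and replica count are hypothetical: a Deployment that keeps three replicas running with an environment variable set, plus a Service that exposes them:

  # web-frontend.yaml -- a Deployment and the Service that exposes it
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web-frontend
  spec:
    replicas: 3                 # how many Pod replicas to keep running
    selector:
      matchLabels:
        app: web-frontend
    template:
      metadata:
        labels:
          app: web-frontend
      spec:
        containers:
          - name: nginx
            image: nginx:1.15   # the application container
            env:
              - name: LOG_LEVEL # environment variable the code needs
                value: "info"
            ports:
              - containerPort: 80
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: web-frontend
  spec:
    type: LoadBalancer          # expose the Deployment outside the cluster
    selector:
      app: web-frontend         # route traffic to the Pods above
    ports:
      - port: 80
        targetPort: 80

The Service's selector is what ties the exposed port back to the Pods the Deployment manages.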

Now you can build an environment by providing your containers with variables, binaries, computing power, and network access. Everything is defined in plain text files, which should be stored in a central repository under version control. This is the Kubernetes formula. By using it, you can provide consistent environments to all of your teams, and your software can reach production through deployment methods like blue-green without extra work.

Kubernetes Orchestration Enables You to Implement Microservices

Teams migrating software from monolith architectures to microservices should pay special attention to this point. Stateless and stateful applications work under different infrastructure assumptions. Make sure your developers are aware that some containers will fail and some Pods will die. If your organization is in the process of transforming to DevOps, it's vital for developers to have full visibility into your production environment.

When designing software to run on Kubernetes, look at the relevant principles from The Twelve-Factor App: keep processes stateless, store config in the environment, treat processes as disposable, and keep development and production as similar as possible.

Once you get on the microservices bandwagon, you need to run it on stable infrastructure, with a community where you can contribute or get support. Kubernetes is proving to be a FOSS platform that may grow to be as dependable and powerful as orchestration tools from commercial vendors like VMware.

"Code, build, and deploy often" is the new gospel of IT. Kubernetes has built-in features to help teams make all this happen in a safe way. While I recognize that it will be some time until it reaches 75% industry adoption, Kubernetes is already a leader in the field of container orchestration. It's a trending technology that will consolidate as a de facto industry standard, just like VMWare did in its time with virtualization.
