
5 Ways to Access Kubernetes Clusters

By Hrittik Roy (originally published at loft.sh)

Kubernetes stands out as one of the most popular container orchestration tools currently available, with 5.6 million developers using the orchestrator by the end of 2021, a 67% increase from the previous year.

A Kubernetes cluster consists of all the components (control plane, nodes, objects, and more) that orchestrate your containers for reliability, availability, and scalability. Naturally, to run and manage your resources and containers, you need to be able to access the Kubernetes cluster. This brings us to a critical question: How should you and the teams across your organization access your cluster?

In this post, you will get to know a variety of popular methods for accessing your Kubernetes cluster. This list has been compiled based on the popularity and stability of the tools, among other factors. So without further ado, let’s take a look at five ways to access a Kubernetes cluster and explore the benefits and drawbacks for each.

Client-Side Tools like kubectl or Helm

If you’ve ever looked up how to access a Kubernetes cluster, one of the top results will point you to a client-side tool called kubectl. This Kubernetes command-line tool talks to the Kubernetes API and executes your commands. It’s straightforward to install on your machine and only needs a kubeconfig file, which contains the cluster endpoint and the credentials (such as client certificates) used to authenticate the communication between your kubectl utility and the cluster.
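For example, a quick sanity check with kubectl might look like the following minimal sketch, assuming a kubeconfig has already been placed at ~/.kube/config:

# Point kubectl at a kubeconfig file (kubectl uses ~/.kube/config by default)
export KUBECONFIG=$HOME/.kube/config

# Verify that the credentials in the kubeconfig can reach the cluster
kubectl cluster-info

# List nodes and workloads to confirm access
kubectl get nodes
kubectl get pods --all-namespaces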

Helm is another popular client-side tool that helps you easily create and deploy complex Kubernetes applications. It makes deployments reproducible with the help of charts, packaged templates that describe the resources the Kubernetes API should deploy.
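As a minimal sketch of that workflow (the repository, chart, and release names here are just examples), installing an application from a public chart repository looks like this:

# Add a public chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install (or upgrade) a release; Helm renders the chart templates and
# sends the resulting resources to the Kubernetes API
helm upgrade --install my-nginx bitnami/nginx --namespace web --create-namespace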

Easy to Install

Being easy to install and use is a benefit often overlooked in the complicated world of distributed systems. These client-side tools give you an effortless way to manage your Kubernetes clusters and get the resources you want, though this convenience comes at the cost of having to know the commands and the various Kubernetes objects (more on that below).

Powerful

The ability to talk directly to your cluster through the Kubernetes API comes in handy, as this gives you an edge over other tools when diagnosing your cluster. With a few commands, you can analyze and pinpoint specific issues or deploy (or destroy) the entire infrastructure.
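A few common diagnostic and bulk operations, sketched with hypothetical namespace, pod, and path names:

# Inspect workloads and drill into a failing one
kubectl get deployments -n my-namespace
kubectl describe pod my-pod -n my-namespace      # events, restarts, scheduling issues
kubectl logs my-pod -n my-namespace --previous   # logs from the last crashed container

# Deploy or tear down an entire set of resources from manifest files
kubectl apply -f ./manifests/
kubectl delete -f ./manifests/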

Steep Learning Curve

However, to use these tools efficiently, you must have a working knowledge of Kubernetes architecture and objects. Checking the status of a single deployment shouldn’t require detailed Kubernetes knowledge, but with these client-side tools it does, because they offer no abstraction over the underlying objects.

For example, you won’t be able to deploy infrastructure without knowing what Deployments or pods are; even the minimal manifest sketched below assumes that vocabulary. You can use Helm to deploy your infrastructure instead, but then you’ll need to understand templating and climb that learning curve.
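A minimal Deployment, applied straight from the command line (the names and image are purely illustrative):

# Even this small example assumes you know what Deployments, pods,
# labels, and selectors are
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF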

Complicated Access Sharing

Another disadvantage is that you need to configure RBAC to limit what the certificate behind each kubeconfig is allowed to do with these tools. Setting up RBAC is not very straightforward and requires you to know how Kubernetes works. Next, you need to securely share the kubeconfig file with the user who wants to use the client-side tools.

In the wrong hands, privileges offered by your kubeconfig file can put your cluster at risk. Furthermore, managing the certificates becomes quite complex if you are dealing with more than one cluster.
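As a rough sketch, restricting a user to read-only access in a single namespace before sharing credentials could look like this (the namespace and username are hypothetical):

# Create a namespaced role that only allows read access to pods
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n team-a

# Bind the role to the user encoded in the certificate you are about to share
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane -n team-a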

Manual and Repetitive

Last but not least, these tools are not an efficient way to deploy an application that changes frequently. Every update has to be applied by hand, which is exactly where a continuous process becomes necessary.

Manually updating your manifest every time there’s some new change in the application can quickly become overwhelming. It’s highly inefficient, as it’s prone to errors and is not scalable—there’s a limit to how many times a day you can manually update your manifest. For obvious reasons, automation is preferable.

CI/CD Tools like Jenkins

If you have an application running on a Kubernetes cluster and need it to update as soon as new changes are pushed to your code, you need a CI/CD pipeline. CI stands for continuous integration and focuses on building and testing your latest commits and pushing the resulting artifacts, such as container images, to a registry like Docker Hub. CD stands for continuous delivery/deployment, where your application on Kubernetes is automatically updated with what the previous stage produced.

Jenkins is an open source automation server that supports continuous integration and delivery by automating the building, testing, and deploying of software. It’s one of the most popular choices when dealing with CD pipelines.
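To make that division of labor concrete, here is a hedged sketch of the shell steps a typical deployment stage might run; the registry URL, Deployment name, and container name are hypothetical, and GIT_COMMIT stands in for whatever commit identifier your CI system exposes:

# Build and publish a new image tagged with the commit SHA
docker build -t registry.example.com/my-app:${GIT_COMMIT} .
docker push registry.example.com/my-app:${GIT_COMMIT}

# Roll the Deployment over to the new image and wait for the rollout to finish
kubectl set image deployment/my-app app=registry.example.com/my-app:${GIT_COMMIT}
kubectl rollout status deployment/my-app --timeout=120s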

More Features

With CD tools, one major benefit is that you don’t need to manually access the cluster and deploy changes by updating manifests, like you do with client-side tools. This makes the process continuous, and your features reach the market faster as you don’t need to make manual updates to get the job done.

Less Time

The other benefit of using CI/CD tools like Jenkins is that they reduce bottlenecks and create an efficient system that saves time. As a developer, you can just log in to Jenkins and find what you need—you don’t need to access the cluster and look into the manifest to figure out the version of your application that is deployed.

Also, updates get deployed quickly because your pipeline is automatic and doesn’t require any manual intervention from your team.

Complicated Configuration

However, setting up your CI/CD tool to access your cluster isn’t that straightforward. First, you need to install and configure the tooling your pipeline deploys with, such as kubectl. Then you need to grant the pipeline access to your cluster so it can reach the cluster’s resources, as in the sketch below.
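One common pattern is to give the pipeline its own ServiceAccount with scoped permissions instead of sharing a personal kubeconfig. A rough sketch (namespace and account names are hypothetical; kubectl create token requires Kubernetes 1.24 or newer):

# A dedicated ServiceAccount for the pipeline, limited to one namespace
kubectl create serviceaccount ci-deployer -n staging

# Allow it to manage resources in that namespace only
kubectl create rolebinding ci-deployer-edit --clusterrole=edit --serviceaccount=staging:ci-deployer -n staging

# Issue a short-lived token for the pipeline to authenticate with
kubectl create token ci-deployer -n staging --duration=1h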

This process quickly becomes complicated, as you need to know Kubernetes well in order to configure the pipeline correctly. Moreover, you still end up switching between tools to get your application deployed successfully.

Vast Security Scope

As the configuration needs many access layers (cluster access, cloud access tokens, and CI/CD manifests) to set up your pipelines, it also brings a vast security scope into the picture. You need to manage security carefully, as there are many points of contact that can compromise your applications if not monitored well.

Limited Deployment Status

Finally, Jenkins applies and updates your manifests using kubectl, which doesn’t give you any feedback as to whether the deployment is working or whether it has crashed or run into any number of errors. Moreover, if you change something directly on your cluster, it won’t be reflected in your CD tool, which can cause confusion and make diagnosis harder. You don’t want to access your cluster without any knowledge of what’s running on it. A successful update should be reflected back to the pipeline, but traditional CI/CD tools fail to meet this critical requirement.

GitOps Tools like Argo CD

CD tools like Argo CD were built specifically for Kubernetes and follow GitOps principles. Argo CD treats your repository as a single source of truth and follows a pull-based approach to pull changes from your repository.

“The core idea of GitOps is having a Git repository that always contains declarative descriptions of the infrastructure currently desired in the production environment and an automated process to make the production environment match the described state in the repository.” –GitOps Tech
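In practice, pointing Argo CD at a repository can be as simple as registering an application with the argocd CLI. A hedged sketch, with a hypothetical repository URL, path, and namespace:

# Register an application that Argo CD should keep in sync with Git
argocd app create my-app \
  --repo https://github.com/example-org/my-app-manifests.git \
  --path overlays/production \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace production \
  --sync-policy automated

# Trigger a sync manually and inspect the result
argocd app sync my-app
argocd app get my-app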

Single Source of Truth

If you have ever logged into your Kubernetes cluster and been confused about which application is running, GitOps ensures you never face that issue again. The automated CD follows the manifests in your source control and updates your cluster environment according to the changes you commit to those manifests. At any point, you know exactly what resources are running inside the cluster.

This approach gives everyone on your team an audit trail, since every commit that changes the state of the cluster is recorded in source control, and it makes diagnosis easy whenever the need arises.

Agile

Rollbacks happen when a sudden change makes your application go down and you need to restore the previous state. With other methods discussed in this post, this can be difficult, but with GitOps, a simple git revert does the work for you.
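Assuming the manifests live in a repository that Argo CD watches on the main branch, a rollback is just another commit:

# Find the commit that introduced the bad change
git log --oneline

# Revert it; Argo CD detects the new commit and syncs the cluster back
git revert <bad-commit-sha>
git push origin main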

Complex Workflows

On the other hand, your deployment manifests are stored in your repository, and thus, your number of Git repositories increases with every new application. Managing manifests at scale can quickly become a hectic task, as you need to connect the sync agents on different clusters.

“In a complex enterprise environment, a team I worked with spent more than 30% of the development time building automation for provisioning GitOps repositories.” –Container Solutions

Tricky Secrets Management

Furthermore, in a GitOps workflow, secrets are stored in the repository, which produces some risks. Your secrets stay in the Git history forever, and as your repository grows, it becomes tricky to manage and maintain them. If you need to change a secret, it may take a lot of time to find it and replace it.
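The core of the risk is that a plain Kubernetes Secret committed to Git is only base64-encoded, not encrypted, so anyone with read access to the repository (or its history) can recover it. A small illustration with a made-up value:

# data in a Secret manifest, e.g. password: c3VwZXItc2VjcmV0LXBhc3N3b3Jk,
# is trivially reversible:
echo 'c3VwZXItc2VjcmV0LXBhc3N3b3Jk' | base64 --decode
# super-secret-password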

Developer Tools like DevSpace

Developer tools, like DevSpace, are designed to help you develop your software faster and without the complexity of the other access methods. These tools speed up adoption with a rich feature set. In short, they make a developer’s life simple without much bother about the underlying infrastructure.

Efficient Process

Developer tools help you quickly orchestrate the deployment for changes in different environments. You build and develop your application inside Kubernetes instead of porting the application to cloud-native form after it has already been created.

For DevSpace specifically, the workflow knowledge is stored in devspace.yaml, a declarative configuration file that describes how to build images, define dependencies, and deploy your project. This makes it easy for your teams to reproduce an environment and keep building the application.
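A rough sketch of what such a file can look like; the exact schema depends on your DevSpace version, and the image name below is hypothetical:

# Write a minimal devspace.yaml (DevSpace v5-style schema; adjust to your version)
cat > devspace.yaml <<'EOF'
version: v1beta11
images:
  app:
    image: registry.example.com/my-app        # image DevSpace builds and pushes
deployments:
  - name: my-app
    helm:
      componentChart: true                    # DevSpace's built-in component chart
      values:
        containers:
          - image: registry.example.com/my-app
EOF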

Simple Workflow

With DevSpace, you don’t need any working knowledge of or expertise with Kubernetes. DevSpace reads the configuration file and builds images while you focus on developing your application, with no more complicated manifests to maintain. None of the other approaches covered here offers that kind of abstraction.

The best part? You can run your changes on the local Kubernetes environment and don’t need to commit your changes to see if they work. Just run devspace dev and let the tool handle the rest.

Faster Iterations

Traditionally, you need to build a new image and then update your Kubernetes manifests just to see how the application behaves. With hot reloading, your containers are updated in real time as you code, using bidirectional file synchronization. This lets you see updates without rebuilding containers or images. If you want to publish the changes, it’s as simple as running devspace deploy.
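The typical loop, using the DevSpace CLI (the namespace name is just an example):

# Pick the namespace to develop against
devspace use namespace my-dev-namespace

# Start the dev loop: build, deploy, sync files, and stream logs
devspace dev

# When the change is ready, do a one-off deployment
devspace deploy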

Simple UI

Another benefit of DevSpace is its UI, which helps you with namespace inspection, log streaming, status monitoring, and interactive terminal sessions. The UI gives developers access to all the resources they need to complete their work, simplifying their day-to-day tasks.

CD Drawbacks

DevSpace provides you with an environment to rapidly develop and iterate through your code changes. However, DevSpace works on your local machine to synchronize changes, and if there are multiple teammates, you might need to manage conflicts between changes.

A good alternative would be to use GitOps tools, like Argo CD and Flux, to deploy your application after building and testing the application locally via DevSpace and pushing the changes to your remote repository.

Kubernetes Dashboard

Kubernetes Dashboard is the official web-based UI for Kubernetes, maintained as part of the Kubernetes project under the Cloud Native Computing Foundation (CNCF). You can deploy it on your cluster with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

You can access the dashboard, which is deployed as a pod, with kubectl proxy, or expose it through a Service of type NodePort or LoadBalancer, or behind an Ingress, per your requirements.
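For example, assuming the recommended manifest above was used (so the dashboard lives in the kubernetes-dashboard namespace), the proxy route looks like this; you’ll still need a ServiceAccount bearer token to sign in:

# Start a local proxy to the Kubernetes API server
kubectl proxy

# The dashboard is then reachable locally at:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/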

Powerful Interface

The GUI is very powerful and has features ranging from creating a simple deployment to managing your cluster. The dashboard is built for admins, and while it’s not a replacement for the other tools discussed in the post, the dashboard allows you to look into your application stats quickly without running various commands. This allows you to find the errors quickly when something goes wrong.

You can use it to deploy an application, but few teams take that approach, since it lacks the advanced features available in some of the previous methods. The monitoring side of the dashboard doesn’t provide granular control either, so open source tools like Prometheus with Grafana are widely used instead, as they were built explicitly for visualization and monitoring.

Access Views

The dashboard offers various views:

  1. Admin View
  2. Storage View
  3. Workload View
  4. Service and Discovery View
  5. Config View

These views come in very handy when you need a user to access a specific part of the application while following the principle of least privilege.

Need for Configuration

Installing the dashboard takes a single command, but you need time to set up access and make the UI genuinely useful. The process is involved, as you have to retrieve tokens from secrets and configure the dashboard with a variety of roles. Next, you need to deploy a metrics integration so the dashboard can visualize graphs of CPU usage, workload status, and so forth.

Security

Finally, exposing the dashboard to the public internet via a LoadBalancer Service might not be a safe option. The exposed port becomes a target for attackers if there’s no layer of protection securing your dashboard. Make sure you have RBAC configured for the service account used to access the cluster, and don’t grant that service account cluster-admin privileges, or your deployments and cluster will be at risk of attack.
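A hedged sketch of a safer default: a dedicated login account bound to the built-in, read-only view ClusterRole rather than cluster-admin (the account name is hypothetical):

# A service account for dashboard logins with read-only access
kubectl create serviceaccount dashboard-viewer -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-viewer --clusterrole=view --serviceaccount=kubernetes-dashboard:dashboard-viewer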

Another option is to limit access by IP, configuring virtual networks so that only specific addresses can reach the dashboard. If you want to read more, look up how Tesla’s clusters were compromised and used for crypto mining because of a publicly exposed dashboard.

Conclusion

While each access method has some drawbacks, they all come with a variety of benefits. Depending upon the size and deployment strategy of your organization, you can evaluate the options to determine the best one for your own use case.

Some tools make your deployment simple, some make your testing simple, and some give you a high-level overview of everything happening inside your cluster. Many organizations choose a couple of access methods to get the best results. Whether you go with one tool or a combination of a few, be sure to look for the options that make the most sense for your organization, paying attention to factors like ease of use and security.

*Photo by Tom Fisk from Pexels*
