
Lukas Gentele for Loft Labs, Inc.

Originally published at loft.sh

How Virtual Kubernetes Clusters Can Speed Up Your Local Development

By Fabian Kramm

Oh hey, a blog post about virtual clusters again. Maybe you have already heard of those in the context of multi-tenancy, or even jokingly mentioned to someone that some crazy folks are promoting Kubernetes inside Kubernetes.

So now you are probably thinking: why on earth should a developer who already struggles enough with Kubernetes itself also want to deal with virtual clusters? The answer might surprise you, but I believe virtual clusters are actually a lot easier to handle than separate physical ones, and they can have quite a few advantages over local k3d, KinD or minikube instances.

If you work regularly with Kubernetes, you probably know the problem: you want to try out a new application, switch to another project to work on, or you didn't use your local Kubernetes cluster for a while and forgot what was deployed inside it.

Since working with a fresh, empty cluster is much easier than reusing an existing one, you just reset the whole thing. For me, this happens quite a lot. I reset my local docker-desktop instance multiple times a day, and sometimes I want to work on multiple projects at the same time that might conflict because of their CRDs and operator dependencies (usually they don't, but who has time to actually figure that out?).

KinD, k3d and minikube to the rescue?

Before you tell me that I'm doing it awfully wrong and should use a separate KinD, k3d or minikube cluster per project instead of resetting the docker-desktop instance over and over, I need to let you know that this approach also has its problems. Don't get me wrong, I love KinD, k3d and minikube (and all the other super tiny Kubernetes distros). They brought me to Kubernetes and still make it a breeze to get started. To be honest, without them, probably most CNCF project pipelines would be as useful as most of my hobby projects. However, if you regularly reset those clusters or even run several of them at the same time, you will have a hard time fighting disk space and resource overhead in your local Docker installation (shout out to docker system prune).
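
If you want to see how much space those throwaway clusters are eating up, Docker itself can tell you. This is nothing vcluster-specific, just the standard Docker CLI:

$ docker system df      # show disk usage of images, containers and volumes
$ docker system prune   # reclaim space from stopped containers, unused networks and dangling images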

The problem stems from the way those tools create Kubernetes clusters. You may have noticed that when creating a new KinD, k3d or minikube (docker driver) cluster, they create a single node container that runs the whole Kubernetes cluster. In the case of minikube and KinD, this is a container with the vanilla Kubernetes binaries; in the case of k3d, it is, unsurprisingly, k3s. The node itself includes everything that is needed for a small Kubernetes setup, including a separate systemd, containerd and usually some other cluster tooling. While this works well, it also has a couple of disadvantages: you need to re-pull all your container images inside the new cluster, communication across your local clusters is often difficult, and there is quite a lot of overhead involved in running those clusters side by side.
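
To make that concrete, here is a small sketch with KinD (the cluster names are just examples): every cluster you create becomes its own long-running node container in your local Docker installation.

$ kind create cluster --name project-a
$ kind create cluster --name project-b

# each cluster shows up as a separate node container running its own
# kubelet, containerd and control plane
$ docker ps --filter "name=project-"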

So now you are telling me virtual clusters are the solution?

Obviously this blog post is about development with virtual clusters, so unsurprisingly yes, I do think that virtual Kubernetes clusters can be an improvement here. Let's take a look at what virtual Kubernetes clusters do differently than KinD, k3d and minikube to understand why they could be a good replacement.

The main difference is that a virtual cluster only replicates the Kubernetes control plane and not the node itself. It can't exist without a host cluster, so virtual clusters are never a complete replacement for a distribution like docker-desktop, KinD or k3d. They are rather a replacement for multiple instances of them. Think of a virtual cluster like a virtual machine: it also cannot exist without a physical one backing it. So instead of replicating a complete Kubernetes node with all its processes and underlying drivers like CNI or CRI, the virtual cluster reuses the nodes (or single node) of an existing Kubernetes cluster and only creates a tiny separate control plane for each virtual cluster.

This has the big advantage that you are now reusing many parts of the host cluster (the cluster where the virtual cluster is installed), such as the nodes, storage and network. So you can strip out most of the other processes needed to run a Kubernetes cluster, such as kubelet, kube-proxy, CNI & CRI drivers, containerd, systemd etc. Oh, and by the way, this also means you can reuse all the images already pulled on the host cluster as well. Another nice benefit is that accessing an application in another virtual cluster is super easy, as they share the same underlying network.
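
Jumping ahead a bit to the vcluster CLI we will install below, here is a rough sketch of what this looks like in practice (the names are just examples, and the --connect flag assumes a reasonably recent CLI version):

$ vcluster create team-a --connect=false
$ vcluster create team-b --connect=false

# from the host cluster's context, the workloads of both virtual clusters
# are just regular pods in their own namespaces, scheduled on the same
# node(s) and attached to the same pod network
$ kubectl get pods -n vcluster-team-a
$ kubectl get pods -n vcluster-team-b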

To make this happen, the virtual cluster distribution just reuses existing distributions like k3s, k0s or even the regular vanilla Kubernetes binaries to deploy the control plane. So if you thought k3s is small, try a virtual cluster that uses k3s and disables 90% of it. Besides the control plane, a small component called the syncer takes the workloads created within the purely virtual control plane and syncs them to the host cluster, which turns the virtual cluster into an actually usable cluster. This sounds very complicated, but in reality it is quite simple and works well.
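
For example, if I remember the CLI flags correctly, you can pick which distribution backs the virtual control plane at creation time (the exact flag values may differ between versions):

$ vcluster create my-vcluster --distro k0s   # use k0s instead of the default k3s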

Show me or I don't believe it

If I have piqued your interest, you are probably now thinking: this sounds nice, but I don't want a solution that is difficult to use, I just want to run a single simple command to create and delete a cluster, just like KinD or minikube do. Good news: in the newest v0.10.0 release of vcluster, which is fully open-source and the most popular virtual cluster implementation, we have simplified the handling of virtual clusters down to simple one-line commands.

So let's start by downloading the vcluster binary from the releases page or by following the tutorial in the docs.
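
On a Linux amd64 machine, grabbing the binary could look roughly like this (just a sketch; check the releases page for the exact asset name for your OS and architecture):

$ curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
$ chmod +x vcluster && sudo mv vcluster /usr/local/bin/
$ vcluster --help   # verify the binary works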

Make sure you have a local Kubernetes distribution already set up (such as docker-desktop, rancher-desktop, KinD, minikube or k3d) and then run the following command to create a new virtual cluster inside it:

$ vcluster create my-vcluster

Congrats, that's it, you just deployed your first virtual cluster. After a few seconds your vcluster should be ready to use:

$ kubectl get namespaces
NAME              STATUS   AGE
kube-system       Active   40s
default           Active   40s
kube-public       Active   40s
kube-node-lease   Active   40s
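
By the way, recent versions of vcluster create connect your kubectl context to the new virtual cluster automatically, which is why the command above already shows the virtual cluster's namespaces. If you switch contexts or come back later, you can reconnect with:

$ vcluster connect my-vcluster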

Now you can start using it and deploy an application inside the virtual cluster. For example, the infamous Kubernetes guestbook application:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/all-in-one/guestbook-all-in-one.yaml

Wait until the application has started:

$ kubectl wait --for=condition=ready pod -l app=guestbook 

Then run the following command to start port-forwarding to it:

$ kubectl port-forward service/frontend 9080:80

Then navigate to http://localhost:9080 in your browser to see the guestbook application in action. To jump back to the original cluster, use:

$ vcluster disconnect

What’s interesting is that vcluster will create all synced resources inside a single namespace in the host cluster. Only a handful of core resources are actually synced to the host cluster and most other resources stay purely inside the virtual cluster. To view the synced workloads of the vcluster, run the following command in the host cluster:

$ kubectl get pods -n vcluster-my-vcluster
NAME                                                     READY   STATUS    RESTARTS   AGE
coredns-76dd5485df-75jgf-x-kube-system-x-my-vcluster     1/1     Running   0          7m25s
frontend-f7d9c57d4-8wp44-x-default-x-my-vcluster         1/1     Running   0          7m13s
frontend-f7d9c57d4-d2trf-x-default-x-my-vcluster         1/1     Running   0          7m13s
frontend-f7d9c57d4-k6sb6-x-default-x-my-vcluster         1/1     Running   0          7m13s
my-vcluster-0                                            2/2     Running   0          7m35s
redis-master-857d99cc8-tr949-x-default-x-my-vcluster     1/1     Running   0          7m13s
redis-replica-6fd587fb56-gjht5-x-default-x-my-vcluster   1/1     Running   0          7m13s
redis-replica-6fd587fb56-mksx4-x-default-x-my-vcluster   1/1     Running   0          7m13s

You can see that workloads are renamed by vcluster to ensure that multiple pods with the same name in different virtual clusters don't conflict within the same host namespace. To learn more about which resources actually get synced to the host cluster, take a look at the docs.
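
If you want to double-check that higher-level objects really stay virtual, a quick sanity check from the host cluster's context (just a sketch, using the namespace from above) could look like this:

$ kubectl get deployments -n vcluster-my-vcluster   # no guestbook deployments here, they only exist inside the virtual cluster
$ kubectl get pods -n vcluster-my-vcluster          # the pods themselves are synced, with the renamed format shown above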

After you are done with the vcluster, clean up everything in the host cluster by running:

$ vcluster delete my-vcluster

 
And that's it: you started a virtual cluster, used it, and got rid of it again, all within a couple of minutes.

Let's wrap it up

A fresh Kubernetes cluster is always nicer to work with than an already existing one. Virtual clusters now make it quite easy to get that fresh-cluster experience not only in complex multi-tenancy environments, but also locally in your testing or development cluster.

Virtual clusters cannot exist on their own, without a host cluster, but they can be a good alternative to running multiple instances of KinD, k3d or minikube side by side. They are more lightweight, easier to access, and also faster than completely separate Kubernetes clusters. So if you are getting annoyed at resetting your local or CI/CD Kubernetes clusters constantly, try using a virtual cluster instead.
