
Tomer Figenblat

Originally published at developers.redhat.com

How to prevent computer overload with remote kind clusters

Kubernetes can require a lot of resources, which can overload a developer's laptop. This article shows you how to use a set of tools (kind, kubeconfig, and Podman or Docker) to spread your clusters across remote systems in support of your local development work.

Why I researched tools to prevent computer overload

Lately, I've been working a lot with Open Cluster Management, a community-driven project focused on multicluster and multi-cloud scenarios for Kubernetes applications.

The Open Cluster Management topology is hub-spoke based, calling for one hub cluster and at least one spoke cluster. That means that, throughout my work, I needed at least two clusters running simultaneously.

The quickest way to get two clusters up and running for development purposes is to use kind (Kubernetes in Docker). With kind, you can easily spin up Kubernetes clusters running in containers on your local computer.

One of my tasks included working with Prometheus, so I needed multiple clusters running the Prometheus operator plus the operators required for Open Cluster Management, including the Application Lifecycle Manager addon. The load eventually became too much for my local computer to handle, and it stopped cooperating with me.

To work around this bottleneck, I decided to spread my kind clusters across multiple computers around the office, import their kubeconfig files to my local computer, and continue working as if the clusters were local.

Each remote computer needs kind installed, as well as a container engine. To manage the containers, I used Podman, but Docker should do just as well.

For access to the remote computers, SSH is usually preferable, but any means of reaching them should suffice. After spinning up a kind cluster and exporting the relevant kubeconfig file, you will no longer need shell access to the remote computers; all you need is the designated port 6443 for access to the Kubernetes API server.
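Once a remote cluster is up, you can sanity-check that port 6443 is reachable from your local computer. A minimal sketch using bash's built-in /dev/tcp, so no extra tools are required (the host and port are the example values used throughout this article):

```shell
# Quick reachability check for the Kubernetes API server port.
host=192.168.1.102
port=6443
if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
  echo "port ${port} on ${host} is reachable"
else
  echo "port ${port} on ${host} is NOT reachable"
fi
```

If the port is not reachable, check the firewall on the remote computer before going further.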

How to set up kind remote clusters

The remote computer in the following examples has the IP address 192.168.1.102.

Assuming you have SSH access, connect to the remote computer as follows:

$ ssh 192.168.1.102

Create a custom kind cluster using the following command. Note the networking property, which is required to make your cluster's API server listen on the right address so you can reach it from your local computer on the same network:

$ kind create cluster --config=- << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: remote-cluster1
nodes:
- role: control-plane
networking:
  apiServerAddress: "192.168.1.102"
  apiServerPort: 6443
EOF

Now, still on the remote computer, use kind to export the cluster configuration into a file of your choice; the following command names the file remote_kube_config:

$ kind get kubeconfig --name remote-cluster1 > ~/remote_kube_config

Now go back to your local computer and copy your current configuration into a file that I'll call local_kube_config. This file can also serve as a backup:

$ cp ~/.kube/config ~/local_kube_config

Then run the following command to copy the remote configuration to your local computer over SSH:

$ scp 192.168.1.102:~/remote_kube_config ~

Now merge the two configuration files. Note that if you have many remote clusters, you can include multiple configuration files in the following command:

$ KUBECONFIG="${HOME}/local_kube_config:${HOME}/remote_kube_config" kubectl config view --flatten > ~/.kube/config
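If you have several remote clusters, the same merge works with any number of files: KUBECONFIG accepts a colon-separated list of paths. A sketch with a hypothetical third file, remote2_kube_config:

```shell
# Build the colon-separated list of kubeconfig files to merge
# (remote2_kube_config is a made-up second remote file):
configs="${HOME}/local_kube_config:${HOME}/remote_kube_config:${HOME}/remote2_kube_config"
echo "KUBECONFIG=$configs"
# Then flatten them all into one file, exactly as above:
#   KUBECONFIG="$configs" kubectl config view --flatten > ~/.kube/config
```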

Verify access to your remote kind cluster from your local computer:

$ kubectl get nodes --context kind-remote-cluster1

NAME                            STATUS   ROLES           AGE   VERSION
remote-cluster1-control-plane   Ready    control-plane   19m   v1.25.3

The output shows that you have access to the cluster.
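You can also list every context kubectl now knows about, to confirm the merge took effect (the guard simply skips the command when kubectl is not installed):

```shell
# Confirm the merge: list every context in the merged ~/.kube/config.
# The remote cluster should appear as kind-remote-cluster1.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config get-contexts
else
  echo "kubectl not found on PATH"
fi
```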

Bonus: Loading images to remote clusters

When you need to load images from your local storage to a local kind cluster, you can take advantage of the following command:

$ kind load docker-image <image-registry>/<image-owner>/<image-name>:<image-tag> --name local-cluster

But when working with remote clusters, this process gets tricky. In the previous section, you made kubectl aware of your remote cluster by merging its kubeconfig configuration, but your local instance of kind has no idea who remote-cluster1 is.

Images can be loaded only to local kind clusters. This means that to load an image into a remote cluster, you need to get the image into the remote computer's storage and load it from there.

To do that, first archive your image:

$ podman save <image-registry>/<image-owner>/<image-name>:<image-tag> -o archive-file-name

Then copy the archive to your remote computer:

$ scp archive-file-name 192.168.1.102:~

Connect using SSH to the remote computer:

$ ssh 192.168.1.102

And load the archive as an image to your kind cluster:

$ kind load image-archive archive-file-name --name remote-cluster1
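If you load images often, the three steps above can be wrapped in a small helper. A minimal sketch, assuming podman, scp, ssh, and the remote setup described in this article; the function name load_remote and the archive file name are made up:

```shell
# Hypothetical helper chaining the save/copy/load steps into one call.
load_remote() {
  local image="$1" host="$2" cluster="$3"
  local archive="image-archive.tar"
  podman save "$image" -o "$archive"       # archive the image locally
  scp "$archive" "$host:~/$archive"        # copy it to the remote computer
  ssh "$host" "kind load image-archive ~/$archive --name $cluster"
  rm "$archive"                            # clean up the local archive
}
# Example invocation (substitute your own values):
# load_remote <image-registry>/<image-owner>/<image-name>:<image-tag> 192.168.1.102 remote-cluster1
```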

Tools that simplify Kubernetes

For more information on tools that simplify work with Kubernetes, visit Red Hat's Developer Tools page. Please check out my next article, How to distribute workloads using Open Cluster Management. Feel free to comment below if you have questions. We welcome your feedback. Have a great day, and keep up the good work!
