Arthur

Setting Up Kubernetes Cluster with K3S

Deploying a high-availability Kubernetes cluster using k3s, with three server (master) nodes and embedded etcd as the storage backend, is a reliable way to ensure that your applications keep running even when one or more nodes fail. In this article, we will walk through the process of setting up such a cluster with detailed examples.

Prerequisites

Before you start deploying a k3s cluster with high availability, you will need to prepare the following:

  • Three or more servers running any Linux distribution (these servers will be used as Kubernetes nodes).
  • A user account with sudo privileges on each server.
  • A working network connection between the servers.
  • A firewall configured on each server that allows traffic on ports 6443 (Kubernetes API), 2379-2380 (etcd), and 8472/UDP (Flannel VXLAN).
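As an illustration, opening these ports with ufw might look like the following (assuming an Ubuntu-style setup; substitute your distribution's firewall tool as needed):

```shell
sudo ufw allow 6443/tcp       # Kubernetes API server
sudo ufw allow 2379:2380/tcp  # etcd client and peer traffic
sudo ufw allow 8472/udp       # Flannel VXLAN overlay network
```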

NEW CLUSTER

To run K3s in high-availability mode with embedded etcd, you must have an odd number of server nodes. We recommend starting with three. The odd number of nodes is to prevent a split brain: a state of a server cluster where nodes diverge from each other and conflict when handling incoming I/O operations, recording the same data inconsistently or competing for resources. With an odd number of nodes there is always a majority, which can be elected as the source of truth to resolve the conflict.
To get started, launch the first server node with the --cluster-init flag to enable clustering, along with a token that will be used as a shared secret to join additional servers to the cluster.
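The quorum arithmetic behind this can be sketched quickly in the shell: a cluster of n etcd members needs floor(n/2) + 1 votes to form a majority, so it tolerates floor((n-1)/2) member failures.

```shell
# Quorum for an n-member etcd cluster is floor(n/2) + 1;
# the cluster tolerates floor((n-1)/2) member failures.
for n in 1 3 5 7; do
  echo "nodes=$n quorum=$(( n / 2 + 1 )) tolerated_failures=$(( (n - 1) / 2 ))"
done
```

With three servers the cluster survives the loss of one node; five servers survive the loss of two.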

NOTE: Replace the token with something secure and keep it safe from third-party access.


curl -sfL https://get.k3s.io | K3S_TOKEN=123456789 sh -s - server --cluster-init



After launching the first server, join the second and third servers to the cluster using the shared token and the IP address of the first node (the one used in the initial step). In this scenario, we will assume that IP is 10.51.10.10:


curl -sfL https://get.k3s.io | K3S_TOKEN=123456789 sh -s - server --server https://10.51.10.10:6443


Now you have a highly available control plane. Any successfully clustered server can be used in the --server argument to join additional server and worker nodes. Joining additional worker nodes follows the same procedure as for a single-server cluster.
There are a few config flags that must be identical on all server nodes (you don't have to worry about these if you use the method above, as it automatically configures them for you):
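For example, a worker (agent) node can be joined with the same shared token and server URL used earlier (assuming the 10.51.10.10 address from above):

```shell
curl -sfL https://get.k3s.io | K3S_TOKEN=123456789 sh -s - agent --server https://10.51.10.10:6443
```

Note the `agent` argument instead of `server`: the node joins the cluster as a worker and does not run the control plane or etcd.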

  • Network-related flags: --cluster-dns, --cluster-domain, --cluster-cidr, --service-cidr
  • Flags controlling the deployment of certain components: --disable-helm-controller, --disable-kube-proxy, --disable-network-policy and any component passed to --disable
  • Feature-related flags: --secrets-encryption

Cluster Access

The kubeconfig file stored at /etc/rancher/k3s/k3s.yaml is used to configure access to the Kubernetes cluster. If you have installed upstream Kubernetes command-line tools such as kubectl or helm, you will need to point them at the correct kubeconfig path. This can be done by either exporting the KUBECONFIG environment variable or passing the --kubeconfig command-line flag. Refer to the examples below for details.
The configuration file /etc/rancher/k3s/k3s.yaml can be found on any of the server nodes in your cluster. Copy it to your local machine and install kubectl there; check the official Kubernetes website for instructions on installing kubectl on your operating system. If you copy the file to another machine, replace 127.0.0.1 in its server field with the IP address of one of your server nodes.
Then leverage the KUBECONFIG environment variable:


export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

kubectl get pods --all-namespaces


If you get a list of pods from all namespaces on your cluster, then you are good to go.
While this setup is sufficient to get you started, we also need a highly available storage service. K3s ships with hostPath-based local storage, which is not ideal for HA workloads because each volume is tied to a single node.
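A couple of quick checks confirm the control plane is healthy (node names will differ in your environment):

```shell
# All three servers should be listed and report a Ready status:
kubectl get nodes -o wide

# k3s labels server nodes running embedded etcd with an etcd role:
kubectl get nodes -l node-role.kubernetes.io/etcd=true
```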

Longhorn

Longhorn delivers simplified, easy-to-deploy and upgrade, 100% open source, cloud-native persistent block storage without the cost overhead of open core or proprietary alternatives.
The Longhorn block storage can easily be added to your cluster with the following command:


kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

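Note that the manifest above tracks the master branch; for production you may prefer to pin a tagged release instead. Once the Longhorn pods are running, Longhorn registers a `longhorn` StorageClass. A minimal PersistentVolumeClaim using it might look like the following sketch (the claim name and size are illustrative):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data            # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn # provided by the Longhorn installation
  resources:
    requests:
      storage: 2Gi
EOF
```

Because Longhorn replicates volume data across nodes, pods bound to such a claim can be rescheduled after a node failure without losing data.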

Conclusion

You have now deployed an enterprise-grade Kubernetes cluster with k3s and can start running workloads on it. Some components to take note of: Traefik is already installed for ingress, Longhorn handles persistent storage, and containerd serves as the container runtime.
