When we install Tracardi, we encourage our customers to use Kubernetes (k8s) for deployment. However, many are apprehensive about setting up a Kubernetes cluster with High Availability (HA), fearing the process is overly complex. This guide will demonstrate that it’s not as difficult as it sounds. By following these steps, you’ll have a fully functional and HA k3s cluster.
Node Architecture Overview
For a robust High Availability k3s cluster, we'll use the following node configuration:
Node Breakdown
- Load Balancer Nodes:
  - 2 nodes for load balancing and high availability
  - IP addresses:
    - Load Balancer 1: 192.168.1.10
    - Load Balancer 2: 192.168.1.11
    - Virtual IP (VIP): 192.168.1.100
- Control Plane Nodes:
  - 3 nodes to manage cluster state and provide redundancy
  - IP addresses:
    - Control Plane 1: 192.168.1.50
    - Control Plane 2: 192.168.1.51
    - Control Plane 3: 192.168.1.52
- Worker Nodes:
  - 2 nodes to run application workloads
  - IP addresses:
    - Worker Node 1: 192.168.1.60
    - Worker Node 2: 192.168.1.61
Prerequisites
Network Requirements
- Ensure all nodes can communicate with each other
- Static IP addresses recommended
- Firewall configured to allow the necessary ports:
  - 6443/tcp for the Kubernetes API server
  - 10250/tcp for the kubelet
  - 2379-2380/tcp for embedded etcd (HA server nodes)
  - 8472/udp for Flannel VXLAN
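With ufw, for example, the ports above can be opened like this. This is a minimal sketch: it assumes the 192.168.1.0/24 subnet from this guide and the default Flannel VXLAN backend; adapt it to your firewall and network layout.

```shell
# Run on every node; restrict sources to the cluster subnet
sudo ufw allow from 192.168.1.0/24 to any port 6443 proto tcp       # Kubernetes API server
sudo ufw allow from 192.168.1.0/24 to any port 10250 proto tcp      # kubelet
sudo ufw allow from 192.168.1.0/24 to any port 2379:2380 proto tcp  # embedded etcd (HA servers)
sudo ufw allow from 192.168.1.0/24 to any port 8472 proto udp       # Flannel VXLAN
sudo ufw enable
```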
System Preparation
- Update all nodes:
sudo apt update && sudo apt upgrade -y
- Verify network interfaces:
ip a
This command helps identify your network interface (e.g., eth0 or ens33); you will need it for the Keepalived configuration.
Step 1: Configure Load Balancer Nodes
1.1 Install Keepalived and HAProxy
On both load balancer nodes, install the required packages:
sudo apt-get install haproxy keepalived -y
1.2 Keepalived Configuration
First, determine your network interface with ip a. We'll assume it's eth0.
On Load Balancer 1 (192.168.1.10):
sudo nano /etc/keepalived/keepalived.conf
global_defs {
    enable_script_security
    script_user root
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 2
}

vrrp_instance haproxy-vip {
    interface eth0
    state MASTER
    priority 200
    virtual_router_id 51
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        chk_haproxy
    }
}
On Load Balancer 2 (192.168.1.11), use the same file but change state MASTER to state BACKUP and priority 200 to priority 100.
1.3 HAProxy Configuration
On both load balancer nodes:
sudo nano /etc/haproxy/haproxy.cfg
Append the following to the configuration file:
frontend k3s-frontend
    bind *:6443
    mode tcp
    option tcplog
    default_backend k3s-backend

backend k3s-backend
    mode tcp
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s
    server control-plane-1 192.168.1.50:6443 check
    server control-plane-2 192.168.1.51:6443 check
    server control-plane-3 192.168.1.52:6443 check
1.4 Start Services
sudo systemctl restart haproxy keepalived
sudo systemctl enable haproxy keepalived
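As a quick sanity check, confirm that both services came up and that the VIP has landed on the MASTER (assuming eth0, as before):

```shell
# Both services should report "active"
systemctl is-active haproxy keepalived

# On the MASTER load balancer the VIP should be attached to the
# interface; on the BACKUP this grep prints nothing
ip addr show eth0 | grep 192.168.1.100
```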
Step 2: Install Control Plane Nodes
2.1 Generate Cluster Token
openssl rand -hex 16
Save this token for cluster authentication; it replaces YOUR_SECRET in the commands below, and every node must use the same value.
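For example, you can generate the token once and keep it in a shell variable so the exact same value is reused in every install command (the variable name here is just an illustration):

```shell
# Generate a 32-character hex token shared by all nodes
K3S_TOKEN=$(openssl rand -hex 16)
echo "$K3S_TOKEN"
```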
2.2 Initialize First Control Plane
On the first control plane node (192.168.1.50):
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_SECRET sh -s - server \
--cluster-init \
--tls-san=192.168.1.100
2.3 Join Additional Control Plane Nodes
On the remaining control plane nodes (192.168.1.51 and 192.168.1.52):
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_SECRET sh -s - server \
--server https://192.168.1.100:6443 \
--tls-san=192.168.1.100
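Before adding workers, it's worth confirming that the API server answers through the VIP. A sketch using the kube-apiserver's standard /livez health endpoint: it should return ok, or a 401/403 JSON error if anonymous access to health endpoints is disabled; either response proves TLS and load balancing work end to end.

```shell
# -k skips certificate verification for this quick connectivity check
curl -k https://192.168.1.100:6443/livez
```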
Step 3: Install Worker Nodes
On both worker nodes (192.168.1.60 and 192.168.1.61):
curl -sfL https://get.k3s.io | K3S_TOKEN=YOUR_SECRET sh -s - agent \
--server https://192.168.1.100:6443
Step 4: Verify Cluster
4.1 Check Cluster Status
On any control plane node:
sudo k3s kubectl get nodes
You should see all nodes in the cluster, with control plane and worker nodes ready.
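To manage the cluster from your workstation, copy the kubeconfig that k3s generates and point it at the VIP rather than at a single control plane node; the --tls-san=192.168.1.100 flag from Step 2 is what makes the server certificate valid for the VIP.

```shell
# On a control plane node: print the generated kubeconfig
sudo cat /etc/rancher/k3s/k3s.yaml

# On your workstation: save it as ~/.kube/config, then replace the
# loopback address with the VIP
sed -i 's/127.0.0.1/192.168.1.100/' ~/.kube/config
kubectl get nodes
```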
Troubleshooting
Common Issues
- Firewall Problems: Ensure all required ports are open
- Network Interface Mismatch: Double-check interface names in configurations
- Token Mismatch: Verify the same token is used across all nodes
Verification Checks
- Confirm VIP is floating between load balancers
- Verify HAProxy logs for backend server health
- Check k3s service status on each node
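The checks above can be run with a few commands; a sketch, assuming eth0 and systemd-based logging:

```shell
# Which load balancer currently holds the VIP?
ip addr show eth0 | grep 192.168.1.100

# HAProxy backend health transitions (servers going UP/DOWN)
sudo journalctl -u haproxy | grep -i check

# k3s service state, per node role
sudo systemctl status k3s        # control plane nodes
sudo systemctl status k3s-agent  # worker nodes
```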
Best Practices
- Use static IP addresses
- Implement regular backups
- Monitor cluster health
- Keep k3s and system packages updated
Conclusion
Setting up a k3s cluster with High Availability is simpler than it seems. The Keepalived-managed VIP gives clients a single, stable entry point, HAProxy spreads API traffic across the three control plane nodes, and k3s keeps the footprint light. Even if a control plane node or a load balancer fails, your cluster remains operational.