Welcome to an exhilarating journey where we unlock the secrets of building a scalable infrastructure using Kubernetes. In this comprehensive guide, we'll navigate the nuances of setting up a robust cluster to host a WordPress site. Buckle up as we explore the implementation process using five Raspberry Pis, a router, and a switch, with the lightweight K3s as our chosen Kubernetes distribution.
Setting the Stage: Objectives
Our primary goal is crystal clear – to create a powerhouse infrastructure that effortlessly scales to host a WordPress site. This guide will walk you through every twist and turn, from the initial Raspberry Pi setup to deploying and scaling your WordPress application.
Before diving into the implementation, ensure you have the following:
- Hardware:
  - Five Raspberry Pis (you can follow along with a single Raspberry Pi; a similar setup can be found in this YouTube video)
  - Router and switch
  - SD cards for the Raspberry Pis
  - Ethernet cables
  - Power supply for the Raspberry Pis
  - External drive (for storing disk images)
- Software:
  - Win32 Disk Imager (for disk imaging)
  - K3s binary for ARM architecture (compatible with Raspberry Pi)
  - Raspberry Pi OS Lite 64-bit
- Knowledge:
  - Basic understanding of Linux environments
  - Basic understanding of Kubernetes and its essential commands
  - Fundamental networking knowledge
1. Setting Up Raspberry Pis
1.1 Hardware Preparation
Before we dive into the technical details, let's ensure that we have everything ready. Power up your Raspberry Pis and ensure they are properly connected to the network.
1.2 Operating System Installation
The first step is to get the Raspberry Pi OS up and running. To do this, follow these steps:
- Download the Raspberry Pi Imager and install it on your computer.
- Insert the SD card into your computer using an SD card reader.
- Open the Raspberry Pi Imager and choose the Raspberry Pi OS Lite 64-bit version.
- Select the inserted SD card as the storage location.
- Click on "Write" to start the installation process.
Once the process is complete, eject the SD card safely and insert it into the Raspberry Pi.
1.3 Configuring Raspberry Pi Networking
Now, let's configure the network settings on each Raspberry Pi:
- Boot up the Raspberry Pi:
  - Connect a monitor, keyboard, and mouse to the Raspberry Pi.
  - Power it up and follow the on-screen instructions to set up the Raspberry Pi OS.
- Open the Terminal:
  - Once the Raspberry Pi OS has booted, open the terminal.
- Configure Network Settings:
  - Edit the network configuration file with `sudo nano /etc/network/interfaces`.
  - Update the file with your desired network settings. For example:

    ```
    auto eth0
    iface eth0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
    ```
- Save and Exit:
  - Save the changes by pressing `Ctrl + X`, then `Y` to confirm, and finally `Enter` to exit.
- Restart the Network:
  - Restart the network interface to apply the changes: `sudo systemctl restart networking`
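Note that on recent Raspberry Pi OS releases the `/etc/network/interfaces` file may be ignored, because the OS manages networking through `dhcpcd` (Bullseye and earlier) or NetworkManager (Bookworm). If your static IP does not stick, a minimal sketch of the `dhcpcd` approach looks like this; the interface name and addresses are assumptions to adjust for your network:

```bash
# Append a static configuration for eth0 to /etc/dhcpcd.conf
# (assumed interface name and addresses - adjust to your network)
sudo tee -a /etc/dhcpcd.conf <<'EOF'
interface eth0
static ip_address=192.168.1.2/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF

# Restart dhcpcd so the new address takes effect
sudo systemctl restart dhcpcd
```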
1.4 Enabling SSH Access
Enabling SSH on each Raspberry Pi is essential for remote access. Here's how you can do it:
- Open the Raspberry Pi Configuration:
  - Run the following command: `sudo raspi-config`
  - Navigate to `Interfacing Options` and enable `SSH`.
- Restart the Raspberry Pi:
  - After enabling SSH, restart the Raspberry Pi to apply the changes: `sudo reboot`
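If you are running the Pis headless, SSH can also be enabled before first boot: Raspberry Pi OS turns on the SSH server when it finds an empty file named `ssh` in the boot partition of the SD card, and the Raspberry Pi Imager can pre-configure SSH under its advanced options. A minimal sketch, assuming the SD card's boot partition is mounted at `/media/boot` (the mount path is an assumption):

```bash
# Create an empty file named "ssh" in the boot partition; Raspberry Pi OS
# enables the SSH server on first boot when this file is present.
touch /media/boot/ssh
```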
With these steps completed, you've successfully set up the Raspberry Pis for further configuration.
2. Configuring the Master Node
2.1 Installing K3s on the Master Node
Now, let's move on to configuring the master node. Follow these steps:
- Log into the Master Pi:
  - Open a terminal or use SSH to log into the master Raspberry Pi.
- Download and Install K3s:
  - Use the following command to download and install K3s: `curl -sfL https://get.k3s.io | sh -`
- Verify K3s Installation:
  - Once the installation is complete, verify that K3s is running:

    ```bash
    sudo k3s kubectl get nodes
    ```
During installation I hit an error: on the Raspberry Pi, Kubernetes requires `cgroup_memory=1 cgroup_enable=memory` to be added to the `cmdline.txt` file, and because these flags were missing the installation initially failed. After adding them and rebooting, I was able to install Kubernetes on the master node (see the sketch below).
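For reference, this is roughly what the fix looks like. The flags must be appended to the single existing line in `cmdline.txt` (found at `/boot/cmdline.txt`, or `/boot/firmware/cmdline.txt` on newer Raspberry Pi OS releases; treat the exact path as an assumption for your OS version), followed by a reboot:

```bash
# Append the cgroup flags to the end of the existing kernel command line
# (the file must remain a single line). Adjust the path for your OS release.
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt

# Reboot so the kernel picks up the new cgroup settings
sudo reboot
```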

2.2 Configuring Master Node Components
With K3s installed, configure the master node components:
- Export Kubeconfig:
  - Export the kubeconfig file so kubectl (and Helm) can reach the cluster: `export KUBECONFIG=/etc/rancher/k3s/k3s.yaml`
- Check Nodes:
  - Ensure that the master node is ready: `sudo k3s kubectl get nodes`
- Install Helm:
  - Helm is a package manager for Kubernetes. Install Helm using: `sudo snap install helm --classic`
  - Note: this installs Helm 3, which no longer uses `helm init`; that step was only needed for Helm 2's Tiller component.
- Verify the Helm installation: `helm version`
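To confirm Helm can actually reach the K3s API server (not just that the binary is installed), you can point it at the K3s kubeconfig and list releases. A minimal sketch; making the `KUBECONFIG` export permanent in `~/.bashrc` is optional:

```bash
# Point Helm/kubectl at the K3s kubeconfig for this shell session.
# If you get a permission error reading the file, run as root or install
# K3s with a more permissive --write-kubeconfig-mode.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# List releases across all namespaces; an empty list (with no error)
# means Helm can talk to the cluster
helm list --all-namespaces

# Optionally make the kubeconfig setting permanent for future sessions
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
```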
With the master node configured, you're ready to proceed to the next stage.
3. Setting Up Worker Nodes
3.1 Installing K3s on Worker Nodes
Expand your cluster by setting up worker nodes. Here's how:
- Log into Each Worker Pi:
  - Use SSH to log into each worker Raspberry Pi.
- Download and Install K3s:
  - Similar to the master node, install K3s on each worker node:

    ```bash
    curl -sfL https://get.k3s.io | K3S_URL=https://<master-node-ip>:6443 K3S_TOKEN=<node-token> sh -
    ```
  - Replace `<master-node-ip>` with the IP of your master node and `<node-token>` with the token obtained from the master node: `sudo cat /var/lib/rancher/k3s/server/node-token`
3.2 Joining Worker Nodes to the Cluster
After installing K3s on each worker node, join them to the cluster:
- Obtain Master Node Token:
  - On the master node, retrieve the token: `sudo cat /var/lib/rancher/k3s/server/node-token`
- Join Worker to Cluster:
  - On each worker node, join the cluster using the token (the same command shown in 3.1, which installs the K3s agent and registers it with the server):

    ```bash
    curl -sfL https://get.k3s.io | K3S_URL=https://<master-node-ip>:6443 K3S_TOKEN=<node-token> sh -
    ```
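If you would rather not repeat this on every worker by hand, you can drive the same installer over SSH from one machine. A minimal sketch, assuming the worker IPs below, the default `pi` user with passwordless sudo, and SSH key access (all of these are assumptions to adjust):

```bash
#!/usr/bin/env bash
# Hypothetical master/worker IPs - replace with your own.
MASTER_IP=192.168.1.2
TOKEN=$(ssh pi@"$MASTER_IP" sudo cat /var/lib/rancher/k3s/server/node-token)

for WORKER in 192.168.1.3 192.168.1.4 192.168.1.5 192.168.1.6; do
  echo "Installing K3s agent on $WORKER"
  ssh pi@"$WORKER" \
    "curl -sfL https://get.k3s.io | K3S_URL=https://$MASTER_IP:6443 K3S_TOKEN=$TOKEN sh -"
done
```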
3.3 Verifying Worker Node Status
Check if the worker nodes have successfully joined the cluster:
- On the Master Node:
  - Run the following command to verify the status of the worker nodes: `sudo k3s kubectl get nodes`
  - Ensure all nodes are in the 'Ready' state.
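By default the worker nodes show up with the role `<none>` in that output. If you want them labeled as workers (purely cosmetic, but it makes `kubectl get nodes` easier to read), you can add the conventional role label; the node name below is an assumption:

```bash
# Label a worker node so "kubectl get nodes" shows a worker role
sudo k3s kubectl label node <worker-node-name> node-role.kubernetes.io/worker=worker
```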
With the worker nodes in place, your Kubernetes cluster is shaping up. Now, let's delve into network configuration.
4. Network Configuration
4.1 Configuring Raspberry Pi Networking
A well-configured network is crucial for the seamless operation of your Kubernetes cluster. Follow these steps to ensure optimal network settings:
- Check Network Interfaces:
  - Verify the available network interfaces on each Raspberry Pi: `ip a`
  - Identify the primary network interface (e.g., eth0) for configuration.
- Edit Network Configuration File:
  - Open the network configuration file for editing: `sudo nano /etc/network/interfaces`
- Configure Static IP Address:
  - Add the following lines to set a static IP address (adjust the values for your network):

    ```
    auto eth0
    iface eth0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
    ```
  - Save and exit the editor (press `Ctrl + X`, then `Y`, and `Enter`).
- Restart Networking:
  - Apply the changes by restarting the network service: `sudo systemctl restart networking`
Static addresses can also be reserved on your router (DHCP reservation). Irrespective of how you assign them, this step is important: if a node's IP changes while the cluster is running, the nodes can lose the ability to communicate with each other. In my case this is exactly what caused the status of my pods to change, and this command helped me see the cause of the issue: `sudo k3s kubectl describe pod <pod_name>`
4.2 Ensuring Network Connectivity
Verify that each Raspberry Pi can communicate with others in the network:
-
Ping Test:
- From one Raspberry Pi, ping another using their static IP addresses:
ping 192.168.1.3
- Replace the IP address with the actual address of the target Raspberry Pi.
-
SSH Connectivity:
- Ensure SSH connectivity between Raspberry Pis:
ssh pi@192.168.1.3
- Use the IP address of the target Raspberry Pi.
4.3 Troubleshooting Network Issues
If you encounter network issues, consider the following troubleshooting steps:
- Check Configuration Files:
  - Review the network configuration files on each Raspberry Pi.
- Firewall Settings:
  - Ensure that firewalls on the Raspberry Pis are not blocking the necessary ports (see the sketch below).
- Router Configuration:
  - Check the router settings to ensure it allows communication between the devices.
- Node Discovery:
  - Verify that each node can discover the others in the cluster.
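If you do run a host firewall (e.g., `ufw`) on the Pis, K3s needs a few ports open between the nodes: the Kubernetes API server port on the master, the flannel VXLAN overlay, and the kubelet metrics port. A sketch of the corresponding `ufw` rules; treat it as a starting point and check the K3s networking requirements for your setup:

```bash
# Allow the K3s API server (master/server node)
sudo ufw allow 6443/tcp

# Allow the flannel VXLAN overlay network between all nodes
sudo ufw allow 8472/udp

# Allow kubelet metrics between all nodes
sudo ufw allow 10250/tcp

# Reload the firewall rules
sudo ufw reload
```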
With a well-configured network, your Kubernetes cluster is ready to conquer the world. In the next section, we'll explore persistent volume implementation.
5. Persistent Volume Implementation
To ensure data persistence and availability, we'll set up persistent volumes (PVs) in our Kubernetes cluster.
5.1 Creating Persistent Volumes
On the master node, create persistent volumes for data storage:
- Create PV YAML File:
  - Create a YAML file (e.g., pv.yaml) with the PV configuration. Here's an example for a local storage PV:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-local
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: "/mnt/data"
    ```
- Apply the PV Configuration:
  - Apply the configuration to create the PV: `sudo k3s kubectl apply -f pv.yaml`
5.2 Configuring Persistent Volume Claims
Now, let's configure persistent volume claims (PVCs) for your WordPress deployment:
- Create PVC YAML File:
  - Create a YAML file (e.g., pvc.yaml) with the PVC configuration. Here's an example:

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-local
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ```
- Apply the PVC Configuration:
  - Apply the configuration to create the PVC: `sudo k3s kubectl apply -f pvc.yaml`
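One K3s-specific detail worth knowing: K3s ships with the local-path provisioner as the default StorageClass, so a PVC that does not set `storageClassName` may be dynamically provisioned by it instead of binding to the hand-made `pv-local` above. If you specifically want the claim to bind to your own hostPath PV, a sketch of the adjustment looks like this (setting an empty storage class opts the claim out of dynamic provisioning; it assumes the PV above likewise has no storage class):

```yaml
# pvc-local bound explicitly to the manually created PV rather than being
# dynamically provisioned by the K3s default (local-path) StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-local
spec:
  storageClassName: ""   # opt out of the default provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```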
5.3 Testing Persistent Volumes
Verify that persistent volumes are functioning as expected:
- Check PV Status:
  - Verify the status of the persistent volume:

    ```bash
    sudo k3s kubectl get pv
    ```
  - Ensure that the PV is in the 'Bound' state.
- Check PVC Status:
  - Verify the status of the persistent volume claim: `sudo k3s kubectl get pvc`
  - Ensure that the PVC is in the 'Bound' state.
- Test Data Persistence:
  - Deploy a test pod that uses the persistent volume:

    ```yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
        - name: test-container
          image: busybox
          command: ["/bin/sh", "-c", "echo Hello Kubernetes! > /mnt/data/test-file && sleep 3600"]
          volumeMounts:
            - name: storage
              mountPath: "/mnt/data"
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: pvc-local
    ```
  - Check if the test pod is running and verify the contents of the persistent volume (see the commands below).
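A quick way to run that check, assuming the pod manifest above was saved as `test-pod.yaml` (the file name is an assumption):

```bash
# Create the test pod and confirm it reaches the Running state
sudo k3s kubectl apply -f test-pod.yaml
sudo k3s kubectl get pod test-pod

# Read the file written into the persistent volume from inside the pod
sudo k3s kubectl exec test-pod -- cat /mnt/data/test-file
```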
With persistent volumes in place, your Kubernetes cluster is now equipped with reliable storage capabilities. The next step is to deploy WordPress on Kubernetes.
6. Deploying WordPress on Kubernetes
Let's dive into the exciting phase of deploying WordPress on your Kubernetes cluster.
6.1 Creating WordPress Deployment YAML
Create a YAML file (e.g., wordpress.yaml) to define the WordPress deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql-service
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  selector:
    app: wordpress
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
Apply the configuration to create the WordPress deployment:
sudo k3s kubectl apply -f wordpress.yaml
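Note that the deployment above expects a MySQL database reachable through a service named `mysql-service` and a secret named `mysql-secret` holding the database password; neither is created elsewhere in this guide. Here is a minimal sketch of what they could look like. The names are chosen only to match the WordPress manifest; the password, image, database name, and lack of persistent storage are assumptions to adapt (on ARM boards, `mariadb` is a common alternative image):

```yaml
# Secret with the database password expected by the WordPress deployment
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
stringData:
  password: change-me          # placeholder password - replace it
---
# Minimal single-replica MySQL deployment (no persistent volume attached here)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0       # assumed image; mariadb also works on ARM
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: password
            - name: MYSQL_DATABASE
              value: wordpress
          ports:
            - containerPort: 3306
---
# Service name must match WORDPRESS_DB_HOST in the WordPress deployment
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
```

For a durable setup you would also mount a persistent volume claim (like the one from section 5) at `/var/lib/mysql` and set `WORDPRESS_DB_USER` / `WORDPRESS_DB_NAME` explicitly rather than relying on image defaults; the official Kubernetes WordPress-and-MySQL tutorial linked at the end of this post shows a complete version.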
6.2 Configuring Service for WordPress
Expose the WordPress service to the external world:
- Check Service Status:
  - Verify that the WordPress service is running: `sudo k3s kubectl get services`
  - Note the external IP assigned to the service (K3s' built-in ServiceLB typically exposes LoadBalancer services on the node IPs).
- Access WordPress:
  - Open a web browser and navigate to the external IP of the WordPress service.
  - Complete the WordPress installation steps.
6.3 Verifying WordPress Deployment
Confirm the successful deployment of WordPress:
- Check Deployment Status:
  - Verify the status of the WordPress deployment: `sudo k3s kubectl get deployments`
  - Ensure the desired number of replicas is running.
- Verify Pods:
  - Check the pods associated with the WordPress deployment: `sudo k3s kubectl get pods`
  - Ensure the pods are in the 'Running' state.
With WordPress up and running, it's time to explore how to scale your deployment for increased traffic.
7. Scaling the WordPress Deployment
Kubernetes makes scaling a breeze. Let's explore how to scale your WordPress deployment dynamically.
7.1 Horizontal Pod Autoscaling
Enable horizontal pod autoscaling for the WordPress deployment:
- Create HPA YAML File:
  - Create a YAML file (e.g., hpa.yaml) for the horizontal pod autoscaler. The `autoscaling/v2` API is the current stable version; the older `v2beta2` form and its `targetAverageUtilization` field have been removed from recent Kubernetes releases:

    ```yaml
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: wordpress-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: wordpress
      minReplicas: 1
      maxReplicas: 5
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 80
    ```
- Apply the HPA Configuration:
  - Apply the configuration to create the horizontal pod autoscaler: `sudo k3s kubectl apply -f hpa.yaml`
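For CPU-utilization autoscaling to work, two things must be true: the cluster needs a metrics source (K3s bundles metrics-server by default), and the WordPress container must declare a CPU request, because utilization is calculated as a percentage of the requested CPU. A sketch of the addition to the WordPress container spec; the request and limit values are assumptions to tune:

```yaml
# Add under spec.template.spec.containers[0] in wordpress.yaml
resources:
  requests:
    cpu: 250m        # utilization is measured against this request
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```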
7.2 Testing Autoscaling
Simulate increased load on your WordPress site to trigger autoscaling:
- Generate Load:
  - Use a tool like Apache Benchmark to simulate increased traffic: `ab -n 10000 -c 10 http://<wordpress-service-ip>/`
  - Replace `<wordpress-service-ip>` with the IP of your WordPress service.
- Monitor Autoscaling:
  - Check the status of the horizontal pod autoscaler: `sudo k3s kubectl get hpa`
  - Monitor the number of replicas based on the defined metrics.
7.3 Analyzing Scalability
Review metrics and logs to analyze the scalability of your WordPress deployment:
- Check Metrics:
  - Examine the metrics gathered by the horizontal pod autoscaler: `sudo k3s kubectl describe hpa wordpress-hpa`
- Review Logs:
  - Analyze the logs of individual pods to identify any performance issues: `sudo k3s kubectl logs <pod-name>`
  - Replace `<pod-name>` with the name of a WordPress pod.
With autoscaling in place, your WordPress deployment can dynamically adapt to varying workloads. Let's now focus on monitoring and observability.
8. Monitoring and Observability
A robust monitoring setup ensures that you stay informed about the health and performance of your Kubernetes cluster. Let's implement monitoring using Prometheus and visualize data with Grafana.
8.1 Implementing Prometheus for Monitoring
Deploy Prometheus to gather and store cluster metrics:
- Create Prometheus YAML File:
  - Create a YAML file (e.g., prometheus.yaml) for the Prometheus deployment:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: prometheus-service
    spec:
      selector:
        app: prometheus
      ports:
        - protocol: TCP
          port: 9090
          targetPort: 9090
      type: LoadBalancer
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: prometheus
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: prometheus
      template:
        metadata:
          labels:
            app: prometheus
        spec:
          containers:
            - name: prometheus
              image: prom/prometheus
              ports:
                - containerPort: 9090
              args:
                - "--config.file=/etc/prometheus/prometheus.yml"
              volumeMounts:
                - name: prometheus-config
                  mountPath: /etc/prometheus
          volumes:
            - name: prometheus-config
              configMap:
                name: prometheus-config
    ```
- Apply the Prometheus Configuration:
  - Apply the configuration to create the Prometheus deployment: `sudo k3s kubectl apply -f prometheus.yaml`
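The deployment above mounts a ConfigMap called `prometheus-config` that isn't defined anywhere else in this guide, so the pod will not start until it exists. Here is a minimal sketch of one; the scrape interval and the decision to scrape only Prometheus itself are assumptions, and a real setup would add Kubernetes service-discovery scrape configs:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s      # assumed interval - tune as needed
    scrape_configs:
      - job_name: prometheus    # Prometheus scraping its own metrics endpoint
        static_configs:
          - targets: ["localhost:9090"]
```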
8.2 Grafana Dashboard Configuration
Set up Grafana to visualize Prometheus metrics:
- Create Grafana YAML File:
  - Create a YAML file (e.g., grafana.yaml) for the Grafana deployment:

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: grafana-service
    spec:
      selector:
        app: grafana
      ports:
        - protocol: TCP
          port: 3000
          targetPort: 3000
      type: LoadBalancer
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: grafana
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: grafana
      template:
        metadata:
          labels:
            app: grafana
        spec:
          containers:
            - name: grafana
              image: grafana/grafana
              ports:
                - containerPort: 3000
              env:
                - name: GF_SECURITY_ADMIN_PASSWORD
                  value: "admin"
                - name: GF_SECURITY_ADMIN_USER
                  value: "admin"
                - name: GF_SECURITY_ALLOW_EMBEDDING
                  value: "true"
    ```
- Apply the Grafana Configuration:
  - Apply the configuration to create the Grafana deployment: `sudo k3s kubectl apply -f grafana.yaml`
- Access Grafana:
  - Access the Grafana dashboard using a web browser and the external IP of the Grafana service.
- Configure Prometheus as a Data Source:
  - Log in to Grafana (default credentials: admin/admin).
  - Add Prometheus as a data source with the URL `http://prometheus-service:9090`.
- Import Kubernetes Dashboard:
  - Import the official Kubernetes dashboard for Grafana.
With Prometheus and Grafana in place, your Kubernetes cluster is now equipped with powerful monitoring and visualization capabilities.
Conclusion
Congratulations on completing the implementation of a prototype Kubernetes-based cluster for scalable web-based WordPress deployment. This journey covered everything from setting up Raspberry Pis to deploying, scaling, and monitoring your WordPress application.
As you continue to explore the dynamic world of Kubernetes, remember that this guide serves as a solid foundation. Feel free to adapt and enhance your cluster based on evolving requirements. Embrace the scalability, flexibility, and resilience that Kubernetes brings to your web-based applications.
May your Kubernetes journey be filled with seamless deployments, effortless scalability, and a robust infrastructure that stands the test of time. Happy clustering!
Here are some resources I found helpful:
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
https://docs.k3s.io/storage
YouTube playlist - https://youtube.com/playlist?list=PL9ti0-HuCzGbI4MdxgODTbuzEs0RS12c2&si=34d3StsvzwqnHc0j