DEV Community

vince


From NAS to Kubernetes: Setting Up a Raspberry Pi Cluster with k3s

Intro

I have a Synology DS920+ NAS which performs excellently, and I love it. Over the years, I have added more and more applications that I self-host. I'm currently hosting:

  • Vaultwarden for password management
  • Gitea for hosting my game projects, which have larger file requirements
  • Obsidian LiveSync server
  • Jellyfin for movie/TV-show streaming
  • Jupyter Notebook
  • Immich for sharing photos with friends and family
  • Keycloak for identity and access management
  • Paperless-ngx for document management

As I added more applications, their combined memory footprint grew to the point where I wanted to offload some of them to a separate machine.

Since I plan on completing the Certified Kubernetes Administrator (CKA) certification, I thought: why not run my own cluster for these applications? That's why a Raspberry Pi Kubernetes cluster seemed like an ideal fit.

The Planned Setup

network layout

In this setup, the Raspberry Pi cluster handles computing tasks, while the NAS provides storage. Both are connected to an unmanaged network switch, which is connected to the home router. Since the DS920+ has two Ethernet ports, it can be connected to both the switch and the router.

The NAS serves as the gateway for internet traffic. If an application on the cluster needs internet access, the NAS forwards the necessary traffic to the cluster. Local traffic to the cluster is secured with HTTPS encryption.

Since the cluster and NAS communicate directly through the switch, their traffic doesn’t burden the rest of your network. For example, even if the cluster is using maximum network capacity to write data to the NAS, your streaming experience on other devices won’t be affected. The switch forwards traffic directly between the cluster and the NAS without involving your router. The same goes for communication between the individual nodes of the cluster.

What You'll Need

  • 3x Raspberry Pis (I opted for the Raspberry Pi 5 with 4 GB RAM)
  • 3x Raspberry Pi official power supplies
  • 4-port unmanaged network switch
  • A power strip
  • 4 short Ethernet cables
  • A Raspberry Pi cluster acrylic case or similar
  • Fans for the Raspberry Pis
  • SD cards: SanDisk Extreme PLUS 32GB, A1, U3, V30 or similar

Unfortunately, we need the dedicated power supplies and can't power the Raspberry Pis from a multi-port USB charger, because USB chargers can't deliver power reliably enough. Think about it: USB chargers are made for charging phones, not powering computers, so it's best to stick with the cumbersome power supplies.

Cluster Setup Guide

We’ll set up the Raspberry Pis from scratch and install k3s with a custom load-balancing setup. k3s is a perfect fit since it is resource-efficient and optimized for hardware like the Raspberry Pi. The k3s cluster will have a single master node. An HA cluster with k3s is theoretically possible but out of scope for this post.

Flashing the OS

I'm using the Raspberry Pi Imager found here: https://www.raspberrypi.com/software/

Download the application and flash the OS onto the SD cards one by one.

For the OS, I use Raspberry Pi OS Lite (64-bit).

Set the hostname to rpi-node-#.local (or use whatever naming scheme you prefer). The hostname needs to be unique, so number the nodes accordingly.

Set the username and password, enable SSH, and pass your public key. This simplifies SSH access and makes it more secure - win-win.

Once all SD cards are flashed, insert them into the Pis and connect them all to power and the network.
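Once the Pis have booted, it's worth confirming that each node is reachable before continuing. A quick sketch (replace the placeholder with your username; the hostnames assume the naming scheme above):

```shell
ssh <username>@rpi-node-1.local hostname
ssh <username>@rpi-node-2.local hostname
ssh <username>@rpi-node-3.local hostname
```

Each command should print the node's hostname without prompting for a password if your public key was passed during flashing.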

Setting Up The Cluster

We’ll set up a K3s cluster with one master node and two workers using k3sup to simplify the process.

K3s comes with a preconfigured load balancer, ServiceLB, but it exposes node addresses, leading to multiple external IPs, which isn’t ideal. Instead, we’ll use MetalLB for a single external IP, providing real load balancing across nodes.

K3s also includes Traefik, a reverse proxy and load balancer, but we’ll install it later and configure it separately.

Why two load balancers? MetalLB (Layer 4) provides external IPs and gets traffic to the cluster, while Traefik (Layer 7) routes requests based on hostnames (it can do more, but that's all we need here). Combined, they give a powerful setup: a single IP receives requests and forwards them to the appropriate services.

Now install k3sup:

brew install k3sup

Setting up a K3s cluster is pretty simple, but SSHing into all the machines is still tedious, so I created a script that does all of it for us. The script can be found here:
https://gist.github.com/VincentSchmid/1075eb6d06f2b56bd8e9efac5871e492
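The invocation below passes the node IPs as shell variables. Set them first to the addresses your router assigned to the Pis (the values here are hypothetical; check your router's DHCP lease table):

```shell
# Hypothetical addresses: replace with the IPs your router assigned to the Pis.
export RPI_1_IP=192.168.1.101   # this one becomes the master node
export RPI_2_IP=192.168.1.102
export RPI_3_IP=192.168.1.103

echo "$RPI_1_IP $RPI_2_IP $RPI_3_IP"
```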

You can run the script like so:

./cluster-setup/k3s-setup.sh --rpi-ips "$RPI_1_IP $RPI_2_IP $RPI_3_IP" --rpi-user <username> --node-name-scheme "rpi-node-" --sleep-duration 60

$RPI_1_IP becomes the master node and the other two become worker nodes. You can also add more nodes if you like.

What the script does:

  • It will enable the cgroup features in the kernel that are required for running containerized applications
  • It will make sure that sudo can be run without a password. This is unfortunately needed for K3s to install.
  • It will set the hostnames of the Pis based on the ordering you set
  • It will reboot the Raspberry Pis
  • It will check if the operations were successful
  • It will install K3s server on the master node and then join the worker nodes to the cluster
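If you'd rather do the K3s part by hand instead of using the script, the underlying k3sup calls look roughly like this (angle brackets are placeholders; k3sup writes the kubeconfig to your working directory):

```shell
# Install the K3s server on the master node:
k3sup install --ip <master-ip> --user <username>

# Join each worker node to the cluster:
k3sup join --ip <worker-ip> --server-ip <master-ip> --user <username>
```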

After the script has run successfully, you should have a working cluster, and the kubeconfig will be in the current working directory; copy it to ~/.kube/config.
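Assuming the kubeconfig was written to the current directory (the exact filename may differ; check the script output), copying it into place looks like this:

```shell
mkdir -p ~/.kube
cp <path-to-kubeconfig> ~/.kube/config
```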

Try running:

kubectl get nodes

Setup Load Balancing and a Dashboard

Next, we're going to install MetalLB, Traefik, and Rancher, a cool Kubernetes management dashboard. You could also go for the Kubernetes dashboard: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

Below, I'm going to explain in detail how to set up each of these apps.

Cert Manager

Cert Manager helps you create and rotate certificates, but in this setup we just need it for Rancher to work; that could be a reason to simply use the Kubernetes Dashboard instead of Rancher.

Here is how we install Cert Manager:

values.yaml:

crds:
  enabled: true
replicaCount: 2
extraArgs:
  - --dns01-recursive-nameservers=1.1.1.1:53,9.9.9.9:53
  - --dns01-recursive-nameservers-only=true
podDnsConfig:
  nameservers:
    - "1.1.1.1"
    - "9.9.9.9"

Installation steps:

helm repo add jetstack https://charts.jetstack.io --force-update
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.15.2 -f values.yaml

MetalLB

For MetalLB to work, it needs some free IPs on your local network. Set a free IP range under spec.addresses:

metallb-config.yaml:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.242-192.168.1.250 # set this range to a free range on your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - pool

To make sure these addresses stay free, you can shrink the DHCP range on your router. I reserved the top 14 IPs of my network for MetalLB, but so far I've only needed one.

Run the following commands to install MetalLB:

kubectl create namespace metallb-system
helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb --namespace metallb-system
kubectl apply -f metallb-config.yaml

Traefik

Now I'm going to configure Traefik so that it can issue certificates itself for ingresses. This only works with a single Traefik replica, but it will simplify your ingress definitions down the line: you just add an annotation to the ingress. This Traefik setup will also automatically redirect HTTP traffic to 443.

In the values.yaml, set the metallb.universe.tf/loadBalancerIPs annotation to an IP within the free range you assigned to MetalLB before. This is the external IP on which Traefik will listen for requests.

values.yaml:

globalArguments:
  - "--global.sendanonymoususage=false"
  - "--global.checknewversion=false"
additionalArguments:
  - "--serversTransport.insecureSkipVerify=true"
  - "--log.level=DEBUG"
  - "--certificatesresolvers.default.acme.tlschallenge=true"
  - "--certificatesresolvers.default.acme.email=your-email@example.com"
  - "--certificatesresolvers.default.acme.storage=/data/acme.json"
  - "--certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
deployment:
  enabled: true
  replicas: 1
  annotations: {}
  podAnnotations: {}
  additionalContainers: []
  initContainers: []
ports:
  web:
    redirectTo:
      port: websecure
  websecure:
    port: 443
    tls:
      enabled: true
      certResolver: default
ingressRoute:
  dashboard:
    enabled: false
providers:
  kubernetesCRD:
    enabled: true
    ingressClass: traefik-external
    allowExternalNameServices: false
    allowCrossNamespace: true
  kubernetesIngress:
    enabled: true
    allowExternalNameServices: false
publishedService:
  enabled: false
rbac:
  enabled: true
service:
  enabled: true
  type: LoadBalancer
  annotations:
    metallb.universe.tf/loadBalancerIPs: 192.168.1.242  # an IP from the MetalLB address pool defined earlier
resources:
  requests:
    cpu: "100m"
    memory: "50Mi"
  limits:
    cpu: "300m"
    memory: "150Mi"
persistence:
  enabled: true
  name: data
  accessMode: ReadWriteOnce
  size: 128Mi
  path: /data
  storageClass: "local-path"

Now run:

helm repo add traefik https://traefik.github.io/charts
kubectl create namespace traefik
helm install --namespace=traefik traefik traefik/traefik -f values.yaml
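With Traefik installed, exposing a service over HTTPS comes down to one annotation on the ingress. A hypothetical example (the whoami hostname and service are placeholders; the certresolver name matches the values.yaml above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  annotations:
    # Route via the HTTPS entrypoint and let Traefik's ACME resolver issue the cert:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls.certresolver: default
spec:
  rules:
    - host: whoami.cluster.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```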

Rancher

First, set a rancher password:

export RANCHER_PASSWORD=<strong-password>

Now run:

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=rancher.cluster.local --set bootstrapPassword=$RANCHER_PASSWORD

Now the Traefik load balancer listens on the specified IP for requests with the Rancher hostname rancher.cluster.local. For each service you expose with a hostname, you will need to add a DNS entry on your router or in your local machine's hosts file; for example, rancher.cluster.local points to the Traefik load-balancer IP. When you then request rancher.cluster.local, the request is upgraded to HTTPS and Traefik forwards it to the service.
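For example, on a Linux or macOS machine the hosts-file entry would look like this (the IP is the Traefik external IP from the values.yaml above; adjust it to yours):

```
# /etc/hosts
192.168.1.242  rancher.cluster.local
```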

Log in to Rancher using the password you specified.

Congratulations, now you've got a nice starting point and can start deploying applications!

Next Steps

Now that you've got a cluster, it's time to think about storage. I'm using the SMB CSI driver to mount my NAS into my containers via storage classes. I've also set up Argo CD and deploy my applications using GitOps.
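As a teaser for the storage part, a StorageClass for the SMB CSI driver might look roughly like this (a sketch with a placeholder share path and a hypothetical smb-creds secret; assumes the csi-driver-smb chart is already installed):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nas-smb
provisioner: smb.csi.k8s.io
parameters:
  source: //<nas-ip>/<share-name>
  # Credentials for the NAS share, stored in a Kubernetes secret (hypothetical name):
  csi.storage.k8s.io/provisioner-secret-name: smb-creds
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: smb-creds
  csi.storage.k8s.io/node-stage-secret-namespace: default
mountOptions:
  - dir_mode=0777
  - file_mode=0777
```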
