
Cloud native homelab, Part 2: Kubernetes + Ceph

In Part 1 of this series we went over the hardware you'll need to get started on your cloud native homelab, and hopefully I convinced you to choose Raspberry Pi boards and a cooling solution.

For the best performance we'll use SSDs both as the boot drive and for the storage cluster. This gives us a huge advantage over microSD cards in both speed and reliability.

In this example we'll use:

  • 2 Raspberry Pi 4B boards. I used an 8GB model for the control plane node and a 4GB model for a worker node.
  • 2 SATA SSDs with USB adapters (ASMedia chipset).
  • 1 microSD card, used once to update the bootloader on each Pi so it can boot from USB.
  • Cluster case with fans.

Here's what my homelab looks like. It also has a Raspberry Pi 2 that serves as a retro gaming console, but for this post we'll focus on the Kubernetes cluster.

[Photos: the Raspberry Pi cluster in its case]

Our cluster will run lightweight distributions of both Kubernetes (MicroK8s) and Ceph (MicroCeph), a unified storage service with block, file, and object interfaces.

I've documented how to set up a Raspberry Pi and SSD in an automated, cloud native way in my notes here:

Raspberry Pi

Follow those instructions to:

  1. Set up your Pi to boot from SSD.
  2. Get the OS image (Ubuntu for Raspberry Pi).
  3. Following the instructions in the setup section, flash the image to the SSD.
  4. Verify that the automatic setup worked correctly (a quick check is sketched right after this list). Do not continue without first fixing any errors found.
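
For step 4, the quickest sanity check is to confirm that the automated first-boot setup finished cleanly. This is a minimal sketch, assuming the automation runs through cloud-init (which Ubuntu server images use for first-boot configuration):

```bash
# Wait for the first-boot automation to finish and print its final status
cloud-init status --wait

# If it reports errors, inspect the first-boot log before going any further
sudo less /var/log/cloud-init-output.log
```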

Repeat the process for each Raspberry Pi you have. Once you're done, you'll have your Pi devices with MicroK8s installed, all software updated, and the relevant kernel modules added.
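
One module worth double-checking is rbd, which the kernel needs in order to mount Ceph block devices later on. A rough check, assuming Ubuntu on the Pi, where the module ships in the linux-modules-extra-raspi package:

```bash
# The rbd module is required for Ceph block storage; on Ubuntu for
# Raspberry Pi it comes from the linux-modules-extra-raspi package.
sudo apt install -y linux-modules-extra-raspi
sudo modprobe rbd
lsmod | grep rbd

# Confirm MicroK8s is installed and the node reports ready
microk8s status --wait-ready
```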

Now it's time to configure our MicroK8s and MicroCeph cluster. I documented the process in my notes here:

MicroK8s

Follow those instructions (sketched briefly after the list) to:

  1. Set up the MicroCeph cluster.
  2. Set up the MicroK8s cluster.
  3. Connect MicroK8s to the external MicroCeph cluster.
  4. (Optional) Enable the MicroK8s dashboard.
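
To give a feel for the kind of commands involved, here's a rough sketch; the notes have the full details. The hostnames, sizes, and tokens below are placeholders, and addon availability can vary between MicroK8s channels, so treat this as an outline rather than a copy-paste script:

```bash
# --- MicroCeph ---
# On the first node: bootstrap the Ceph cluster
sudo microceph cluster bootstrap

# Still on the first node: generate a join token for the second node
# ("pi-worker" is a placeholder hostname)
sudo microceph cluster add pi-worker

# On the second node: join using the token printed above
sudo microceph cluster join <token>

# On every node: add storage. Loop files carve virtual disks out of the
# SSD; 4G and 3 are example values (3 virtual disks of 4 GB each).
sudo microceph disk add loop,4G,3

# --- MicroK8s ---
# On the control plane node: print a join command for the worker
sudo microk8s add-node

# On the worker: run the join command printed above, e.g.
# microk8s join <control-plane-ip>:25000/<token> --worker

# --- Connect MicroK8s to the external MicroCeph cluster ---
# The rook-ceph addon ships with recent MicroK8s releases
sudo microk8s enable rook-ceph
sudo microk8s connect-external-ceph

# --- (Optional) Dashboard ---
sudo microk8s enable dashboard
microk8s dashboard-proxy
```

The connect step is what wires the rook-ceph operator inside MicroK8s to the external MicroCeph cluster, which should leave you with a Ceph-backed storage class for your workloads.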

🎉 You're done! 🍾

Here's what a Ceph cluster looks like with 2 nodes and 3 virtual disks on each node's SSD:

[Screenshot: Ceph cluster status]
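
To pull up the same view on your own cluster, the MicroCeph snap exposes the regular Ceph CLI. A minimal sketch, assuming the default microceph.ceph alias is available:

```bash
# High-level MicroCeph view: cluster members and the disks each contributes
sudo microceph status

# Standard Ceph status: health, monitors, OSDs, and pools
sudo microceph.ceph status

# Per-OSD layout across the nodes
sudo microceph.ceph osd tree
```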

Come back soon for Part 3, where we'll deploy some workloads to our cluster.
