For a lot of people, saying 2020 was a rough year would be an understatement. I was fortunate enough to come across a video and some mentorship that motivated me to try to come out of lockdown better than I was before.
I decided it was time to pick up a new skill. At the time, there was a goal at work to transition our infrastructure from AWS Elastic Container Service (ECS) to an internally managed Kubernetes-based platform. Having only dabbled in Kubernetes, I wanted to know more, so I set out to build a home lab using Kubernetes (k3s). It took me about six months to reach a point where I was comfortable with what I had built.
I started the process by thinking about what I wanted to use the home lab for. I was originally inspired by a blog post I had read about setting up a Raspberry Pi home lab with DNS, OpenLDAP, and NextCloud. Similarly, I knew I wanted a centralized system for identity management and a place to store files. As a developer who sometimes works on personal projects, I also wanted a local network playground for any personal services I build.
I spent about two months poring over Helm charts and experimenting with Kubernetes on Docker Desktop on my Windows laptop to get as close to an ideal configuration as I could. When I ran out of resources, I eventually created a three-node cluster using embedded devices.
I ultimately came up with the following design:
The architecture uses k3s as a lightweight alternative to full Kubernetes. The decision to leverage low-cost embedded devices imposed an inherent limit on the resources that would be available, so a Kubernetes distribution that used those resources efficiently was paramount. Another factor in selecting k3s was that the distribution ships with useful components out of the box, such as a bundled ingress controller (Traefik).
nfs-client-provisioner implements portable network storage across the Kubernetes cluster using the Network File System (NFS) protocol. NFS allows folders to be mounted over the network as file shares. nfs-client-provisioner takes things a step further by auto-provisioning NFS volumes whenever a Kubernetes Persistent Volume Claim (PVC) is declared; a PVC is backed by NFS whenever it uses the nfs storage class.
The idea here is that I wanted the containers running in the Kubernetes Pods to write directly to a USB drive mounted on the Kubernetes master node. nfs-client-provisioner creates a sub-folder in the NFS root with the name of the PVC. When a PVC is deleted, its folder is archived (renamed with an archived- prefix) rather than removed, so the data can still be recovered.
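To make this concrete, here is a minimal sketch of a claim against the provisioner. The storage class name and claim name are assumptions (the class name depends on the Helm chart's values; "nfs-client" is a common default):

```yaml
# Example PVC; "nfs-client" is the storage class name assumed from the chart defaults
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany      # NFS supports many pods mounting the same share
  resources:
    requests:
      storage: 1Gi
```

When this claim is created, the provisioner carves out a sub-folder named after the PVC under the NFS export and binds a PersistentVolume to it automatically.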
By default, k3s uses Traefik as the ingress controller for the Kubernetes cluster. Having used Traefik before, I found that setting up a traffic intercept with a project like OAuth2 Proxy was not straightforward. I chose NGINX instead for its widespread support, its ease of configuration through annotations, its ability to proxy TCP and UDP services, and its support for setting up an authentication proxy via Kubernetes annotations on Ingress objects.
OAuth2 Proxy provides an authentication and authorization intercept in front of services with pages you want to protect. It leverages the OAuth2 protocol to delegate authentication, and it supports custom configuration for identity management services such as Keycloak.
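The NGINX annotations that wire an Ingress to OAuth2 Proxy look roughly like the sketch below. The hostnames and service names are placeholders for this example, not values from my cluster:

```yaml
# Hypothetical protected service; hostnames are illustrative
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  annotations:
    # NGINX subrequest auth: every request is first checked against oauth2-proxy
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.home/oauth2/auth"
    # Unauthenticated users are redirected here to sign in via Keycloak
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.home/oauth2/start?rd=$escaped_request_uri"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.home
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app
                port:
                  number: 80
```

The nice part of this pattern is that the application itself needs no knowledge of OAuth2; the intercept happens entirely at the ingress layer.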
Having experienced the pain of identity management in past projects (I've literally published five web apps that became some iteration of a user-profile application), I found Keycloak particularly useful for managing users and federating accounts. Keycloak is an open-source enterprise service for identity, authentication, authorization, and account federation; it is part of the JBoss project and backed by Red Hat. Since I deployed it with the Keycloak Helm chart, the service runs with a Postgres backend.
Nobody likes a data breach. One of the most common ways sensitive application secrets get exposed is through environment variables or data templates. HashiCorp produces an awesome service called Vault which lets you manage and protect application secrets. Vault is easier to adapt to Kubernetes when using the Banzai Cloud Vault Operator, which enables a multi-tenant Vault service as well as injection of secrets into Kubernetes Pods using annotations and a mutating webhook.
NextCloud and Keycloak use the Vault webhook to inject a randomly generated database password so that each service can connect to its backend database. The OAuth2 Proxy containers use it to inject their Keycloak client IDs and client secrets.
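With the Banzai Cloud webhook, injection is driven by a special value syntax on ordinary environment variables. The Vault address and secret path below are placeholders for illustration:

```yaml
# Sketch of a pod template using the Banzai Cloud Vault webhook.
# The webhook mutates the pod and resolves "vault:" values at start-up,
# so the real secret never appears in the manifest or in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: nextcloud-example
  annotations:
    vault.security.banzaicloud.io/vault-addr: "https://vault.default:8200"
spec:
  containers:
    - name: app
      image: nextcloud:stable        # illustrative image tag
      env:
        - name: MYSQL_PASSWORD
          # "secret/data/nextcloud/db" is a hypothetical KV path
          value: "vault:secret/data/nextcloud/db#password"
```

The `vault:` prefix tells the webhook which environment variables to resolve; everything else passes through untouched.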
Along similar lines to the application security discussed in the previous section, insecure connections are another common way to leak sensitive data. In Kubernetes, certificates can be generated and managed dynamically using the cert-manager service.
The home cluster uses a self-signed ECDSA certificate as the root certificate. cert-manager dynamically issues certificates from that root whenever a pod or ingress requests one at creation time.
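The bootstrap pattern for a self-signed root with cert-manager is a short chain of resources; names here are illustrative, but the shape follows cert-manager's standard CA bootstrapping:

```yaml
# 1. An issuer that can sign anything with a throwaway self-signed cert
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
---
# 2. The ECDSA root certificate, signed by the issuer above
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: home-root-ca          # hypothetical CN
  secretName: root-ca
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
---
# 3. A CA issuer backed by the root; ingresses request certs from this one
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: home-ca
spec:
  ca:
    secretName: root-ca
```

Once the root's public certificate is trusted on my devices, every dynamically issued leaf certificate validates without browser warnings.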
The first level of monitoring in the home cluster happens inside the service mesh. A service mesh proxies traffic between services inside the Kubernetes cluster to provide mTLS security, monitoring, and additional reliability. While Istio is the most popular Kubernetes service mesh, I selected Linkerd for its ease of installation and ease of use.
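Part of what makes Linkerd easy to use is that opting a workload into the mesh is a single annotation; for example, annotating a namespace meshes everything inside it:

```yaml
# Any pod created in this namespace gets the Linkerd sidecar proxy injected
apiVersion: v1
kind: Namespace
metadata:
  name: apps                        # hypothetical namespace
  annotations:
    linkerd.io/inject: enabled
```

From that point on, pod-to-pod traffic in the namespace is automatically wrapped in mTLS and shows up in Linkerd's live metrics.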
Hand-in-hand with the service mesh is the Logging Operator, which provides real-time log streaming using FluentD and Fluent Bit to ship log events to a third-party service (Grafana Loki, ELK, etc.). Fluent Bit agents capture the container logs and forward them to FluentD, which routes the events to the configured destination service.
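Routing in the Logging Operator is expressed with Flow and Output custom resources. A minimal sketch shipping one app's logs to Loki might look like this (the label selector and Loki URL are assumptions for the example):

```yaml
# Where logs go: a Loki endpoint (URL is illustrative)
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: loki
spec:
  loki:
    url: http://loki:3100
---
# Which logs go there: pods matching a label selector
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: app-logs
spec:
  match:
    - select:
        labels:
          app: my-app               # hypothetical app label
  localOutputRefs:
    - loki
```

Because Flows and Outputs are namespaced, each tenant of the cluster can declare its own log routing without touching the shared Fluent infrastructure.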
Due to the resource constraints of running on embedded devices, hosting my personal projects required special consideration: I wanted as little resource overhead as possible. In the cloud world, the term "serverless" describes platforms where developers can focus on the software they write without having to consciously consider the infrastructure it runs on. Services are composed of multiple single-purpose functions which execute and terminate on demand.
Similar functionality exists for Kubernetes clusters. I originally investigated Kubeless as a platform for running serverless applications, but ultimately decided on OpenFaaS due to its availability and community support.
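One thing I like about OpenFaaS on Kubernetes (via faas-netes) is that functions can be declared as custom resources alongside everything else in the cluster. A minimal sketch, with a hypothetical function name and image:

```yaml
# Hypothetical OpenFaaS function; faas-netes reconciles this into a
# Deployment and Service in the openfaas-fn namespace
apiVersion: openfaas.com/v1
kind: Function
metadata:
  name: hello
  namespace: openfaas-fn
spec:
  name: hello
  image: registry.example.home/hello:latest   # illustrative image reference
```

This keeps function definitions in the same Git-tracked manifests as the rest of the cluster configuration.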
Though not in current use, I wanted to be able to index data in my personal projects so that it could be searched. I've worked with Elasticsearch before and it is my preferred index. Fortunately, Elastic also provides Elastic Cloud on Kubernetes (ECK).
ECK enables the deployment of a multi-tenant Elastic environment that can be fully expressed using Kubernetes Custom Resource Definitions (CRDs). ECK is fairly easy to set up and can pass any configuration option available for the Elastic services (Elasticsearch, Kibana, Beats) directly to the Kubernetes pods.
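As an illustration of how compact an ECK deployment is, here is a single-node cluster sketch based on ECK's quickstart shape (the name, version, and memory-map setting are example choices for a small lab node):

```yaml
# Minimal single-node Elasticsearch cluster managed by the ECK operator
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: home
spec:
  version: 7.17.0                  # illustrative version
  nodeSets:
    - name: default
      count: 1
      config:
        # avoids requiring vm.max_map_count tuning on a small host
        node.store.allow_mmap: false
```

Anything under `config:` is passed straight through to `elasticsearch.yml` in the pods, which is what makes the CRD approach so flexible.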
I published the cluster set-up as an Ansible playbook on GitHub.
- NFS client provisioner - https://artifacthub.io/packages/helm/rimusz/nfs-client-provisioner
- Keycloak Helm Chart - https://github.com/codecentric/helm-charts/tree/master/charts/keycloak
- NextCloud Helm Chart - https://nextcloud.github.io/helm/
- Vault Operator Helm Chart - https://github.com/banzaicloud/bank-vaults
- Logging Operator Helm Chart - https://github.com/banzaicloud/logging-operator
- MariaDB Helm Chart - https://github.com/bitnami/charts/tree/master/bitnami/mariadb
- Postgres Helm Chart - https://github.com/bitnami/charts/tree/master/bitnami/postgresql
- ECK User Guide - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html
- OpenFaaS Helm Chart - https://github.com/openfaas/faas-netes
- Oauth2-proxy Helm chart - https://artifacthub.io/packages/helm/oauth2-proxy/oauth2-proxy
- Oauth2-proxy - https://oauth2-proxy.github.io/oauth2-proxy/
- cert-manager - https://artifacthub.io/packages/helm/cert-manager/cert-manager
- Kubeless - https://kubeless.io
- OwnCloud Helm Chart - https://github.com/bitnami/charts/tree/master/bitnami/owncloud
- k3sup documentation - https://github.com/alexellis/k3sup
- ingress-nginx documentation - https://kubernetes.github.io/ingress-nginx/deploy/#using-helm
So there it is! I've finally got a reliable application host set up using clustered container orchestration. Let me know if there are any questions, or if there's a particular area where I could go into further detail.