Richard Kovacs

Kubernetes volumes upside-down with Discoblocks - #2

This blog post is the second part of my series about Discoblocks. If you haven't read the previous episode, please read it before you continue.

Discoblocks is one of the open-source projects I'm working on, and our new pre-release build brings some cool features I would like to write about.

v0.0.5 (aka Ibiza Disco) has been released

Release notes: https://github.com/ondat/discoblocks/releases/tag/v0.0.5

  • WebAssembly support for CSI driver integration
  • Ondat CSI driver integration
  • Horizontal autoscaling of volumes

WebAssembly support for CSI driver integration

In the new build of Discoblocks we have replaced the in-tree CSI driver integrations with WASI modules. If a driver is missing for your use case, just implement a small interface, compile your driver to a WASI module, and mount it into the container (into a sub-directory under /drivers). Discoblocks starts using it once you enable the new driver in the configuration.

Here is a simple example:

package main

import (
    "fmt"
    "os"

    "github.com/valyala/fastjson"
)

// main is required to build the module; the exported functions below are the entry points Discoblocks calls.
func main() {}

// IsStorageClassValid receives the StorageClass as JSON in the STORAGE_CLASS_JSON
// environment variable and writes true or false to stdout.
//export IsStorageClassValid
func IsStorageClassValid() {
    json := []byte(os.Getenv("STORAGE_CLASS_JSON"))

    if !fastjson.Exists(json, "allowVolumeExpansion") || !fastjson.GetBool(json, "allowVolumeExpansion") {
        fmt.Fprint(os.Stderr, "only allowVolumeExpansion true is supported")
        fmt.Fprint(os.Stdout, false)
        return
    }

    fmt.Fprint(os.Stdout, true)
}

// GetPVCStub writes a PersistentVolumeClaim stub to stdout, filled from environment variables.
//export GetPVCStub
func GetPVCStub() {
    fmt.Fprintf(os.Stdout, `{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "%s",
        "namespace": "%s"
    },
    "spec": {
        "storageClassName": "%s"
    }
}`,
        os.Getenv("PVC_NAME"), os.Getenv("PVC_NAMESPACE"), os.Getenv("STORAGE_CLASS_NAME"))
}

//export GetCSIDriverNamespace
func GetCSIDriverNamespace() {
    fmt.Fprint(os.Stdout, "storageos")
}

//export GetCSIDriverPodLabels
func GetCSIDriverPodLabels() {
    fmt.Fprint(os.Stdout, `{ "app": "storageos", "app.kubernetes.io/component": "csi" }`)
}

// GetMountCommand writes the shell script the management job runs on the host
// to mount the freshly attached volume into the already running containers.
//export GetMountCommand
func GetMountCommand() {
    fmt.Fprint(os.Stdout, `DEV=$(chroot /host ls /var/lib/storageos/volumes/ -Atr | tail -1) &&
chroot /host nsenter --target 1 --mount mkdir -p /var/lib/kubelet/plugins/kubernetes.io/csi/pv/${PVC_NAME} &&
chroot /host nsenter --target 1 --mount mount /var/lib/storageos/volumes/${DEV} /var/lib/kubelet/plugins/kubernetes.io/csi/pv/${PVC_NAME} &&
DEV_MAJOR=$(chroot /host nsenter --target 1 --mount cat /proc/self/mountinfo | grep ${DEV} | awk '{print $3}'  | awk '{split($0,a,":"); print a[1]}') &&
DEV_MINOR=$(chroot /host nsenter --target 1 --mount cat /proc/self/mountinfo | grep ${DEV} | awk '{print $3}'  | awk '{split($0,a,":"); print a[2]}') &&
for CONTAINER_ID in ${CONTAINER_IDS}; do
    PID=$(docker inspect -f '{{.State.Pid}}' ${CONTAINER_ID} || crictl inspect --output go-template --template '{{.info.pid}}' ${CONTAINER_ID}) &&
    chroot /host nsenter --target ${PID} --mount mkdir -p ${DEV} ${MOUNT_POINT} &&
    chroot /host nsenter --target ${PID} --mount mknod ${DEV}/mount b ${DEV_MAJOR} ${DEV_MINOR} &&
    chroot /host nsenter --target ${PID} --mount mount ${DEV}/mount ${MOUNT_POINT}
done`)
}

// GetResizeCommand and WaitForVolumeAttachmentMeta are left empty in this example.
//export GetResizeCommand
func GetResizeCommand() {}

//export WaitForVolumeAttachmentMeta
func WaitForVolumeAttachmentMeta() {}

That's all you need to bring your own driver.
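
For reference, building and shipping such a module could look roughly like the sketch below. A few assumptions here: I use TinyGo (the exact target name may differ between TinyGo versions), and the ConfigMap name, file name, and namespace placeholder are made up for illustration; how you actually mount the file under /drivers in the Discoblocks container depends on your deployment.

# compile the driver to a WASI module (assuming TinyGo)
tinygo build -o my-driver.wasm -target=wasi main.go

# ship it to the cluster, e.g. as a ConfigMap you later mount under /drivers
kubectl create configmap my-csi-driver --from-file=my-driver.wasm -n <discoblocks-namespace>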

Ondat CSI driver integration

As you saw in the previous example, we now have a driver for Ondat (formerly StorageOS) next to the AWS EBS CSI support. This driver is for demo purposes only (please don't use it in production), but it is a great choice for testing the system.
All you need to do is execute the following commands:

kind create cluster --image=storageos/kind-node:v1.24.2
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
kubectl storageos install --include-etcd --etcd-replicas 1 --stos-version v2.9.0-beta.1
kubectl apply -f https://github.com/ondat/discoblocks/releases/download/v0.0.5/discoblocks_v0.0.5.yaml
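
Before moving on, you can check that cert-manager and the StorageOS pods came up (the storageos namespace is the one the demo driver reports in GetCSIDriverNamespace):

kubectl get pods -n cert-manager
kubectl get pods -n storageos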

Once provisioning has finished, you can create your first workload:

kubectl apply -f https://github.com/ondat/discoblocks/raw/7b72c8d87aa5d87a801e1b2e11fa98389f70f485/config/samples/discoblocks.ondat.io_v1_diskconfig-csi.storageos.com.yaml
kubectl apply -f https://github.com/ondat/discoblocks/releases/download/v0.0.5/core_v1_pod.yaml
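
Before testing, you can optionally check that the sample pod is running and that Discoblocks has created the first PVC for it (the exact resource names depend on the sample manifests):

kubectl get pods
kubectl get pvc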

Test the end result:

kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- df -h | grep discoblocks/sample

973.4M 24.0K 906.2M 0% /media/discoblocks/sample-0

It is time to register your cluster at our Portal to enjoy your FREE TIER!

Horizontal autoscaling of volumes

One of the most exciting features is horizontal autoscaling. In the previous version, only vertical autoscaling was implemented: Discoblocks actively monitors the created volumes, and once a volume hits the threshold, Discoblocks increases its size. In the new version, if the volume cannot be scaled vertically any further (every disk has a maximum capacity), Discoblocks creates a new disk and mounts it into the running pod.

Yes, you read that right.

Discoblocks ...

  1. creates a new PersistentVolumeClaim
  2. sets the owner of the new PVC to patient zero (the first PVC created for the pod), which is handy when you have to delete PVCs
  3. creates a VolumeAttachment to bind the volume to the target node
  4. spins up a management job to format and mount the volume (the weird GetMountCommand in the driver)
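
If you want to watch these steps happen while you generate the data below, plain kubectl is enough (run each watch in its own terminal):

# watch the PVCs Discoblocks creates for the pod
kubectl get pvc --watch

# watch the corresponding VolumeAttachments
kubectl get volumeattachments --watch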

Generate data:

kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-0/data count=1000000
sleep 30
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-0/data count=2000000
sleep 60
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-1/data count=1000000
sleep 30
kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- dd if=/dev/zero of=/media/discoblocks/sample-1/data count=2000000
sleep 60

Test the end result:

If 🤞 everything has worked perfectly, you should see all 3 volumes mounted into the pod. 🎉

kubectl exec $(kubectl get po --no-headers | tail -1 | awk '{print $1}') -- df -h | grep discoblocks/sample

1.9G 976.6M 896.1M 52% /media/discoblocks/sample-0
1.9G 976.6M 896.1M 52% /media/discoblocks/sample-1
973.4M 24.0K 906.2M 0% /media/discoblocks/sample-2

Please breathe, slowly in, ..., slowly out through the nose, and don't hesitate to give it a try :D

I'll let you figure out how awesome these features are. Please feel free to share your ideas, join the development, or simply enjoy the product.
