
Nico Meisenzahl

Originally published at Medium

Azure Kubernetes Service — Next level persistent storage with Azure Disk CSI driver

When talking about persistent storage in Kubernetes, Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are our tools of choice. We have been able to use them with Azure Disks and Azure Files for a long time now. But now it is time to bring them to the next level!

This post mainly focuses on Azure Disks (be sure to review the links below for more details on Azure Files).

What’s new?

With the Container Storage Interface (CSI) driver for Azure Disk, we now get some great features that help us run our stateful services much more smoothly on Azure Kubernetes Service:

  • Zone Redundant Storage (ZRS) Azure Disks
  • ReadWriteMany with Azure Disks
  • Kubernetes-native volume snapshots
  • Volume resizing
  • Volume cloning

Azure Disk Container Storage Interface driver

Since Kubernetes 1.21 (on Azure Kubernetes Service), we can use the new Container Storage Interface implementation, which is available for Azure Disks and Azure Files.

Wait, what is Container Storage Interface?

The Container Storage Interface (CSI) is an abstraction layer that allows third-party storage providers to write plugins exposing new storage systems in Kubernetes. CSI has been generally available for some time now and replaces the previous in-tree volume plugin system. With CSI, third-party providers can develop and maintain their plugins outside of the Kubernetes project lifecycle, which gives them more flexibility. CSI is the future for any storage integration and is already used in many scenarios.
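
CSI drivers are registered in the cluster as CSIDriver API objects, so you can list them with kubectl. On an AKS cluster with the feature enabled, disk.csi.azure.com and file.csi.azure.com should show up:

kubectl get csidrivers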

How to use the Azure Disk CSI driver with Azure Kubernetes Service?

You will need at least Kubernetes 1.21 to be able to use the Azure Disk Container Storage Interface driver. Besides this, you will also need to register some Azure preview feature flags to be able to use all of the above-mentioned features:

# Register the required preview features
az feature register --name SsdZrsManagedDisks --namespace Microsoft.Compute
az feature register --name EnableAzureDiskFileCSIDriver --namespace Microsoft.ContainerService
# Wait until both features show as "Registered"
az feature list -o table --query "[?contains(name, 'Microsoft.Compute/SsdZrsManagedDisks')].{Name:name,State:properties.state}"
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableAzureDiskFileCSIDriver')].{Name:name,State:properties.state}"
# Refresh the resource provider registrations
az provider register --namespace Microsoft.Compute
az provider register --namespace Microsoft.ContainerService

You will then need to spin up an Azure Kubernetes Service cluster with the CSI driver feature enabled:

az group create -g aks-1-22 -l westeurope
az aks create -n aks-1-22 \
  -g aks-1-22 \
  -l westeurope \
  -c 3 \
  -s Standard_B2ms \
  --zones 1 2 3 \
  --aks-custom-headers EnableAzureDiskFileCSIDriver=true
az aks get-credentials -n aks-1-22 -g aks-1-22
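
Once the cluster is up, you can do a quick sanity check that the CSI driver Pods are running (the exact Pod names may vary by AKS version):

kubectl get pods -n kube-system | grep csi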

Furthermore, you will need to create a StorageClass and a VolumeSnapshotClass to use all of the mentioned features:

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuredisk-csi-premium-zrs
provisioner: disk.csi.azure.com
parameters:
  skuname: Premium_ZRS
  maxShares: "3"
  cachingMode: None
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: azuredisk-csi-vsc
driver: disk.csi.azure.com
deletionPolicy: Delete
parameters:
  incremental: "true"
EOF
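
You can quickly confirm that both objects were created (the names below match the manifests above):

kubectl get storageclass azuredisk-csi-premium-zrs
kubectl get volumesnapshotclass azuredisk-csi-vsc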

The StorageClass is configured to use the new Premium_ZRS Azure Disks, which allow mounting the disk to multiple Pods across multiple Availability Zones. The maxShares parameter defines how many nodes may attach the shared disk at the same time.

The VolumeSnapshotClass allows us to use the Kubernetes-native volume snapshot feature.

We are now ready to create a PersistentVolumeClaim based on the above StorageClass:

cat <<EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-azuredisk-csi-premium-zrs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 256Gi
  volumeMode: Block
  storageClassName: azuredisk-csi-premium-zrs
EOF
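
Since the StorageClass uses volumeBindingMode: Immediate, the disk is provisioned right away, and you can check that the claim is bound:

kubectl get pvc pvc-azuredisk-csi-premium-zrs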

Kubernetes-native volume snapshots

With the Azure Disk CSI driver, we are able to create volume snapshots. You will find more details on the volume snapshot feature itself here. Use the following code snippet to create a snapshot of the above-created PVC:

cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: azuredisk-volume-snapshot
spec:
  volumeSnapshotClassName: azuredisk-csi-vsc
  source:
    persistentVolumeClaimName: pvc-azuredisk-csi-premium-zrs
EOF

You can now verify the snapshot with the following kubectl command:

kubectl describe volumesnapshot azuredisk-volume-snapshot
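
If you only need to know whether the snapshot is ready to use, you can query its status field directly instead of reading the full describe output:

kubectl get volumesnapshot azuredisk-volume-snapshot -o jsonpath='{.status.readyToUse}'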

Volume cloning

Volume snapshots are great on their own, but we can also use them to clone an existing PVC, for example, to attach it to another workload. This can be done with the following manifest:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-azuredisk-snapshot-restored
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azuredisk-csi-premium-zrs
  resources:
    requests:
      storage: 256Gi
  dataSource:
    name: azuredisk-volume-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF

Once again, you can review it with kubectl:

kubectl get pvc,pv

Volume resizing

The CSI driver also supports resizing an existing PVC. Some important notes on this one:

  • The PVC needs to be unmounted (not attached to a running Pod) for resizing (see the check below)
  • The PV is resized right away; the PVC only reflects the new size once it has been mounted again
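
Before patching the claim, you can confirm that no Pod is currently using it (the cloned PVC from above has not been mounted yet, so it is safe to resize):

kubectl describe pvc pvc-azuredisk-snapshot-restored | grep "Used By"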

You can resize the above-cloned PVC via kubectl by executing the following command:

kubectl patch pvc pvc-azuredisk-snapshot-restored --type merge --patch '{"spec": {"resources": {"requests": {"storage": "512Gi"}}}}'
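
Afterwards, you can verify the new capacity on the PersistentVolume (the PVC itself will only show 512Gi once it has been mounted again):

kubectl get pv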

ReadWriteMany across Availability Zones

As already mentioned above, we are now also able to mount our PVC to multiple workloads via ReadWriteMany. With the new ZRS Azure Disks, we can even use them across Availability Zones.

Let’s test these new features with the below Deployment manifest:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeDevices:
            - name: azuredisk
              devicePath: /dev/sdx
      volumes:
        - name: azuredisk
          persistentVolumeClaim:
            claimName: pvc-azuredisk-csi-premium-zrs
EOF

As you can see above, we are spinning up a Deployment with three replicas spread across our Kubernetes nodes, all mounting the same Azure Disk with read-write access. You can review it by executing the following kubectl command:

kubectl get pods,volumeattachments -owide
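
To confirm that the replicas really landed in different Availability Zones, you can also print the zone of each node via the well-known topology.kubernetes.io/zone label:

kubectl get nodes -L topology.kubernetes.io/zone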

That said (as you may have already noticed), sharing a disk across multiple Pods only works with raw block volumes (volumeMode: Block) so far.

More details

Be sure to review the following links for more insights and further documentation:
