Dealing with stateful sets in Kubernetes
Dealing with stateful sets in Kubernetes, particularly when scaling persistent volumes, presents several challenges. It's crucial to consider factors such as data maintenance, cost management, minimizing downtime, and establishing effective monitoring for the future.
Scaling Disk Size in a StatefulSet
Let's streamline the process of increasing the disk size of a Neo4j StatefulSet in an EKS cluster:
Note: The following steps will result in downtime, so ensure your business can accommodate this.
- Set (or confirm) allowVolumeExpansion: true on your StorageClass:
- Edit your StorageClass using the command:
kubectl edit storageclass gp2
- Add or confirm the presence of allowVolumeExpansion: true:
provisioner: kubernetes.io/aws-ebs
......
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
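If you prefer not to open an editor, the same flag can be set non-interactively (a minimal sketch, assuming your StorageClass is the gp2 one shown above):
kubectl patch storageclass gp2 -p '{"allowVolumeExpansion": true}'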
- Delete your StatefulSet (STS):
kubectl delete sts --cascade=orphan neo4j-cluster
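Because --cascade=orphan removes only the StatefulSet object, the Neo4j pods and their volumes keep running; you can verify this before moving on (assuming the release runs in the neo4j namespace used below):
kubectl get pods --namespace neo4j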
- Edit your Persistent Volume Claim (PVC):
kubectl edit pvc data-neo4j-cluster-0
- Modify spec.resources.requests.storage to the desired size.
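Inside the editor, the relevant part of the claim looks roughly like this (a sketch assuming a new target size of 50Gi, matching the values.yaml below):
spec:
  resources:
    requests:
      storage: 50Gi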
- Update your Helm Chart:
- Adjust your Helm chart values.yaml to reflect the changes:
volumes:
  data:
    mode: defaultStorageClass
    defaultStorageClass:
      accessModes:
        - ReadWriteOnce
      requests:
        storage: 50Gi
- Upgrade your Helm Chart:
helm upgrade --install neo4j-cluster neo4j/neo4j --namespace neo4j --values values.yaml --version v5.15.0
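Once the upgrade completes, it's worth confirming that the expansion actually went through; something along these lines works (assuming the neo4j namespace and PVC name used above):
kubectl get pvc --namespace neo4j
kubectl describe pvc data-neo4j-cluster-0 --namespace neo4j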
Automation
I always look for opportunities to automate, so you can take all of these steps and wrap them into a simple bash script that does the job for you:
#!/usr/bin/env bash
set -euo pipefail
# Delete the StatefulSet but keep the pods and volumes (--cascade=orphan)
kubectl delete sts neo4j-cluster --namespace neo4j --cascade=orphan
# Resize the claim to the new target size
kubectl patch pvc data-neo4j-cluster-0 --namespace neo4j -p '{"spec": {"resources": {"requests": {"storage": "50Gi"}}}}'
# Recreate the StatefulSet from the updated values.yaml
helm upgrade --install neo4j-cluster neo4j/neo4j --namespace neo4j --values values.yaml --version v5.15.0
kubectl get pvc --namespace neo4j
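If you also want the script to wait until the cluster is serving traffic again, a rollout check at the end is a reasonable addition (a sketch, assuming the same StatefulSet name and namespace):
kubectl rollout status statefulset/neo4j-cluster --namespace neo4j --timeout=10m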
For further insights or any questions, feel free to connect with me.