Dickson Victor for AWS Community Builders

# Provisioning a Persistent EBS-backed Storage on Amazon EKS using Helm

Deploying stateful applications on Kubernetes can pose a lot of complexities. In this demo, we will deploy a PostgreSQL database to Amazon Elastic Kubernetes Service (EKS) and configure its persistence on Amazon Elastic Block Store (EBS). We will be using Helm, a Kubernetes package manager, to make this process more efficient.

## Prerequisites

First, ensure that the following utilities are installed and properly configured on your machine:

- AWS CLI
- eksctl
- Helm
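
You can quickly confirm that each tool is available on your PATH (the version numbers will differ on your machine):

```
aws --version
eksctl version
helm version
```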

## 1. Create an EKS cluster

You can use either the AWS Management Console or the eksctl utility to create your Kubernetes cluster; for convenience, we use eksctl.
Create a file "demo-cluster.yaml" and paste the following into it.



```
# demo-cluster.yaml
# A cluster with two managed nodegroups
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster
  region: us-west-1

managedNodeGroups:
  - name: managed-ng-1
    instanceType: t3.small
    minSize: 1
    maxSize: 2

  - name: managed-ng-2
    instanceType: t3.small
    minSize: 1
    maxSize: 2
```

The file defines a Kubernetes cluster named demo-cluster with two managed nodegroups. To apply it, run:



```
eksctl create cluster -f demo-cluster.yaml
```

After the cluster has finished provisioning, view the nodes with the following command:



```
kubectl get nodes
```

## 2. Create an IAM OIDC identity provider

- Determine whether you have an existing IAM OIDC provider for your cluster. Retrieve your cluster's OIDC provider ID and store it in a variable:

```
oidc_id=$(aws eks describe-cluster --name demo-cluster --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
```

- Determine whether an IAM OIDC provider with your cluster's ID is already in your account:

```
aws iam list-open-id-connect-providers | grep $oidc_id
```

- If the command above returns no output, create an IAM OIDC identity provider for your cluster with the following command:

```
eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve
```



## 3. Configure a Kubernetes service account to assume an IAM role

- Create an IAM role and associate it with a Kubernetes service account. You can use either eksctl or the AWS CLI; here we use the AWS CLI (an equivalent single eksctl command is sketched at the end of this step).

a. Create a Kubernetes service account. Copy and paste the following into your terminal:

```
cat >my-service-account.yaml <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: kube-system
EOF
kubectl apply -f my-service-account.yaml
```
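
You can confirm that the service account was created with:

```
kubectl get serviceaccount ebs-csi-controller-sa -n kube-system
```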

b. Store your AWS account ID in an environment variable with the following command:

```
account_id=$(aws sts get-caller-identity --query "Account" --output text)
```

c. Store the cluster's OIDC identity provider in an environment variable with the following command (make sure $AWS_REGION is set to your cluster's region, us-west-1 in this demo):

```
oidc_provider=$(aws eks describe-cluster --name demo-cluster --region $AWS_REGION --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
```

d. Set variables for the namespace and name of the service account:

```
export namespace=kube-system
export service_account=ebs-csi-controller-sa
```

e. Run the following command in your terminal to create a trust policy file for the IAM role:

```
cat >aws-ebs-csi-driver-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$account_id:oidc-provider/$oidc_provider"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:$namespace:$service_account"
        }
      }
    }
  ]
}
EOF
```

f. Create the role AmazonEKS_EBS_CSI_DriverRole, replacing my-role-description with a description for your role:

```
aws iam create-role --role-name AmazonEKS_EBS_CSI_DriverRole --assume-role-policy-document file://aws-ebs-csi-driver-trust-policy.json --description "my-role-description"
```

g. Attach the required AWS managed policy to the role with the following command.

```
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name AmazonEKS_EBS_CSI_DriverRole
```

h. Annotate your service account with the Amazon Resource Name (ARN) of the IAM role that you want the service account to assume.

```
kubectl annotate serviceaccount -n $namespace $service_account eks.amazonaws.com/role-arn=arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole
```
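
As an alternative to steps a through h, eksctl can create the service account, IAM role, trust policy, and annotation in a single command. The following is a sketch of that equivalent approach, assuming the same cluster, role, and service account names used above (add --override-existing-serviceaccounts if the service account from step a already exists):

```
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster demo-cluster \
  --role-name AmazonEKS_EBS_CSI_DriverRole \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve
```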



## 4. Adding the Amazon EBS CSI add-on

To improve security and reduce the amount of work, you can manage the Amazon EBS CSI driver as an Amazon EKS add-on. You can use eksctl, the AWS Management Console, or the AWS CLI to add the Amazon EBS CSI add-on to your cluster. To add it using eksctl, run the following command; it references the $account_id variable set earlier.

```
eksctl create addon --name aws-ebs-csi-driver --cluster demo-cluster --service-account-role-arn arn:aws:iam::$account_id:role/AmazonEKS_EBS_CSI_DriverRole --force
```
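
To confirm that the add-on was installed and the driver pods are running, you can check the add-on status and the controller pods (the label selector shown is the one the driver commonly uses; it may differ by driver version):

```
eksctl get addon --name aws-ebs-csi-driver --cluster demo-cluster
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
```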


## 5. Update the worker node roles

Attach the "AmazonEBSCSIDriverPolicy" policy to the IAM roles of the cluster's two worker nodegroups, and also to the cluster's ServiceRole, as sketched below.
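
One way to do this from the CLI is to look up the roles that eksctl created for the cluster and attach the policy to each. This is a sketch, assuming eksctl's default role naming (which includes the cluster name); <role-name> is a placeholder for each role name returned by the first command:

```
# List roles whose names contain the cluster name (eksctl-created roles usually do)
aws iam list-roles --query "Roles[?contains(RoleName, 'demo-cluster')].RoleName" --output text

# Attach the managed policy to each nodegroup role and the cluster ServiceRole,
# replacing <role-name> with each role name returned above
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name <role-name>
```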

## 6. Deploying a PostgreSQL database with Helm

Helm is a Kubernetes deployment tool for automating the creation, packaging, configuration, and deployment of applications and services to Kubernetes clusters. Kubernetes itself is a powerful container-orchestration system for application deployment.

## Define a storage class

You must define a storage class for your cluster to use, and you should mark one storage class as the default for your persistent volume claims.
To create a storage class for your Amazon EKS cluster, create a storage class manifest file. The following storage-class.yaml example defines a storage class named "aws-pg-sc" that uses the Amazon EBS gp2 volume type.

```
# storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
```

Use kubectl to create the storage class from the manifest file:

```
kubectl create -f storage-class.yaml
```

Run the following to view the available storage classes in your cluster:

```
kubectl get storageclass
```
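
Since the EBS CSI driver was installed as an add-on in step 4, you could alternatively point the storage class at the CSI provisioner directly. This is an optional, illustrative sketch rather than part of the demo; the parameter key names follow the EBS CSI driver's conventions:

```
# Alternative storage class targeting the EBS CSI provisioner (illustrative)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-pg-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp2
  csi.storage.k8s.io/fstype: ext4
volumeBindingMode: WaitForFirstConsumer
```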


## Helm chart for PostgreSQL

In this demo, we will leverage the [PostgreSQL Helm chart maintained by Bitnami](https://github.com/bitnami/charts/tree/main/bitnami/postgresql/#installing-the-chart). We will override some values from the chart's values.yaml so that it uses the storage class we provisioned earlier. Create a file "values-postgresdb.yaml" and paste the following into it.

```
# values-postgresdb.yaml
primary:
  persistence:
    storageClass: "aws-pg-sc"
auth:
  username: postgres
  password: demo-password
  database: demo_database
```


## Installing the Chart

To install the chart with the release name pgdb:

```
helm repo add my-repo https://charts.bitnami.com/bitnami
helm install pgdb --values values-postgresdb.yaml my-repo/postgresql
```
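
While the release comes up, you can check its status with Helm:

```
helm status pgdb
```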

After the database successfully deploys, check the PersistentVolume, PersistentVolumeClaim, and pod that were created with the following commands; the outputs should look similar to these:

```
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                            STORAGECLASS   REASON   AGE
pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            Delete           Bound    default/data-pgdb-postgresql-0   aws-pg-sc               87s
```

```
$ kubectl get pvc
NAME                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-pgdb-postgresql-0   Bound    pvc-0e4020a4-8d43-4292-b30f-f57bbc4414bb   8Gi        RWO            aws-pg-sc      6h44m
```

```
$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
pgdb-postgresql-0   1/1     Running   0          16m
```
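
To confirm that the database itself is reachable, you can open a psql session from a temporary client pod. This is a sketch that uses the credentials from values-postgresdb.yaml and the service name the chart generates for the pgdb release (pgdb-postgresql); the image tag is an assumption:

```
# Run a throwaway client pod and connect to the PostgreSQL service
kubectl run pgdb-client --rm -it --restart=Never \
  --image=bitnami/postgresql:latest \
  --env="PGPASSWORD=demo-password" -- \
  psql --host pgdb-postgresql -U postgres -d demo_database -p 5432
```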

You can also verify that the persistent storage was provisioned by navigating to the AWS Management Console >> EC2 >> Elastic Block Store >> Volumes. The screenshot below shows the volume provisioned in my case.

![provisioned-volume](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vop8kz9l84eznd17kkps.PNG)

## 7. Cleaning up
To clean up, delete the Kubernetes cluster we created earlier by running the following command:

```
eksctl delete cluster -f demo-cluster.yaml
```

If the above command doesn't fully delete the cluster (for example, because resources created for the pod are still attached), navigate to the CloudFormation console and manually delete each remaining CloudFormation stack.

![Undeleted-stack](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w2loojookd2rvm960c39.jpg)
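
To reduce the chance of leftover resources blocking deletion, you can remove the Helm release and its PersistentVolumeClaim before deleting the cluster; a sketch, using the release and PVC names shown earlier:

```
# Release the dynamically provisioned EBS volume before tearing down the cluster
helm uninstall pgdb
kubectl delete pvc data-pgdb-postgresql-0
```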

And that concludes the demo on provisioning persistent EBS-backed storage on Amazon EKS using Helm. Feel free to comment below with your feedback.
You can also watch the video demonstration on [YouTube](https://youtu.be/3SSdbvH5EVo).
