Ivan Porta

Posted on • Originally published at Medium

Streamlining Kubernetes Deployments with Helm Charts

The rise of Kubernetes-as-a-Service offerings has significantly simplified the setup and maintenance of Kubernetes clusters, leading an increasing number of enterprises to migrate their applications to Kubernetes. However, migrating complex applications remains a lot of work. Traditional deployment methods typically involve managing numerous YAML files, each representing a different component of an application, such as services, deployments, and volumes. This approach is error-prone, mainly because it may require separate manifest files for each environment. For instance, development environments usually run fewer replicas on less powerful machines than production environments to save costs, which leads to duplication and increases maintenance complexity. Additionally, manually handling application updates and rollbacks requires keeping track of the deployed version and becomes challenging when updates target different environments. This is where Helm comes in: it simplifies defining, deploying, and updating Kubernetes applications by packaging an application’s stack into a single, manageable unit called a chart.

In this article, we’ll take a comprehensive look at Helm Charts: the differences between private and community charts, how to manage dependencies on other charts, the chart folder structure, and finally a practical example that shows their effectiveness.

What are Helm Charts?

Developed by Deis in 2016, a company later acquired by Microsoft, Helm is a package manager for Kubernetes. In Helm, the ‘packages’ are referred to as ‘charts’; each contains all the files describing a set of Kubernetes resources (deployments, services, ingresses, etc.) related to a particular component or application stack. Using Helm charts offers several key advantages:

  • Standardized Deployment Processes: Helm allows for deploying all application components in a declarative manner using a single command. This minimizes the risk of errors and significantly streamlines the deployment process.
  • Simplified Management of Complexity: Helm charts abstract the complexity of configuring individual Kubernetes resources. They allow for customization through parameters without directly modifying resource files.
  • Reusability: You can easily and quickly deploy the same application stack in various environments or share it across different organizations with minimal changes.
  • Version Control and Rollbacks: Helm effectively tracks the versions of your deployments, enabling easy rollback to previous versions if needed.
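
The last point maps to concrete commands. As a sketch, assuming a release named training has already been upgraded at least once, a faulty upgrade can be undone like this:

```shell
# List the revisions Helm has recorded for the release
$ helm history training

# Roll the release back to revision 1 (the first deployment)
$ helm rollback training 1
```

helm rollback restores the chart version and values recorded for that revision, and the rollback itself is recorded as a new revision.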

Install Helm

Installing Helm is a straightforward process. You can install it via a package manager, which requires adding the Helm repository to your system’s package sources. On Debian-based systems:

$ curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
$ sudo apt-get update
$ sudo apt-get install helm

Alternatively, use the shell script provided by Helm’s development team, which detects the machine’s architecture and operating system and installs the latest Helm release accordingly.

$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
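
Either way, you can verify the installation by printing the client version:

```shell
$ helm version
```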

Anatomy of a Helm Chart

A Helm chart has a pre-defined folder structure and a set of files that are crucial for its proper functioning.

.
│   .helmignore
│   LICENSE
│   Chart.yaml
│   values.yaml
│   values.schema.json
├───charts/
├───crds/ 
└───templates/
    │   NOTES.txt
    │   _helpers.tpl
    └───tests/

Let’s describe each of them:

  • .helmignore: (Optional) It tells Helm to ignore specific files and directories when packing the Chart. It works like a .gitignore file in Git.
  • LICENSE: (Optional) This file contains the license for the chart.
  • Chart.yaml: This file contains the name, description, and version of the Chart.
  • values.yaml: This file holds the default configuration values for the chart. These values can be overridden by user-supplied values when the chart is installed or upgraded.
  • values.schema.json: (Optional) If defined, this JSON Schema file imposes a specific structure on the values.yaml file and validates user-supplied values against it.
  • crds/: (Optional) This directory contains the Custom Resource Definitions, which create the necessary custom resources before the rest of the components in the Helm chart are deployed.
  • templates/: This directory contains the YAML templates for the application’s Kubernetes components (ingress, services, deployments, and so on). The templates may reference values from values.yaml that are substituted during installation or upgrade of the chart.
  • templates/tests/: This directory contains the Kubernetes manifests of the resources in charge of testing the correctness of the Chart.
  • templates/_helpers.tpl: This file allows you to encapsulate complex logic or repetitive code in a single piece of code so that it’s easy to reuse throughout your Chart.
  • templates/NOTES.txt: This file’s content contains information about the Chart that is rendered in the command line output at the end of a Helm chart’s installation or upgrade process.
  • charts/: If dependencies are defined in the Chart.yaml file, this folder contains the charts (known as subcharts) that this chart depends on. These charts are downloaded during the installation or upgrade of the chart.
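
To make the anatomy concrete, here is a minimal Chart.yaml sketch (the name and versions are illustrative, not taken from a real chart):

```yaml
apiVersion: v2            # chart API version used by Helm 3
name: training
description: A Helm chart for the training application
type: application         # 'application' or 'library'
version: 0.1.0            # version of the chart itself
appVersion: "1.0"         # version of the packaged application
```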

Community and Custom Charts

One of the strengths of Helm is its vibrant community and the repository of charts created and maintained by this community. Public Helm charts were originally stored in the Helm Hub, but in 2021 the platform was replaced by Artifact Hub, a platform developed and maintained by the Cloud Native Computing Foundation (CNCF).


The platform’s UX is well designed: it clearly describes how to install and uninstall each chart, and provides a security scan report of its vulnerabilities. You can also inspect the schema, default values, and templates to verify that they fit your needs.


Developing public Helm charts allows developers outside the organization to improve them and sustains the community. However, there are scenarios where using public charts is not possible due to specific configurations, proprietary software, or complex requirements that cannot be addressed by existing charts. In these cases, you can host the Helm chart in a private Helm repository running on cloud services like Amazon S3, or use an artifact repository manager like JFrog Artifactory or GitHub Packages. Accessing the chart will then require authentication, and it will only be available to those with the appropriate permissions.
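
As a sketch of the private-repository workflow: recent Helm versions (3.8+) can also push charts to any OCI-compatible registry. The ACR name below is the one used later in this article, and the repository path is illustrative:

```shell
# Package the chart directory into a versioned .tgz archive
$ helm package ./training

# Authenticate against the OCI registry, then push the archive
$ helm registry login acrtrainingdev01.azurecr.io
$ helm push training-0.1.0.tgz oci://acrtrainingdev01.azurecr.io/helm
```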

Testing a Chart

Testing is a critical part of software development, and the same holds for developing Helm charts. Helm provides a dedicated mechanism for testing charts through the templates/tests directory. Tests are defined as Kubernetes resources that perform specific operations to verify that the chart works correctly. For example, a test for a web application might run a Pod that makes an HTTP request to the service created by the chart and checks that it responds correctly.

apiVersion: batch/v1
kind: Job
metadata:
  name: training-test
  annotations:
    "helm.sh/hook": test
spec:
  template:
    spec:
      containers:
        - name: curl
          image: curlimages/curl
          command: 
            - 'curl'
            - '-s'
            - 'http://{{ include "default.fullname" . }}-service:{{ .Values.service.port }}/'
      restartPolicy: Never
  backoffLimit: 1

Tests are triggered via the command helm test <RELEASE-NAME>.

$ helm test training
NAME: training
LAST DEPLOYED: Thu Dec 21 21:00:35 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE:     training-test
Last Started:   Thu Dec 21 21:13:53 2023
Last Completed: Thu Dec 21 21:13:58 2023
Phase:          Succeeded

Helper file

Helpers are powerful elements that increase the flexibility and dynamism of the templates by encapsulating repetitive and complex template code. These helpers are defined in the _helpers.tpl file and use the define directive of the Go templating language to organize functions and logic in a more structured and reusable manner. For example:

{{- define "default.labels" -}}
helm.sh/chart: {{ include "default.chart" . }}
{{ include "default.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

This snippet defines a helper named default.labels that outputs several Kubernetes labels. As you can see, it also accesses built-in objects, such as .Chart.AppVersion and .Release.Service, and uses an if condition to add a label only when a value is present. Once defined, helpers can be used in templates with the include directive. For example:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    {{- include "default.labels" . | nindent 4 }}
...
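
Before moving on, note that the rendered result of templates and helpers can be previewed locally, without a cluster, using helm template (the chart path below is illustrative):

```shell
# Render all templates to stdout, substituting values and helpers
$ helm template training ./training
```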

Demonstration

In this demonstration, I will create a Helm chart and deploy a React.js application to an Azure Kubernetes Service (AKS) cluster. The application’s source code is hosted at https://github.com/GTRekter/Training/application and has the following architecture:

[Architecture diagram of the application]

The original Kubernetes manifests used for deployment are as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: training-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: training-service
                port:
                  number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: training-service
spec:
  selector:
    app: training
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: training-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: training
  template:
    metadata:
      labels:
        app: training
    spec:
      containers:
        - name: training-container
          image: "acrtrainingdev01.azurecr.io/training:1.0"
          ports:
            - containerPort: 3000
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
            requests:
              cpu: 100m
              memory: 128Mi
          livenessProbe:
            httpGet:
              path: "/"
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
          env:
            - name: REACT_APP_AUTH0_DOMAIN
              valueFrom:
                secretKeyRef:
                  name: training-secret
                  key: REACT_APP_AUTH0_DOMAIN
            - name: REACT_APP_AUTH0_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: training-secret
                  key: REACT_APP_AUTH0_CLIENT_ID
---
apiVersion: v1
kind: Secret
metadata:
  name: training-secret
type: Opaque
data:
  REACT_APP_AUTH0_DOMAIN: [base64-encoded-value]
  REACT_APP_AUTH0_CLIENT_ID: [base64-encoded-value]

We will use these manifests as the starting point for the Helm chart templates, modifying them to use helpers and values defined in the chart to make the deployment more dynamic.

Build and Publish the Docker image

The first step involves building and publishing a container image with your application. Start by logging in to the ACR instance with the Docker CLI:

$ docker login acrtrainingdev01.azurecr.io

You can find the credentials in the Access keys section of your Azure Container Registry after enabling the Admin user.


Next, build your image while maintaining the naming convention of the repository in the tag:

$ docker build -t acrtrainingdev01.azurecr.io/training:1.0 .

After the build, push the image to the Azure Container Registry to make it available for Kubernetes deployments:

$ docker push acrtrainingdev01.azurecr.io/training:1.0

Create the Helm chart

Next, we will use the following command in the Helm CLI to create a new chart:

$ helm create training

This command generates a directory named training, along with the common directories and files typically used in a chart:

.
│   .helmignore
│   Chart.yaml
│   values.yaml
├───charts
└───templates
    │   deployment.yaml
    │   hpa.yaml
    │   ingress.yaml
    │   NOTES.txt
    │   service.yaml
    │   serviceaccount.yaml
    │   _helpers.tpl
    └───tests
            test-connection.yaml

Since we are going to use our existing manifests as a starting point, replace the files in the templates directory with the manifests listed above.

NGINX Chart Dependency

To demonstrate how Helm manages dependencies, we will use the NGINX Ingress controller chart. To do so, we must declare the dependency in Chart.yaml:

dependencies:
  - name: nginx-ingress
    version: "1.1.0"
    repository: "https://helm.nginx.com/stable"

Then, to update and download the dependencies, execute the following command:

$ helm dependency update

After this, the package containing the necessary templates, Chart.yaml file, CRDs, and more will be downloaded into the charts directory.

.
│   .helmignore
│   Chart.lock
│   Chart.yaml
│   values.yaml
├───charts/
│       nginx-ingress-1.1.0.tgz
└───templates/
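
Values for a dependency are set from the parent chart’s values.yaml, nested under the dependency’s name. As a sketch (the controller keys shown are illustrative; the exact keys are defined by the subchart’s own values.yaml):

```yaml
nginx-ingress:
  controller:
    replicaCount: 1
```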

Using Helpers and Values

Helm charts utilize values and helpers to convert static manifests into dynamic templates. Placeholders such as {{ include ... }} for helpers and {{ .Values... }} for values defined in the values.yaml file are replaced with their corresponding values during chart installation and upgrade. For example, in our Ingress resource, we can dynamically generate resource names using helpers, eliminating the need for manual adjustments with each deployment:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: training-ingress
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: training-service
                port:
                  number: 80

Let’s start by creating a helper for resource naming. In this tutorial, we will create a helper named default.name. Using the default function, the variable $name is set to .Release.Name when a release name is provided and falls back to the chart’s name (.Chart.Name) otherwise; the result is truncated to 63 characters, the maximum length of a Kubernetes resource name, and any trailing dash is trimmed.

{{- define "default.name" -}}
{{- $name := default .Chart.Name .Release.Name }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
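
The trunc 63 | trimSuffix "-" pipeline exists because Kubernetes resource names are capped at 63 characters and must not end with a dash. The same trimming can be approximated with plain shell (this is not Helm itself, and the over-long release name below is hypothetical):

```shell
# Hypothetical release name longer than 63 characters
name="training-sample-deployment-with-a-very-very-long-environment-suffix"

# Keep the first 63 characters, then drop a trailing dash if one remains
trimmed=$(printf '%s' "$name" | cut -c1-63 | sed 's/-$//')
echo "$trimmed"
echo "length: ${#trimmed}"
```
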

Next, we will create a helper for metadata, adding labels related to the chart’s version and the Helm release. For convenience, we also define a separate helper for the selector labels:

{{- define "default.selectorLabels" -}}
app.kubernetes.io/name: {{ include "default.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{- define "default.labels" -}}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{ include "default.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
{{- end }}

Then we update the Ingress to incorporate these helpers:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "default.name" . }}-ingress
  labels:
    {{- include "default.labels" . | nindent 4 }}
spec:
  rules:
    - http:
        paths:
          - path: "/"
            pathType: Prefix
            backend:
              service:
                name: training-service
                port:
                  number: 80

Now, let’s examine the values.yaml file, which holds user-supplied values that influence resource behavior. In this case, we group the ingress-related values under a nested ingress key:

ingress:
  className: nginx
  path: "/"
  pathType: Prefix

We then apply them in the template. For the NGINX Ingress, I added the host field, as it is a required value for this type of Ingress configuration.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "default.fullname" . }}-ingress
  labels:
    {{- include "default.labels" . | nindent 4 }}
spec:
  {{- if .Values.ingress.className }}
  ingressClassName: {{ .Values.ingress.className }}
  {{- end }}
  rules:
    - host: {{ .Values.ingress.host }}
      http:
        paths:
          - path: {{ .Values.ingress.path }}
            pathType: {{ .Values.ingress.pathType }}
            backend:
              service:
                name: {{ include "default.fullname" . }}-service
                port:
                  number: {{ .Values.service.port }}

Let’s repeat the process for all the files. The final values.yaml file will look like the following:

# Helpers
nameOverride: sample
fullnameOverride: training-sample

# Service configuration
service:
  port: 80
  targetPort: 3000

# Ingress configuration
ingress:
  path: "/"
  className: "nginx"
  host: "ivanporta.info"
  pathType: Prefix

# Secret configuration
auth0:
  clientId: ""
  domain: ""

# Deployment configuration
deployment:
  replicaCount: 2
  image:
    repository: "acrtrainingdev01.azurecr.io"
    name: "training"
    tag: "2.0"
    containerPort: 3000
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi
  livenessProbe:
    path: "/"
    port: 3000
    initialDelaySeconds: 30
    periodSeconds: 30

The final templates will look like the following:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "default.name" . }}-ingress
  labels:
    {{- include "default.labels" . | nindent 4 }}
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: {{ .Values.ingress.path }}
            pathType: {{ .Values.ingress.pathType }}
            backend:
              service:
                name: {{ include "default.name" . }}-service
                port:
                  number: {{ .Values.service.port }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "default.name" . }}-service
  labels:
    {{- include "default.labels" . | nindent 4 }}
spec:
  selector:
    {{- include "default.selectorLabels" . | nindent 4 }}
  ports:
    - protocol: TCP
      port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "default.name" . }}-deployment
  labels:
    {{- include "default.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.deployment.replicaCount }}
  selector:
    matchLabels:
      {{- include "default.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "default.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ include "default.name" . }}
          image: "{{ .Values.deployment.image.repository }}/{{ .Values.deployment.image.name }}:{{ .Values.deployment.image.tag }}"
          ports:
            - containerPort: {{ .Values.deployment.image.containerPort }}
          resources:
            limits:
              cpu: {{ .Values.deployment.image.resources.limits.cpu }}
              memory: {{ .Values.deployment.image.resources.limits.memory }}
            requests:
              cpu: {{ .Values.deployment.image.resources.requests.cpu }}
              memory: {{ .Values.deployment.image.resources.requests.memory }}
          livenessProbe:
            httpGet:
              path: {{ .Values.deployment.livenessProbe.path }}
              port: {{ .Values.deployment.livenessProbe.port }}
            initialDelaySeconds: {{ .Values.deployment.livenessProbe.initialDelaySeconds }}
            periodSeconds: {{ .Values.deployment.livenessProbe.periodSeconds }}
          env:
            - name: REACT_APP_AUTH0_DOMAIN
              valueFrom:
                secretKeyRef:
                  name: {{ include "default.name" . }}-secret
                  key: REACT_APP_AUTH0_DOMAIN
            - name: REACT_APP_AUTH0_CLIENT_ID
              valueFrom:
                secretKeyRef:
                  name: {{ include "default.name" . }}-secret
                  key: REACT_APP_AUTH0_CLIENT_ID
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "default.name" . }}-secret
type: Opaque
data:
  REACT_APP_AUTH0_DOMAIN: {{ .Values.auth0.domain | b64enc | quote }}
  REACT_APP_AUTH0_CLIENT_ID: {{ .Values.auth0.clientId | b64enc | quote }}
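
The b64enc function base64-encodes the plain-text values before they are stored in the Secret. You can reproduce the encoding with coreutils to sanity-check a rendered manifest (the Auth0 domain below is an example value, not one from this article):

```shell
# The value Helm's b64enc would render for a hypothetical Auth0 domain
printf '%s' "dev-example.eu.auth0.com" | base64
# → ZGV2LWV4YW1wbGUuZXUuYXV0aDAuY29t
```
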

Installation of the resources defined in the Helm chart

Once the Helm chart is ready, the next step is to deploy it to the target Kubernetes cluster. First, gather the credentials needed to interact with the Kubernetes API. In this demonstration we are using Azure Kubernetes Service (AKS), so the commands are as follows:

$ az account set --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
$ az aks get-credentials --resource-group rg-training-dev --name aks-training-01

With the credentials set, you can proceed to install the Chart using the Helm CLI command:

$ helm install training ./kubernetes/training

Here, ./kubernetes/training denotes the relative path to the directory containing the Helm Chart. After the installation, you can verify that the deployment went smoothly by checking its status:

$ helm status training
NAME: training
LAST DEPLOYED: Thu Dec 21 21:00:35 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE:     training-test
Last Started:   Thu Dec 21 21:13:53 2023
Last Completed: Thu Dec 21 21:13:58 2023
Phase:          Succeeded

Then use kubectl to check the resources in your Kubernetes cluster:

$ kubectl get all
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/training-nginx-ingress-controller-67967b6574-tfcvs   1/1     Running   0          3h56m
pod/training-sample-deployment-55486f6456-zkmxh          1/1     Running   0          3h38m
pod/training-sample-deployment-55486f6456-zq794          1/1     Running   0          3h39m

NAME                                        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes                          ClusterIP      10.0.0.1       <none>        443/TCP                      12h
service/training-nginx-ingress-controller   LoadBalancer   10.0.107.101   20.8.26.146   80:30294/TCP,443:30560/TCP   3h53m
service/training-sample-service             ClusterIP      10.0.25.87     <none>        80/TCP                       3h56m

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/training-nginx-ingress-controller   1/1     1            1           3h56m
deployment.apps/training-sample-deployment          2/2     2            2           3h56m

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/training-nginx-ingress-controller-67967b6574   1         1         1       3h56m
replicaset.apps/training-sample-deployment-55486f6456          2         2         2       3h39m
replicaset.apps/training-sample-deployment-6d58889f5c          0         0         0       3h46m
replicaset.apps/training-sample-deployment-b558cd9db           0         0         0       3h56m

We can now access the application by using its domain name.
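
If the domain does not resolve yet, the same request can be simulated by pinning the Host header to the ingress rule’s host and targeting the controller’s external IP shown in the kubectl output above:

```shell
$ curl -H "Host: ivanporta.info" http://20.8.26.146/
```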


Updating the Chart

Updating a Helm chart, including tasks like modifying templates, adding new resources, or adjusting configuration values (such as environment variables or replica scaling), can typically be done using a single command. This section demonstrates how to update an existing Helm Chart by adding a Horizontal Pod Autoscaler (HPA).

First, create an hpa.yaml file in the templates directory with the following content:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "default.fullname" . }}-hpa
spec:
  maxReplicas: {{ .Values.hpa.maxReplicas }} 
  minReplicas: {{ .Values.hpa.minReplicas }}
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "default.fullname" . }}-deployment
  targetCPUUtilizationPercentage: {{ .Values.hpa.targetCPUUtilizationPercentage }}

Next, add the related values to the values.yaml file:

hpa:
  maxReplicas: 10
  minReplicas: 1
  targetCPUUtilizationPercentage: 50

Finally, update and redeploy the Helm chart using the command:

$ helm upgrade training ./kubernetes/training
Release "training" has been upgraded. Happy Helming!
NAME: training
LAST DEPLOYED: Tue Dec 26 10:15:26 2023
NAMESPACE: default
STATUS: deployed
REVISION: 2

To verify, inspect the Kubernetes resources; you should see the new Horizontal Pod Autoscaler listed:

$ kubectl get hpa
NAME                  REFERENCE                               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
training-sample-hpa   Deployment/training-sample-deployment   1%/50%    1         10        2          77s
