Javier Sepúlveda

Scaling an AWS EKS with Karpenter using Helm Provider with Terraform. Kubernetes Series - Episode 4

Cloud people!

In the last episode of this series, we covered the steps to configure Velero with Helm charts in Kubernetes to back up or restore all objects in the cluster, or only a filtered subset of them.

In this episode, the focus is on deploying Karpenter within Kubernetes to scale the cluster to meet demand.

Requirements

Let's see how we can do this with Terraform, using the new Karpenter submodule of the EKS module to set everything up.

Karpenter is an open-source tool developed by AWS Labs that solves many challenges a traditional autoscaler does not. AWS EKS supports two products for auto scaling: Cluster Autoscaler and Karpenter. Check this link for more information.

The focus is on adding Karpenter to the EKS cluster, assuming the cluster itself is already deployed.

All the Terraform code is available in the repository, in the episode4 branch.

GitHub: segoja7 / EKS (Deployments for EKS)

Step 1.

First, we need to add the providers used to connect to the EKS cluster and create resources inside it: helm, kubernetes, and kubectl. The kubectl provider is used to deploy manifests and to run the tests.

terraform {
  required_version = ">= 1.4.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.9"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.20"
    }
    kubectl = {
      source  = "alekc/kubectl"
      version = ">= 2.0.2"
    }
  }
}

provider "aws" {
  region  = local.region
  profile = local.profile
}


provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--profile", local.profile]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--profile", local.profile]
    }
  }
}

provider "kubectl" {
  apply_retry_count      = 5
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  load_config_file       = false

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--region", "us-east-1", "--profile", local.profile]
  }
}

Step 2.

It is necessary to add tags to the subnets and security groups so that Karpenter can discover them; these tags are used later by the node template. First, the tag on the private subnets:

  private_subnet_tags = {
    "karpenter.sh/discovery"              = local.name
  }
And the tag on the security group that Karpenter should use:
  tags = merge(local.tags, {
    # NOTE - if creating multiple security groups with this module, only tag the
    # security group that Karpenter should utilize with the following tag
    # (i.e. - at most, only one security group should have this tag in your account)
    "karpenter.sh/discovery" = "${local.name}"
  })
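For context, here is a minimal sketch of where these two fragments live, assuming the VPC and the cluster are created with the terraform-aws-modules/vpc and terraform-aws-modules/eks modules as in the earlier episodes (argument placement, in particular node_security_group_tags, follows the upstream example and may differ slightly from the repository):

# Illustrative placement only; other arguments from earlier episodes omitted.
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  # ...

  # Karpenter discovers the subnets for new nodes through this tag.
  private_subnet_tags = {
    "karpenter.sh/discovery" = local.name
  }
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"
  # ...

  # Karpenter discovers the node security group through this tag.
  node_security_group_tags = merge(local.tags, {
    "karpenter.sh/discovery" = local.name
  })
}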

Step 3.

This submodule creates everything Karpenter needs on the AWS side (IRSA role, instance profile, and interruption queue).
This scenario is based on this example from the Terraform Registry.

module "karpenter" {
  source  = "terraform-aws-modules/eks/aws//modules/karpenter"
  version = "19.20.0"

  cluster_name                    = module.eks.cluster_name
  irsa_oidc_provider_arn          = module.eks.oidc_provider_arn
  irsa_namespace_service_accounts = ["karpenter:karpenter"]

  create_iam_role      = false
  iam_role_arn         = module.eks.eks_managed_node_groups["cloud-people"].iam_role_arn
  irsa_use_name_prefix = false

  tags = local.tags
}
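Optionally, you can expose what the submodule created with a few outputs; the attribute names below (irsa_arn, instance_profile_name, queue_name) are the same ones consumed by the Helm release in the next step:

# Optional: inspect the values created by the Karpenter submodule after apply.
output "karpenter_irsa_arn" {
  value = module.karpenter.irsa_arn
}

output "karpenter_instance_profile_name" {
  value = module.karpenter.instance_profile_name
}

output "karpenter_interruption_queue_name" {
  value = module.karpenter.queue_name
}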

(Image: Karpenter IRSA role created by the submodule)

Step 4.

In this step, we install Karpenter using a generic helm_release resource.

resource "helm_release" "karpenter" {
  namespace        = "karpenter"
  create_namespace = true

  name                = "karpenter"
  repository          = "oci://public.ecr.aws/karpenter"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "v0.31.3"

  set {
    name  = "settings.aws.clusterName"
    value = module.eks.cluster_name
  }

  set {
    name  = "settings.aws.clusterEndpoint"
    value = module.eks.cluster_endpoint
  }

  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.karpenter.irsa_arn
  }

  set {
    name  = "settings.aws.defaultInstanceProfile"
    value = module.karpenter.instance_profile_name
  }

  set {
    name  = "settings.aws.interruptionQueueName"
    value = module.karpenter.queue_name
  }
}
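The release above references data.aws_ecrpublic_authorization_token.token, which is not shown in the snippet. It authenticates against the public ECR registry that hosts the Karpenter chart, and the token can only be issued in us-east-1. A minimal sketch following the registry example (the virginia alias is illustrative; if your default region is already us-east-1 the extra provider is not strictly needed):

# Public ECR tokens are only issued in us-east-1, so use an aliased provider.
provider "aws" {
  alias   = "virginia"
  region  = "us-east-1"
  profile = local.profile
}

data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.virginia
}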

Step 5.

With Karpenter installed, all that is left is to test the autoscaling, but first it is necessary to add a custom resource (a Provisioner CRD); check this link for more information.

With this CRD it is possible to add various constraints based on taints, availability zones, instance types, processor architecture, and more.

resource "kubectl_manifest" "karpenter_provisioner" {
  yaml_body = <<-YAML
    apiVersion: karpenter.sh/v1alpha5
    kind: Provisioner
    metadata:
      name: default
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"] #"spot"
        - key: "node.kubernetes.io/instance-type"
          operator: In
          values: ["c5.large","c5a.large", "c5ad.large", "c5d.large", "c6i.large", "t2.medium", "t3.medium", "t3a.medium"]
      limits:
        resources:
          cpu: 1000
      providerRef:
        name: default
      ttlSecondsAfterEmpty: 30
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}

Note: when a Provisioner does not specify the processor architecture or restrict the instance types for new nodes, Karpenter will by default consider all instance types and all architectures. This means a node may be created for your application that fails because of processor-architecture incompatibilities.
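To avoid that, the architecture can be pinned explicitly in the Provisioner. A minimal sketch of the extra requirement, added to the spec.requirements list shown above:

# Restrict new nodes to x86_64 so images built for amd64 always run.
- key: "kubernetes.io/arch"
  operator: In
  values: ["amd64"]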

Step 6.

Node templates allow you to configure AWS-specific parameters (AWSNodeTemplate); check this link for more information.

In this step, Karpenter uses the discovery tags created in step 2 to select the subnets and the security group for new nodes.

resource "kubectl_manifest" "karpenter_node_template" {
  yaml_body = <<-YAML
    apiVersion: karpenter.k8s.aws/v1alpha1
    kind: AWSNodeTemplate
    metadata:
      name: default
    spec:
      subnetSelector:
        karpenter.sh/discovery: ${module.eks.cluster_name}
      securityGroupSelector:
        karpenter.sh/discovery: ${module.eks.cluster_name}
      tags:
        karpenter.sh/discovery: ${module.eks.cluster_name}
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}
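Once both manifests are applied, you can confirm that Karpenter picked them up before running the scaling test (the resource names match the v1alpha APIs used above):

kubectl get provisioners,awsnodetemplates
kubectl describe provisioner default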

Step 7.

In this step, we create a sample deployment using the pause image, starting at zero replicas, to demonstrate scaling.

# Example deployment using the [pause image](https://www.ianlewis.org/en/almighty-pause-container)
# and starts with zero replicas
resource "kubectl_manifest" "karpenter_example_deployment" {
  yaml_body = <<-YAML
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: inflate
    spec:
      replicas: 0
      selector:
        matchLabels:
          app: inflate
      template:
        metadata:
          labels:
            app: inflate
        spec:
          terminationGracePeriodSeconds: 0
          containers:
            - name: inflate
              image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
              resources:
                requests:
                  cpu: 1
  YAML

  depends_on = [
    helm_release.karpenter
  ]
}

Step 8.

Scale up the sample pause deployment to see Karpenter respond by provisioning nodes to support the workload.

kubectl scale deployment inflate --replicas 50
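In a second terminal you can watch the new nodes join the cluster; Karpenter labels the nodes it creates with the provisioner name:

kubectl get nodes -l karpenter.sh/provisioner-name=default -w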

To view the Karpenter controller logs:

kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller

(Image: Karpenter provisioning new nodes)

Step 9.

Scale the deployment back down to clean up the nodes.

 kubectl scale deployment inflate --replicas 0
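Because the Provisioner sets ttlSecondsAfterEmpty: 30, the now-empty nodes are terminated roughly 30 seconds after the pods disappear; you can confirm they are gone with:

kubectl get nodes -l karpenter.sh/provisioner-name=default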

(Image: empty nodes removed after scaling down)

Conclusion: Karpenter is a tool that manages the full lifecycle of the nodes, and it is also a great option for controlling costs, since it picks the best instance options based on pricing.

If you have any questions, please leave them in the comments!

Successful!!
