Overview
This blog post summarizes my presentation delivered at AWS Community Day PH 2024, held at the AWS Office in Taguig City, Philippines. The presentation explored the concept of automated scaling in Kubernetes and showcased Karpenter, an open-source tool for autoscaling cluster resources.
Kubernetes Scaling
While Kubernetes excels at placing workloads through the kube-scheduler and scaling them with autoscalers such as the Horizontal Pod Autoscaler, it lacks the ability to automatically manage the underlying compute resources of the cluster (CPU, memory, and storage). This is where tools like Karpenter come in.
Karpenter continuously monitors unscheduled pods and their resource requirements. Based on this information, it selects the most suitable instance type from your cloud provider and provisions new nodes to accommodate the workload demands. This "just-in-time" provisioning ensures your applications always have the resources they need to run smoothly, without the risk of overprovisioning and incurring unnecessary costs.
Diagram Reference: https://karpenter.sh
Also worth noting: Karpenter recently graduated from beta, with v1.x released in August 2024.
Karpenter in Action
If you want to see Karpenter in action, you can use the OpenTofu template in the repository below to provision an Amazon EKS cluster with Karpenter pre-configured:
romarcablao / scaling-with-karpenter — Scaling With Karpenter (AWSCD Demo)
This repository was made for a demo at AWS Community Day Philippines 2024. You may also want to watch Karpenter in action here.
Installation
Depending on your OS, select the installation method here: https://opentofu.org/docs/intro/install/
Provision the infrastructure
- Make the necessary adjustments to the variables.
- Run `tofu init` to initialize the modules and other necessary resources.
- Run `tofu plan` to check what will be created/deleted.
- Run `tofu apply` to apply the changes. Type `yes` when asked to proceed.
Fetch the kubeconfig to access the cluster:
aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME
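After updating the kubeconfig (replace `$REGION` and `$CLUSTER_NAME` with your own values), a quick sanity check confirms access. The commands below are a sketch and assume `kubectl` is installed and that the template deploys Karpenter into a namespace named `karpenter`:

```shell
# Verify cluster access; this should list the EKS managed nodes
kubectl get nodes -o wide

# Confirm the Karpenter controller is running
kubectl get pods -n karpenter
```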
For the NodePool configuration, you can use the one defined within the repository.
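The repository ships its own manifest, but as a rough sketch, a Karpenter v1 NodePool typically looks like the following (the name, requirements, and limits here are illustrative, not the demo's exact values):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Constrain the instance types Karpenter may choose from
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  # Cap total provisioned capacity for this pool
  limits:
    cpu: 100
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 30s
```

The `requirements` block is where you trade flexibility for control: the fewer constraints you set, the wider the pool of instance types Karpenter can pick from when optimizing for cost and availability.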
A video recording is also available to see Karpenter in action. A few things to note: the video shows two applications - (1) a terminal running eks-node-viewer at the top, and (2) Lens showing the deployment we are about to scale along with the Karpenter logs.
The video focuses on three key actions to illustrate how Karpenter responds to cluster resource autoscaling needs:
- Scaling from zero (0) to two (2) replicas: demonstrates how Karpenter provisions new nodes when additional resources are required.
- Scaling from two (2) to six (6) replicas: showcases Karpenter's ability to scale up further as demand increases.
- Scaling from six (6) back to zero (0): demonstrates how Karpenter can also scale down and terminate nodes when resources are no longer needed, optimizing resource utilization.
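To reproduce the same sequence yourself, you can drive the demo with `kubectl` (the deployment name `demo-app` below is illustrative - use the one defined in the repository):

```shell
# Scale out: Karpenter should provision node(s) for the pending pods
kubectl scale deployment demo-app --replicas=2

# Scale further: watch the new capacity appear in eks-node-viewer
kubectl scale deployment demo-app --replicas=6

# Scale back to zero: Karpenter consolidates and terminates the now-empty nodes
kubectl scale deployment demo-app --replicas=0

# Follow Karpenter's decisions as it provisions and disrupts nodes
kubectl logs -n karpenter -l app.kubernetes.io/name=karpenter -f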
By watching this video demonstration, you can gain a practical understanding of how Karpenter dynamically provisions and manages cluster resources based on workload demands.
Ready to explore the potential of Karpenter for your Kubernetes clusters? Check out the links below to get started 🚀
Documentation
Workshops
- https://catalog.workshops.aws/karpenter/en-US
- https://www.eksworkshop.com/docs/autoscaling/compute/karpenter
Blogs