We're reinventing payments. Super powers free payments for businesses and more rewarding shopping for customers, so that everyone wins. https://www.superpayments.com/
We're using Karpenter to manage our Kubernetes node scaling.
We're big fans of how fast Karpenter can provision just-in-time nodes across our EKS clusters, but there was one sticking point: for obvious reasons, the Karpenter controller pods can't run on Karpenter-managed nodes.
To get around this, we used an AWS EKS managed node group as init nodes and pinned Karpenter to them. We provisioned the node group with a minimum and maximum of 2 nodes, mostly for Karpenter (although other pods could run on these nodes too, to avoid wasting compute resources!).
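For context, here's a hedged sketch of what that init node group looked like in Terraform (the node group name, label, and node role are illustrative rather than our exact configuration); Karpenter was then pinned to these nodes with a matching nodeSelector in its Helm values:

resource "aws_eks_node_group" "init" {
  cluster_name    = aws_eks_cluster.cluster.name
  node_group_name = "init"
  node_role_arn   = aws_iam_role.node.arn # assumed node instance role, defined elsewhere
  subnet_ids      = var.private_subnets

  # Two nodes, fixed size: enough for a highly available Karpenter deployment
  scaling_config {
    desired_size = 2
    min_size     = 2
    max_size     = 2
  }

  # Label the nodes so Karpenter's pods can be pinned here via a nodeSelector
  labels = {
    role = "init"
  }
}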
The downside is that updating managed node groups is slow: replacing two nodes, one at a time, took between six and ten minutes, and we wanted to speed up this process.
The simple solution? Remove the init nodes! But then, where do we run Karpenter? Enter Fargate.
We created an EKS Fargate profile via our EKS Terraform module with a selector for the Karpenter namespace:
resource "aws_eks_fargate_profile" "karpenter" {
cluster_name = aws_eks_cluster.cluster.name
fargate_profile_name = "karpenter"
pod_execution_role_arn = aws_iam_role.fargate.arn
subnet_ids = var.private_subnets
selector {
namespace = "karpenter"
}
}
Pod Execution Role
If you've ever used ECS, you'll be familiar with the task execution role; for Fargate on EKS, the equivalent is the pod execution role. It's a straightforward role with two AWS managed policies attached:
resource "aws_iam_role" "fargate" {
name = "${var.cluster_name}-fargate"
assume_role_policy = jsonencode({
Statement = [{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "eks-fargate-pods.amazonaws.com"
}
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "fargate" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
role = aws_iam_role.fargate.name
}
resource "aws_iam_role_policy_attachment" "fargate_eks_cni" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.fargate.name
}
Karpenter Helm Install
We use the hashicorp/helm Terraform provider to install both the Karpenter and CRD charts directly from our EKS module. This ensures that Karpenter is up and running before anything else, ready to provision compute.
Next, we set the namespace for the Karpenter chart to match the selector in the Fargate profile, which in our case is karpenter, and we're off to the races!
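For illustration, here's a hedged sketch of what those helm_release resources might look like (the OCI repository, the var.karpenter_version variable, and the settings.clusterName value path are assumptions that vary between Karpenter releases, so check them against the docs for your version):

resource "helm_release" "karpenter_crd" {
  name             = "karpenter-crd"
  repository       = "oci://public.ecr.aws/karpenter"
  chart            = "karpenter-crd"
  version          = var.karpenter_version # assumed variable
  namespace        = "karpenter"           # must match the Fargate profile selector
  create_namespace = true
}

resource "helm_release" "karpenter" {
  name       = "karpenter"
  repository = "oci://public.ecr.aws/karpenter"
  chart      = "karpenter"
  version    = var.karpenter_version
  namespace  = "karpenter" # must match the Fargate profile selector

  # Tell Karpenter which cluster to manage; the exact value path differs
  # between chart versions (older charts use settings.aws.clusterName)
  set {
    name  = "settings.clusterName"
    value = aws_eks_cluster.cluster.name
  }

  depends_on = [
    helm_release.karpenter_crd,
    aws_eks_fargate_profile.karpenter,
  ]
}

With that in place, the Karpenter pods land on Fargate: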
NAME                         READY   STATUS    RESTARTS   AGE   IP       NODE                                        NOMINATED NODE   READINESS GATES
karpenter-75c664b7cb-9z9lr   1/1     Running   0          5d    <snip>   fargate-<snip>.eu-west-2.compute.internal   <none>           <none>
karpenter-75c664b7cb-fxhb2   1/1     Running   0          5d    <snip>   fargate-<snip>.eu-west-2.compute.internal   <none>           <none>
Notes
By default we're using Fargate's minimum resources, which are 0.25 vCPU and 0.5GB RAM per task.
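Fargate sizes each pod by rounding its resource requests up to the nearest supported vCPU/memory combination, so if the minimum ever became tight we could raise Karpenter's requests in the Helm release. A hedged sketch of the extra set blocks (the controller.resources value path is an assumption about the chart version):

# Illustrative additions to the helm_release above; Fargate would round
# these requests up to the nearest supported size (here 1 vCPU / 2GB)
set {
  name  = "controller.resources.requests.cpu"
  value = "1"
}

set {
  name  = "controller.resources.requests.memory"
  value = "1Gi"
}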
You can't currently specify ARM when creating Fargate tasks on EKS, so we're on x86 for now, but the cost is only around $20 per month for both tasks.
We've also generally reduced the number of nodes across our EKS clusters, resulting in some cost savings and much less waiting around for the Platform team!