Leon Stigter

Posted on • Originally published at retgits.com on

How To Make Your AWS EKS Cluster Use Fargate Using Pulumi And Golang

At re:Invent, AWS introduced the ability for EKS to run pods on AWS Fargate, which can be cheaper than provisioning and managing worker nodes yourself. In the last post I created an EKS cluster, so let’s add this new capability and remove the need to manage or provision infrastructure for our pods.

The complete project is available on GitHub.

Configuration

The minimum configuration for a Fargate profile is a name, the Kubernetes namespace it’ll select pods from, and the IAM role it needs to run the pods on AWS Fargate. The configuration below, which you can copy/paste into the YAML file from the previous blog, has three parameters: fargate:profile-name is the name of the Fargate profile, fargate:namespace is the Kubernetes namespace, and fargate:execution-role-arn is the ARN of the IAM pod execution role. For more details on how to create the role, check out the AWS docs.

fargate:profile-name: EKSFargateProfile
fargate:namespace: example
fargate:execution-role-arn: "arn:aws:iam::ACCOUNTID:role/EKSFargatePodExecutionRole"


You can either use the command line, like pulumi config set fargate:profile-name "EKSFargateProfile", to add these new configuration variables, or you can add them directly to the YAML file. The YAML file with all the configuration is called Pulumi.<name of your project>.yaml.
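The snippets below read these values through a getEnv helper from the previous post. As a rough sketch of its assumed shape, it wraps the Pulumi context’s GetConfig call and falls back to a default when a key isn’t set (the configGetter interface and stubCtx here are illustrative stand-ins so the example runs on its own; in the real program you’d pass the *pulumi.Context directly):

```go
package main

import "fmt"

// configGetter matches the small part of *pulumi.Context the helper needs;
// in the real program, ctx satisfies this directly.
type configGetter interface {
	GetConfig(key string) (string, bool)
}

// getEnv returns the configuration value for key, or fallback if the key
// isn't set in Pulumi.<name of your project>.yaml (assumed helper shape).
func getEnv(ctx configGetter, key string, fallback string) string {
	if value, ok := ctx.GetConfig(key); ok {
		return value
	}
	return fallback
}

// stubCtx simulates the stack configuration for this standalone example.
type stubCtx map[string]string

func (s stubCtx) GetConfig(key string) (string, bool) {
	value, ok := s[key]
	return value, ok
}

func main() {
	ctx := stubCtx{"fargate:profile-name": "EKSFargateProfile"}
	fmt.Println(getEnv(ctx, "fargate:profile-name", "unknown")) // set in config
	fmt.Println(getEnv(ctx, "fargate:namespace", "unknown"))    // falls back
}
```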

Adding the Fargate profile

The code below extends the code created in the previous post, so you can copy/paste this snippet into your Go code too. Walking through the code: it gets the name of the profile and the namespace from the YAML file, the fargateProfileArgs struct reuses the cluster name and subnets from the previous blog posts (check those out if you haven’t already), and the call to eks.NewFargateProfile() adds the Fargate profile to your EKS cluster.

// Create an EKS Fargate Profile
fargateProfileName := getEnv(ctx, "fargate:profile-name", "unknown")

// A selector tells EKS which pods to run on Fargate; the minimum
// requirement is a Kubernetes namespace.
selectors := make([]map[string]interface{}, 1)
selector := make(map[string]interface{})
selector["namespace"] = getEnv(ctx, "fargate:namespace", "unknown")
selectors[0] = selector

fargateProfileArgs := &eks.FargateProfileArgs{
    ClusterName:         clusterName,
    FargateProfileName:  fargateProfileName,
    Tags:                tags,
    SubnetIds:           subnets["subnet_ids"],
    Selectors:           selectors,
    PodExecutionRoleArn: getEnv(ctx, "fargate:execution-role-arn", "unknown"),
}

fargateProfile, err := eks.NewFargateProfile(ctx, fargateProfileName, fargateProfileArgs)
if err != nil {
    fmt.Println(err.Error())
    return err
}

// Export the profile ID so it shows up in the stack outputs.
ctx.Export("FARGATE-PROFILE-ID", fargateProfile.ID())
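A selector doesn’t have to match on namespace alone. As a hedged sketch, the EKS API also lets a selector match pod labels, so only labeled pods in the namespace land on Fargate (the map-based shape mirrors the code above; the "labels" key and the compute=fargate label are illustrative assumptions, not values from this post):

```go
package main

import "fmt"

// fargateSelector builds a profile selector matching both a namespace and
// pod labels. Assumption: the same map-based selector shape used above,
// with a "labels" key mirroring the EKS API's Selector.Labels field.
func fargateSelector(namespace string, labels map[string]string) map[string]interface{} {
	return map[string]interface{}{
		"namespace": namespace,
		"labels":    labels,
	}
}

func main() {
	// Only pods in "example" that carry the label compute=fargate would
	// be scheduled on Fargate with this selector.
	sel := fargateSelector("example", map[string]string{"compute": "fargate"})
	fmt.Println(sel["namespace"])
}
```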


Running the code

Like the previous time, the last thing to do is run pulumi up to tell Pulumi to add the Fargate profile to your EKS cluster. If you’re using the same project and stack, Pulumi will automatically realize it needs to add the profile to the existing cluster and won’t create a new EKS cluster.

$ pulumi up
Previewing update (builderstack):

     Type Name Plan       
     pulumi:pulumi:Stack builder-builderstack             
 + └─ aws:eks:FargateProfile EKSFargateProfile create     

Outputs:
  + FARGATE-PROFILE-ID: output<string>

Resources:
    + 1 to create
    5 unchanged

Do you want to perform this update? yes
Updating (builderstack):

     Type Name Status      
     pulumi:pulumi:Stack builder-builderstack              
 + └─ aws:eks:FargateProfile EKSFargateProfile created     

Outputs:
    CLUSTER-ID : "myEKSCluster"
  + FARGATE-PROFILE-ID: "myEKSCluster:EKSFargateProfile"
    SUBNET-IDS : [
        [0]: "subnet-0a1909bec2e936bd7"
        [1]: "subnet-09d229c2eb8061979"
    ]
    VPC-ID : "vpc-0437c750acf1050c3"

Resources:
    + 1 created
    5 unchanged

Duration: 2m27s

Permalink: https://app.pulumi.com/retgits/builder/builderstack/updates/4


The permalink at the bottom of the output takes you to the Pulumi console where you can see all the details of the execution of your app and the resources that were created.

The Pulumi console also has really useful links to the AWS console to see the resources.

Top comments (6)

Darius

What's the big-picture, "stepping-back" picture here? Would I not need to provision nodes for k8s?

Leon Stigter

That’s correct. If you’re telling the cluster you only want to use pods running with the Fargate compatibility, there’s no need to provision nodes yourself. EKS and Fargate will work together to spin up the containers and act as nodes for as long as the container needs to run.

Darius

That's really interesting to me. Curious how it all appears... like: if I do
kubectl get po -o wide
do the pods appear to be running on "nodes" I would not recognize as mine?

Also, trying to get my head around how security-groups etc. would work. Trying to build the "big picture" diagram in my mind. Guess the only way is to try it out.

Thanks for doing this post.

Leon Stigter

You’ll end up seeing node names like “fargate-ip-.us-east-2.compute.internal”.

I haven’t tried it out with security groups other than the ones my cluster started with, to be honest. If you end up trying it out, I’d love to hear your thoughts and feedback on it.

Darius

I'll definitely get back to you, but it could be a while before I try this :) . My experience has been on-prem OpenShift, but not doing much infrastructure. Currently, just got on to AWS-EKS, so learning the ropes there.

Coincidentally, just this morning, I was spec-ing out the EKS cluster, as a development cluster, and while estimating costs, it struck me that we'd like to scale in and out quite a bit.

So, in that context Fargate seems like it would be cost-effective, while not requiring as much admin once we iron out the basics.

Do you notice any difference in deployment time, when spinning up a Pod using Fargate, compared to doing so on a node that's already in the cluster?

Leon Stigter

I haven’t seen too much of a difference. That’s taking into account that the image is already pulled and your cluster doesn’t have to download it from the internet.

Good luck!