In this tutorial, we'll learn how to use Terraform to manage an AWS Elastic Kubernetes Service (EKS) cluster.
What is Terraform?
Terraform is an Infrastructure-as-Code (IaC) tool that enables you to define and provision infrastructure resources using a declarative configuration language.
Why Use Terraform for AWS EKS?
- Infrastructure-as-Code: Automate and version control your infrastructure.
- Scalability: Easily scale your EKS cluster with Terraform configurations.
- Efficiency: Provision multiple AWS services with a single tool.
Prerequisites
- Terraform: Installed on your local machine. You can set it up by following this guide.
- AWS Account: You'll need an AWS account to access AWS EKS and some other services. If you don't have one, sign up here.
- AWS CLI: Installed and configured with your AWS credentials.
- kubectl: Installed to manage your EKS cluster from your machine.
Table of Contents
- Create Your Terraform Project Directory
- Define the AWS Provider
- Define the VPC and Networking Resources
- Define the EKS Cluster
- Create EKS Worker Nodes
- Apply the Terraform Configuration
- Configure kubectl to Access the EKS Cluster
- Deploy an Application Using Terraform
- Clean Up
Now, let’s get to it!
Step 1 — Create Your Terraform Project Directory
We’ll need to create a project directory where all our Terraform configuration files will live.
```bash
mkdir terraform-eks
cd terraform-eks
```
Step 2 — Define the AWS Provider (main.tf)
Create a main.tf file and add the following configuration:
```hcl
provider "aws" {
  region = "eu-west-2"
}
```
Here, we configure Terraform to use AWS as the provider and set the region to eu-west-2, but you can choose a region closer to your desired location.
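The provider block above works as-is, but it's common to also pin the Terraform and provider versions so terraform init resolves the same plugin versions every time. A minimal sketch (the version constraints here are illustrative assumptions, not requirements from this tutorial):

```hcl
# Optional: pin Terraform and AWS provider versions so future upgrades
# don't change behavior unexpectedly. Adjust the constraints to the
# versions you actually test with.
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```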
Step 3 — Define the VPC and Networking Resources (network.tf)
AWS EKS requires a Virtual Private Cloud (VPC) and subnets to run.
Create a network.tf file and add the following configuration. This will create a VPC, subnets, an internet gateway, and route tables for your EKS cluster.
```hcl
data "aws_availability_zones" "available" {}

resource "aws_vpc" "eks_vpc" {
  cidr_block = "10.0.0.0/16" # IP range for the VPC
}

resource "aws_subnet" "eks_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.eks_vpc.id
  cidr_block              = cidrsubnet(aws_vpc.eks_vpc.cidr_block, 8, count.index)
  availability_zone       = element(data.aws_availability_zones.available.names, count.index)
  map_public_ip_on_launch = true # Enable auto-assign public IP
}

resource "aws_internet_gateway" "eks_igw" {
  vpc_id = aws_vpc.eks_vpc.id
}

resource "aws_route_table" "eks_route_table" {
  vpc_id = aws_vpc.eks_vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.eks_igw.id
  }
}

resource "aws_route_table_association" "eks_route_table_assoc" {
  count          = 2 # Associates the route table with each subnet
  subnet_id      = element(aws_subnet.eks_subnet.*.id, count.index)
  route_table_id = aws_route_table.eks_route_table.id
}
```
Step 4 — Define the EKS Cluster (eks.tf)
Now, let's create the EKS cluster itself, along with the required IAM role to manage the cluster. Create an eks.tf file and add the following configuration to it.
```hcl
resource "aws_iam_role" "eks_role" {
  name = "eks-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = ["eks.amazonaws.com", "ec2.amazonaws.com"]
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_iam_role_policy_attachment" "eks_node_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
}

resource "aws_iam_role_policy_attachment" "eks_cni_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
}

resource "aws_iam_role_policy_attachment" "eks_ec2_policy" {
  role       = aws_iam_role.eks_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
}

resource "aws_eks_cluster" "eks_cluster" {
  name     = "eks-cluster"
  role_arn = aws_iam_role.eks_role.arn

  vpc_config {
    subnet_ids = aws_subnet.eks_subnet[*].id
  }
}
```
- The EKS cluster needs an IAM role that grants it permission to manage AWS services, so we attach the necessary policies to eks_role.
- We then create the cluster with aws_eks_cluster and point it at the VPC subnets created earlier.
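If you want quick access to the cluster's details after apply, you can add a couple of outputs to eks.tf and read them with terraform output. A small sketch (the output names are my own):

```hcl
# Hypothetical outputs exposing the cluster name and API endpoint.
output "cluster_name" {
  value = aws_eks_cluster.eks_cluster.name
}

output "cluster_endpoint" {
  value = aws_eks_cluster.eks_cluster.endpoint
}
```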
Step 5 — Create EKS Worker Nodes (eks-workers.tf)
Create an eks-workers.tf file and add the following configuration. This will create a node group that can scale between 1 and 3 nodes.
```hcl
resource "aws_eks_node_group" "node_group" {
  cluster_name    = aws_eks_cluster.eks_cluster.name
  node_group_name = "eks-node-group"
  node_role_arn   = aws_iam_role.eks_role.arn
  subnet_ids      = aws_subnet.eks_subnet[*].id

  scaling_config {
    desired_size = 2 # Initial number of nodes
    max_size     = 3 # Maximum number of nodes
    min_size     = 1 # Minimum number of nodes
  }

  instance_types = ["t3.medium"] # EC2 instance type for worker nodes
}
```
- We define an EKS node group, which is a set of EC2 instances (worker nodes) that run the Kubernetes workloads.
- The node group can scale between 1 and 3 nodes, depending on your workload's demand.
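One subtlety worth noting: the worker nodes need their IAM policies attached before they try to join the cluster. If you hit ordering errors on apply, an explicit depends_on inside the aws_eks_node_group block can enforce this. A sketch (the resource names match the policy attachments defined earlier):

```hcl
# Sketch: add inside the aws_eks_node_group "node_group" block so the
# IAM policy attachments exist before the nodes are created.
depends_on = [
  aws_iam_role_policy_attachment.eks_node_policy,
  aws_iam_role_policy_attachment.eks_cni_policy,
  aws_iam_role_policy_attachment.eks_ec2_policy,
]
```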
Step 6 — Apply the Terraform Configuration
Initialize Terraform to download the necessary plugins for AWS:
```bash
terraform init
```
Before applying, it's best practice to review the changes Terraform plans to make to our infrastructure. You can do this with the following command:

```bash
terraform plan
```
Once the plan looks good, apply the configuration to create the EKS cluster and its resources:

```bash
terraform apply
```

Confirm with yes when prompted.
Terraform has now created the EKS cluster, VPC, subnets, worker nodes, and IAM roles based on the configuration we've written.
If you check your AWS console, your EKS cluster should be up and running!
Step 7 — Configure kubectl to Access the EKS Cluster
To manage the EKS cluster, we need to configure kubectl. This can be done using the AWS CLI.
Run the following command to configure your local kubectl to communicate with the EKS cluster:

```bash
aws eks --region eu-west-2 update-kubeconfig --name eks-cluster
```
Then confirm that you have access to the EKS cluster by listing the Kubernetes services:

```bash
kubectl get svc
```
Step 8 — Deploy an Application Using Terraform (deploy-nginx.tf)
Now, let’s deploy a simple Nginx application to the EKS cluster using the Kubernetes provider in Terraform.
First, create a deploy-nginx.tf file and define the Kubernetes provider.
```hcl
data "aws_eks_cluster_auth" "eks_auth" {
  name = aws_eks_cluster.eks_cluster.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.eks_cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks_cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.eks_auth.token
}
```
Then add the following configuration to deploy-nginx.tf to define the Nginx pod and expose it via a Kubernetes service:
```hcl
resource "kubernetes_pod" "nginx" {
  metadata {
    name = "nginx"
    labels = {
      app = "nginx"
    }
  }

  spec {
    container {
      name  = "nginx"
      image = "nginx:latest"

      resources {
        limits = {
          cpu    = "0.5"
          memory = "512Mi"
        }
        requests = {
          cpu    = "0.25"
          memory = "256Mi"
        }
      }
    }
  }
}

resource "kubernetes_service" "nginx_service" {
  metadata {
    name = "nginx-service"
  }

  spec {
    selector = {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
```
- The Kubernetes Pod defines an Nginx pod in the EKS cluster.
- The Kubernetes Service exposes the Nginx pod as a LoadBalancer service so that it can be accessed publicly.
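A bare pod works for a demo, but it won't be rescheduled if it dies. If you want Kubernetes to manage replicas and restarts, a kubernetes_deployment is the usual substitute. A sketch you could swap in for the kubernetes_pod resource above (the nginx-service selector, app = "nginx", would still match these pods):

```hcl
# Sketch: a Deployment managing 2 nginx replicas instead of a single pod.
resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
        }
      }
    }
  }
}
```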
With deploy-nginx.tf properly configured, you can now deploy it to your EKS cluster using the following commands:
```bash
terraform plan
terraform apply
```
Once the deployment is complete, you can check that the Nginx pod and service are running in your cluster with:
```bash
kubectl get pods
kubectl get svc
```
Step 9 — Clean Up (Optional)
If you no longer need these resources, you can destroy them to avoid unnecessary charges. To do that, run the following command:

```bash
terraform destroy
```
In summary, we now have a working EKS cluster on AWS managed through Terraform. We set up the necessary VPC, created an EKS cluster with worker nodes, configured kubectl, and deployed an Nginx application.
If you’ve found this article helpful, please leave a like or a comment. If you have any questions, please let me know in the comment section.