
Provision an AWS EKS Cluster with Terraform

The Amazon Elastic Kubernetes Service (EKS) is the AWS service for deploying, managing, and scaling containerised applications with Kubernetes.

In this tutorial, you will deploy an EKS cluster using Terraform. Then, you will configure kubectl using Terraform output to deploy a Kubernetes dashboard on the cluster.

Warning!

AWS charges $0.10 per hour for each EKS cluster, so you may be charged to run these examples. Completing this tutorial should only cost a few dollars, but we are not responsible for any charges you may incur.

Prerequisites

The tutorial assumes some basic familiarity with Kubernetes and kubectl but does not assume any pre-existing deployment.
It also assumes that you are familiar with the usual Terraform plan/apply workflow. If you're new to Terraform itself, refer first to the Getting Started tutorial.
For this tutorial, you will need:

  • an AWS account with the IAM permissions listed on the EKS module documentation,
  • a configured AWS CLI
  • AWS IAM Authenticator
  • kubectl
  • wget (required for the eks module)

Install kubernetes-cli

$ brew install kubernetes-cli

Warning: kubernetes-cli 1.23.4 is already installed, it's just not linked.
To link this version, run:
  brew link kubernetes-cli

Install wget

$ brew install wget
==> Downloading https://ghcr.io/v2/homebrew/core/wget/manifests/1.21.3
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/wget/blobs/sha256:aa706c58ae7e97abf91be56e785335aff058c431f9973dffac06aacbea558497
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:aa706c58ae7e97abf91be56e785335aff058c431f9973dffac06aacbea558497?se=2022-03-07T06%3A05%3A00Z&sig=qvfb5f%2FzHHNg9sx7E0Qt
######################################################################## 100.0%
==> Pouring wget--1.21.3.monterey.bottle.tar.gz
🍺  /usr/local/Cellar/wget/1.21.3: 89 files, 4.2MB
==> Running `brew cleanup wget`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).


Install aws-iam-authenticator

$ brew install aws-iam-authenticator
==> Downloading https://ghcr.io/v2/homebrew/core/aws-iam-authenticator/manifests/0.5.5
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/aws-iam-authenticator/blobs/sha256:b4b7f41452eab334fd6be0cf72c03fe1a53ea4fbf454c16e220ca8b48b5d455c
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:b4b7f41452eab334fd6be0cf72c03fe1a53ea4fbf454c16e220ca8b48b5d455c?se=2022-03-07T06%3A05%3A00Z&sig=u40AvFISrAV23BOV7OGTLg
######################################################################## 100.0%
==> Pouring aws-iam-authenticator--0.5.5.monterey.bottle.tar.gz
🍺  /usr/local/Cellar/aws-iam-authenticator/0.5.5: 6 files, 48.8MB
==> Running `brew cleanup aws-iam-authenticator`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).

Install awscli

$ brew install awscli
awscli 2.4.21 is already installed but outdated (so it will be upgraded).
==> Downloading https://ghcr.io/v2/homebrew/core/awscli/manifests/2.4.23
######################################################################## 100.0%
==> Downloading https://ghcr.io/v2/homebrew/core/awscli/blobs/sha256:10946ed28d9f15e9d518a63444a77cc8688b497ccbe86cc95de3bc82f79a8fc3
==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:10946ed28d9f15e9d518a63444a77cc8688b497ccbe86cc95de3bc82f79a8fc3?se=2022-03-07T06%3A05%3A00Z&sig=%2FLb%2FfA0K9JA0DXLgWa
######################################################################## 100.0%
==> Upgrading awscli
  2.4.21 -> 2.4.23

==> Pouring awscli--2.4.23.monterey.bottle.tar.gz
==> Caveats
The "examples" directory has been installed to:
  /usr/local/share/awscli/examples

zsh completions and functions have been installed to:
  /usr/local/share/zsh/site-functions
==> Summary
🍺  /usr/local/Cellar/awscli/2.4.23: 12,430 files, 98.2MB
==> Running `brew cleanup awscli`...
Disable this behaviour by setting HOMEBREW_NO_INSTALL_CLEANUP.
Hide these hints with HOMEBREW_NO_ENV_HINTS (see `man brew`).
Removing: /usr/local/Cellar/awscli/2.4.21... (13,024 files, 102.5MB)
Removing: /Users/macpro/Library/Caches/Homebrew/awscli--2.4.21... (17.3MB)

After you've installed the AWS CLI, configure it by running aws configure.

When prompted, enter your AWS Access Key ID, Secret Access Key, region and output format.

$ aws configure
AWS Access Key ID [None]: YOUR_AWS_ACCESS_KEY_ID
AWS Secret Access Key [None]: YOUR_AWS_SECRET_ACCESS_KEY
Default region name [None]: YOUR_AWS_REGION
Default output format [None]: json
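Terraform's AWS provider picks up these credentials automatically through the default credential chain (including ~/.aws/credentials). As a minimal sketch, a provider block like the following is all the configuration needs; the region value shown here is illustrative and the example repository may read it from a variable instead:

provider "aws" {
  # Credentials come from the AWS CLI configuration created by `aws configure`.
  # The region shown here is illustrative.
  region = "us-east-2"
}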

Set up and initialise your Terraform workspace

In your terminal, clone the following repository. It contains the example configuration used in this tutorial.

$ git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster

$ cd learn-terraform-provision-eks-cluster

Here you will find the configuration files used to provision a VPC, security groups, and an EKS cluster. The final architecture should look similar to this:

[Diagram: high-level architecture of the EKS cluster]

  1. vpc.tf provisions a VPC, subnets and availability zones using the AWS VPC Module. A new VPC is created for this tutorial so it doesn't impact your existing cloud environment and resources.
  2. security-groups.tf provisions the security groups used by the EKS cluster.
  3. eks-cluster.tf provisions all the resources (AutoScaling Groups, etc.) required to set up an EKS cluster using the AWS EKS Module. A condensed sketch of this module call is shown after this list.
  4. outputs.tf defines the output configuration.
  5. versions.tf sets the Terraform version to at least 0.14. It also sets versions for the providers used in this sample.
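For orientation, below is a condensed sketch of the module call you would find in eks-cluster.tf. It follows the v13 interface of the AWS EKS module that this example pins; the cluster version, worker group name, instance type, and local value are illustrative, not the repository's exact contents:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "13.2.1"

  cluster_name    = local.cluster_name          # assumed local, e.g. "education-eks-<random suffix>"
  cluster_version = "1.20"                      # illustrative Kubernetes version
  subnets         = module.vpc.private_subnets  # subnets created by vpc.tf
  vpc_id          = module.vpc.vpc_id

  worker_groups = [
    {
      name                 = "worker-group-1"   # illustrative worker group
      instance_type        = "t2.small"
      asg_desired_capacity = 2
    },
  ]
}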

Initialise Terraform workspace

In the cloned directory, run terraform init to download the modules and providers used by this configuration.

$ terraform init
Initializing modules...
Downloading terraform-aws-modules/eks/aws 13.2.1 for eks...
- eks in .terraform/modules/eks
- eks.fargate in .terraform/modules/eks/modules/fargate
- eks.node_groups in .terraform/modules/eks/modules/node_groups
Downloading terraform-aws-modules/vpc/aws 2.66.0 for vpc...
- vpc in .terraform/modules/vpc

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/random from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of hashicorp/kubernetes from the dependency lock file
- Installing hashicorp/kubernetes v2.0.1...
- Installed hashicorp/kubernetes v2.0.1 (signed by HashiCorp)
- Installing hashicorp/aws v3.25.0...
- Installed hashicorp/aws v3.25.0 (signed by HashiCorp)
- Installing hashicorp/random v3.0.0...
- Installed hashicorp/random v3.0.0 (signed by HashiCorp)
- Installing hashicorp/local v2.0.0...
- Installed hashicorp/local v2.0.0 (signed by HashiCorp)
- Installing hashicorp/null v3.0.0...
- Installed hashicorp/null v3.0.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
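The provider versions recorded in the dependency lock file above are constrained in versions.tf. A minimal sketch of that file, consistent with the init output and the Terraform 0.14 minimum mentioned earlier (the exact constraints in the repository may differ):

terraform {
  required_version = ">= 0.14"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.20.0"  # illustrative constraint
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.1"   # illustrative constraint
    }
  }
}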

Provision the EKS cluster

In your initialised directory, run terraform apply and review the planned actions. Your terminal output should indicate the plan is running and what resources will be created.

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  ## ...

Plan: 53 to add, 0 to change, 0 to destroy.

  ## ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:

This terraform apply will provision a total of 53 resources (VPC, security groups, AutoScaling groups, EKS cluster, etc.). Confirm the apply with a yes.

The process should take approximately 10 minutes. When the apply completes, your terminal prints the outputs defined in outputs.tf.

Apply complete! Resources: 53 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = "https://B7994AFC945AB5029A4E2BA8DB39B448.gr7.us-east-2.eks.amazonaws.com"
cluster_id = "education-eks-p8Zqwv78"
cluster_name = "education-eks-p8Zqwv78"
cluster_security_group_id = "sg-09378904e5421f22e"
config_map_aws_auth = [
  {
    "binary_data" = tomap(null) /* of string */
    "data" = tomap({
      "mapAccounts" = <<-EOT
      []

      EOT
      "mapRoles" = <<-EOT
      - "groups":
        - "system:bootstrappers"
        - "system:nodes"
        "rolearn": "arn:aws:iam::561656980159:role/education-eks-p8Zqwv782021011204012796010000000c"
        "username": "system:node:{{EC2PrivateDNSName}}"

      EOT
      "mapUsers" = <<-EOT
      []

      EOT
    })
    "id" = "kube-system/aws-auth"
    "metadata" = tolist([
      {
        "annotations" = tomap(null) /* of string */
        "generate_name" = ""
        "generation" = 0
        "labels" = tomap({
          "app.kubernetes.io/managed-by" = "Terraform"
          "terraform.io/module" = "terraform-aws-modules.eks.aws"
        })
        "name" = "aws-auth"
        "namespace" = "kube-system"
        "resource_version" = "942"
        "self_link" = "/api/v1/namespaces/kube-system/configmaps/aws-auth"
        "uid" = "f328b2d0-f099-4e72-a1b1-760781045f10"
      },
    ])
  },
]
kubectl_config = ...
## ...
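These values are exposed by outputs.tf, which simply forwards attributes of the eks module. A minimal sketch, assuming the module instance is named eks as in the init output above:

output "cluster_endpoint" {
  description = "Endpoint for the EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ID attached to the EKS control plane"
  value       = module.eks.cluster_security_group_id
}

From here, the upstream tutorial uses the kubectl_config output (or aws eks update-kubeconfig) to configure kubectl before deploying the Kubernetes dashboard.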

Clean up your workspace

Congratulations, you have provisioned an EKS cluster in a private subnet.

Remember to destroy any resources you create once you are done with this tutorial. Run the destroy command and confirm with yes in your terminal. This will destroy all 53 resources Terraform created.

$ terraform destroy


Ref: https://learn.hashicorp.com/tutorials/terraform/eks

Discussion (1)

Andrei Dascalu

It's nice, but a shame that your setup uses modules that are 2 years old already at the time of the post. Unfortunately now the eks module is version 18 while the version 13 used here is from 2020 and not compatible.