Terraform v0.15 was released on April 14th, 2021.
In this post I will use the following resources:
· Provision an EKS Cluster (AWS)
· Terraform v0.15
· Terraform Registry
· Pre-Commit
· Terraform Pre-commit
· Terraform-docs
· Tflint
· Tfsec
This is based on the HashiCorp Learn tutorial "Provision an EKS Cluster (AWS)". I took that tutorial as a base project for this post, then tweaked it a bit: I set up some variables, modified some module and provider versions, and changed some of the "list" and "map" functions. I also added some pre-commit hooks based on tflint, tfsec and terraform-docs. Let's start!
Before starting, I always make a resource inventory/manifest, more like a shopping list, that helps me decide what to deploy for each cloud project. So here is the "inventory":
1 x Amazon VPC
6 x Amazon Subnet (3 x Public + 3 x Private)
3 x Amazon EC2
1 x Amazon EKS
1 x Kubernetes AWS-Auth policy
The first thing to do is to init a git repository. Mine is this one. Then we must create some folders and files there:
CMD> mkdir terraform
CMD> touch terraform/{main,outputs,variables,versions}.tf terraform/README.md
To enable pre-commit, we need to create a .pre-commit-config.yaml file that will contain the hook configuration, and then install pre-commit in the project:
CMD> echo 'repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: master  # pin a tagged release instead of master in real projects
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_docs
      - id: terraform_docs_without_aggregate_type_defaults
      - id: terraform_tflint
        args:
          - "args=--enable-rule=terraform_documented_variables"
      - id: terraform_tfsec
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: master
    hooks:
      - id: check-merge-conflict
      - id: end-of-file-fixer' > .pre-commit-config.yaml
CMD> pre-commit install
pre-commit installed at .git/hooks/pre-commit
Project folder hierarchy should look like this:
CMD> tree
.
├── README.md
└── terraform
    ├── main.tf
    ├── outputs.tf
    ├── README.md
    ├── variables.tf
    └── versions.tf
Let's start by setting up the "versions.tf" file. This file contains our provider versions; it's better to pin these versions to avoid misconfigurations in future updates (the "~>" operator in "~>0.15" allows any later 0.x release from 0.15 up, but not 1.0):
# Providers version
# Ref. https://www.terraform.io/docs/configuration/providers.html
terraform {
  required_version = "~>0.15"

  required_providers {
    # Base Providers
    random = {
      source  = "hashicorp/random"
      version = "3.1.0"
    }
    null = {
      source  = "hashicorp/null"
      version = "3.1.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "2.1.0"
    }
    template = {
      source  = "hashicorp/template"
      version = "2.2.0"
    }
    # AWS Provider
    aws = {
      source  = "hashicorp/aws"
      version = "3.37.0"
    }
    # Kubernetes Provider
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.1.0"
    }
  }
}
Then I set some project variables in "variables.tf". This will help me work with different workspaces, environments, or command-line variable overrides:
# Common
variable "project" {
  default     = "cosckoya"
  description = "Project name"
}

variable "environment" {
  default     = "laboratory"
  description = "Environment name"
}

# Amazon
variable "region" {
  default     = "us-east-1"
  description = "AWS region"
}

variable "vpc_cidr" {
  type        = string
  default     = "10.0.0.0/16"
  description = "AWS VPC CIDR"
}

variable "public_subnets_cidr" {
  type        = list(any)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  description = "AWS Public Subnets"
}

variable "private_subnets_cidr" {
  type        = list(any)
  default     = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  description = "AWS Private Subnets"
}
This is my custom "main.tf" file. Here I changed the "locals" to build the tags with the "tomap(...)" function, updated the modules to the latest version and also updated the Kubernetes version to 1.19. Just to test and have fun.
[...]
locals {
  cluster_name = "${var.project}-${var.environment}-eks"
  tags         = tomap({ "Environment" = var.environment, "project" = var.project })
}
[...]
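The "azs" argument of the VPC module below references an availability-zones data source that lives in an elided part of the file. A sketch of what that definition looks like, following the upstream tutorial's pattern (the "state" filter is an assumption):

```hcl
# Data source referenced by the "azs" argument of the VPC module.
data "aws_availability_zones" "available" {
  state = "available"
}
```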
## Amazon Networking
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.78.0"

  name = "${var.project}-${var.environment}-vpc"
  cidr = var.vpc_cidr

  azs             = data.aws_availability_zones.available.names
  private_subnets = var.private_subnets_cidr
  public_subnets  = var.public_subnets_cidr

  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
  }

  public_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                      = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${local.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"             = "1"
  }
}
[...]
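The elided remainder of "main.tf" also contains the EKS module call from the upstream tutorial. A hedged sketch of that block, under the assumption that it follows the tutorial's shape (the version pin, instance type and desired capacity here are illustrative, not values from this project):

```hcl
## Amazon EKS
# Sketch adapted from the "Provision an EKS Cluster (AWS)" tutorial;
# the version pin and worker-group settings are assumptions -- check
# the Terraform Registry and the repository for the real values.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "14.0.0"

  cluster_name    = local.cluster_name
  cluster_version = "1.19"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id
  tags            = local.tags

  worker_groups = [
    {
      name                 = "worker-group-1"
      instance_type        = "t3.small"
      asg_desired_capacity = 3 # matches the 3 x Amazon EC2 in the inventory
    }
  ]
}
```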
I didn't touch anything in "outputs.tf".
Check these git repositories for more information:
· Learn Terraform - Provision an EKS Cluster
· Cosckoya's AWS Terraform Laboratory
Time to have fun now. Let's play with this:
· Initialize the project
CMD> terraform init
At this point we should test the pre-commit rules we set up, and take note of every tfsec finding about security compliance. Try to resolve each one, or suppress it following the tfsec docs.
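For findings that are acceptable in a lab environment, tfsec supports inline ignore comments. A minimal sketch (the resource and rule ID here are hypothetical; use the ID tfsec actually reports):

```hcl
# Hypothetical example: suppressing one tfsec rule on one resource.
# The rule ID (AWS089) is illustrative; substitute the reported one.
#tfsec:ignore:AWS089
resource "aws_cloudwatch_log_group" "eks" {
  name              = "/aws/eks/${local.cluster_name}/cluster"
  retention_in_days = 7
}
```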
Add these lines to the "terraform" README.md:
<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
When we run pre-commit, our config will generate a basic document with a lot of interesting information about our Terraform project:
CMD> pre-commit run -a
Terraform fmt............................................................Passed
Terraform validate.......................................................Passed
Terraform docs...........................................................Passed
Terraform docs (without aggregate type defaults).........................Passed
Terraform validate with tflint...........................................Passed
Check for merge conflicts................................................Passed
Fix End of Files.........................................................Passed
And check that README.md file:
CMD> cat terraform/README.md
[...]
## Requirements
[...]
## Providers
[...]
## Modules
[...]
## Resources
[...]
## Inputs
[...]
## Outputs
Mine is available here
Now it's time to have fun!
· Plan the project
CMD> terraform plan
· Deploy the project
CMD> terraform apply
· Connect to the cluster and enjoy!
CMD> aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)
Running some basic commands we can see that the cluster is up and running:
CMD> kubectl cluster-info
Kubernetes control plane is running at https://<SOME-BIG-HASH>.us-east-1.eks.amazonaws.com
CoreDNS is running at https://<SOME-BIG-HASH>.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
CMD> kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-2-138.ec2.internal Ready <none> 26m v1.19.6-eks-d1db3c
ip-10-0-2-88.ec2.internal Ready <none> 26m v1.19.6-eks-d1db3c
ip-10-0-3-68.ec2.internal Ready <none> 26m v1.19.6-eks-d1db3c
Enjoy!
P.S. As you can see, this is very similar to the AWS Terraform Learn page. Just little tweaks to test some changes between versions.
I'm a big fan of @antonbabenko's work. I recommend everyone follow him.