How To Use Terraform like a Pro: Part 2

In the previous post in this two-part series, I discussed what Terraform is and the features it supports. In this post, I’ll explore some use cases to show you how to get the most out of Terraform, simplifying your DevOps environment.

Multi-Tier Applications

Multi-tier architecture is the most common pattern for building systems. In this architecture, you generally use a two- or three-tier structure. In a two-tier structure, the first tier is a cluster of web servers, and the second tier is a pool of databases used by the first tier’s servers. For more complicated systems requiring API servers, caching, middleware, event buses, and so on, you can add a third tier.

With Terraform, each tier can be segregated as a collection of resources, and you can create dependencies between them using Terraform configuration. This ensures that your databases and middleware are ready before you provision your API and web servers. Terraform’s advantage is that it brings scalability and resilience into the system, as each tier can be scaled automatically using configuration.
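For example, here is a minimal sketch of how a web tier could be wired to a database tier using modules (the module paths and the endpoint output are assumptions); referencing the database module's output creates an implicit dependency, so the database tier is provisioned first:

#Minimal sketch of tier separation with modules (paths and output names are assumptions)
module "database" {
  source = "./modules/database"
}

module "web" {
  source      = "./modules/web"
  db_endpoint = module.database.endpoint #implicit dependency: the web tier waits for the database tier
}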

Platform-as-a-Service (PaaS) Setup

PaaS is a great choice if you don’t want to invest too much in building skills in infrastructure. Platforms like Cloud Foundry and Red Hat OpenShift are widely used and are being deployed on AWS, GCP, Azure, and other cloud platforms.

With Terraform, these platforms can be scaled based on demand. They also need regular patching, upgrades, re-configuration, and extension support, all of which can be driven through Terraform configuration.

Multi-Cloud Deployment

Due to compliance requirements and/or the need to avoid vendor lock-in, many organizations have started implementing multi-cloud deployment, which helps increase availability, fault tolerance, and system resiliency.

To support infrastructure as code (IaC), each cloud vendor provides its own configuration tools, but these tools are cloud specific. That’s where Terraform comes into play: it supports multiple cloud providers within a single configuration and simplifies the orchestration of resources across them.
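For instance, a single root configuration can declare the AWS and Azure providers side by side. Here is a minimal sketch (the provider versions and region are illustrative):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

#AWS provider
provider "aws" {
  region = "ap-southeast-1"
}

#Azure provider
provider "azurerm" {
  features {}
}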

Multi-Repo Environment Setup

For small, simple projects, a single main Terraform configuration file in one directory is a good place to start. However, it will turn into a monolith over time as resources grow. You’ll also need multiple environments to support application deployment.

Terraform, however, offers several options, such as directories and workspaces to modularize your configuration so that you can manage it smoothly. You can also separate the directories for each environment, which ensures that you only touch the intended infrastructure. For example, making changes to the Dev environment won’t impact the QA or Prod environments. However, this option duplicates the Terraform code and is useful only if your deployment requirements are different for each environment.

If you want to reuse the Terraform code with different environment parameters, workspace-separated environments are a better option. In this case, you will have a separate state file for each environment.

Multi-Cloud Setup with Terraform

Now that I’ve reviewed a few Terraform use cases, I’ll explore some of them in greater detail to show you how they can be implemented. First, I’ll dive deep into the multi-cloud setup configuration using Terraform.

Let’s take a simple example: an httpd web server installed on CentOS 8 in both AWS and Azure. Here, the same httpd server is deployed to multiple clouds using Terraform.

Step 1: Create a Common Variable Configuration

To start, create a common configuration file named common-variables.tf.

This file has all the variables shared among other modules. The configuration looks like this:

#Environment
variable "application_env" {
  type = string
  description = "Application environment like Dev, QA or Prod"
  default = "dev"
}

#Application name
variable "application_name" {
  type = string
  description = "Name of the Application"
  default = "multiclouddemo"
}

Step 2: Terraform Configuration for Httpd Server on AWS

Now, create a Terraform file that contains the configuration for an httpd server on a CentOS EC2 instance.

Define a variables file for AWS authentication, the availability zone (AZ), the VPC, and CIDR ranges.

#variables.tf
#For brevity, authentication-related variables are omitted.

#AWS Region
variable "region" {
  type = string
  description = "AWS Region for the VPC"
  default = "ap-southeast-1"
}
#AWS Availability Zone
variable "az" {
  type = string
  description = "AWS AZ"
  default = "ap-southeast-1a"
}
#VPC CIDR
variable "vpc_cidr" {
  type = string
  description = "VPC CIDR"
  default = "10.2.0.0/16"
}
#Subnet CIDR
variable "subnet_cidr" {
  type = string
  description = "Subnet CIDR"
  default = "10.2.1.0/24"
}

Next, create a shell script that installs the httpd server.

#! /bin/bash
#Install and start httpd on CentOS 8
sudo dnf -y update
sudo dnf install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
echo "<h1>Deployment on AWS</h1>" | sudo tee /var/www/html/index.html

Then create all the resources in the main Terraform file.

#For brevity, only the parameters relevant to this article are shown.

#main.tf
#Initialize the AWS Provider
provider "aws" {
 ---
}
#VPC definition
resource "aws_vpc" "aws-vpc" {
  ----
}
#subnet definition
resource "aws_subnet" "aws-subnet" {
  ---
}
#Define the internet gateway
#Define the route table to the internet
#Assign the public route table to the subnet
#Define the security group for HTTP web server

#Centos 8 AMI
data "aws_ami" "centos_8" {
  most_recent = true
  owners = ["02342412312"]
  filter {
    name = "name"
    values = ["centos/images/hvm-ssd/centos-8.03-amd64-
      server-*"]
  }
  filter {
    name = "virtualization-type"
    values = ["hvm"]
  }
}
#Define Elastic IP for web server
resource "aws_eip" "aws-web-eip" {
  ----
}
# EC2 Instances
resource "aws_instance" "aws-web-server" {
  ami = data.aws_ami.centos_8.id
  instance_type = "t3.micro"
  subnet_id = aws_subnet.aws-subnet.id
  vpc_security_group_ids = [aws_security_group.aws-web-sg.id]
  associate_public_ip_address = true
  source_dest_check = false
  key_name = var.aws_key_pair
  user_data = file("aws-data.sh")
  tags = {
    Name = "${var.application_name}-${var.application_env}-web-server"
    Env = var.application_env
  }
}
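The security group that aws_instance.aws-web-server references is elided above; as a rough sketch, it could look something like this (the ingress and egress rules are assumptions, opening HTTP to the world):

#Sketch of the elided security group for the HTTP web server (rules are assumptions)
resource "aws_security_group" "aws-web-sg" {
  name   = "${var.application_name}-${var.application_env}-web-sg"
  vpc_id = aws_vpc.aws-vpc.id

  ingress {
    description = "Allow HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}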

Step 3: Terraform Configuration for Httpd Server on Azure

Similar to what I just showed you for AWS, you now need to define variables for Azure authentication and resources:

#Azure authentication variables

#Location Resource Group
variable "rg_location" {
  type = string
  description = "Location of Resource Group"
  default = "South East"
}
#Virtual Network CIDR
variable "vnet_cidr" {
  type = string
  description = "Vnet CIDR"
  default = "10.3.0.0/16"
}
#Subnet CIDR
variable "subnet_cidr" {
  type = string
  description = "Subnet CIDR"
  default = "10.4.1.0/24"
}
#Define CentOS Linux user-related variables

Next, create a shell script, similar to the AWS one, which installs the httpd server on Azure with a different message:

#! /bin/bash
#Install and start httpd on CentOS 8
sudo dnf -y update
sudo dnf install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
echo "<h1>Deployment on Azure</h1>" | sudo tee /var/www/html/index.html

Then create all the Azure resources in the main Terraform file:

#main.tf

#For brevity, only the parameters relevant to this article are shown.
#Configure the Azure Provider
provider "azurerm" {
  --
}
#Define Resource Group
resource "azurerm_resource_group" "azure-resource_grp" {
  --
}
#Define a virtual network
resource "azurerm_virtual_network" "azure-vnet" {
  --
}
#Define a subnet
resource "azurerm_subnet" "azure-subnet" {
  ---
}
#Create Security Group to access Web Server
resource "azurerm_network_security_group" "azure-web-nsg" {
  ---
}
#Associate the Web NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "azure-web-nsg-association" {
  ---
}
#Get a Static Public IP
resource "azurerm_public_ip" "azure-web-ip" {
  ---
}
#Create Network Card for Web Server VM
resource "azurerm_network_interface" "azure-web-nic" {
  ---
}
#Create web server vm
resource "azurerm_virtual_machine" "azure-web-vm" {
  name = "${var.application_name}-${var.application_env}-web-vm"
  location = azurerm_resource_group.azure-resource_grp.location
  resource_group_name = azurerm_resource_group.azure-resource_grp.name
  network_interface_ids = [azurerm_network_interface.azure-web-nic.id]

  storage_image_reference {
   ---
  }
  tags = {
    environment = var.application_env
  }
}
#Output
output "azure-web-server-external-ip" {
  value = azurerm_public_ip.azure-web-ip.ip_address
}
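One thing the listing above doesn’t show is how the install script gets onto the VM. A minimal sketch of the blocks that could do this inside the azurerm_virtual_machine resource (the script file name azure-data.sh and the admin credential variable are assumptions):

resource "azurerm_virtual_machine" "azure-web-vm" {
  # ... arguments shown above ...

  os_profile {
    computer_name  = "${var.application_name}-${var.application_env}-web-vm"
    admin_username = "centos"
    admin_password = var.admin_password    #hypothetical variable
    custom_data    = file("azure-data.sh") #assumed name of the Azure install script
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}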

Now the Terraform configuration is ready for both AWS and Azure. You can run the following commands to create the multi-cloud application using Terraform:

$ terraform init 
$ terraform apply

There is one additional piece of configuration needed to distribute traffic to both AWS and Azure under the same URL. For that, you can use a DNS-based service such as Amazon Route 53 or Cloudflare.
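As an illustration, assuming both clouds are managed in the same configuration, a pair of weighted Route 53 records splitting traffic evenly could look roughly like this (the hosted zone variable, domain name, and weights are assumptions):

#Sketch of weighted DNS records that split traffic between the two clouds
#(the zone ID variable, domain name, and weights are assumptions)
resource "aws_route53_record" "aws-endpoint" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  ttl            = 300
  records        = [aws_eip.aws-web-eip.public_ip]
  set_identifier = "aws"

  weighted_routing_policy {
    weight = 50
  }
}

resource "aws_route53_record" "azure-endpoint" {
  zone_id        = var.zone_id
  name           = "app.example.com"
  type           = "A"
  ttl            = 300
  records        = [azurerm_public_ip.azure-web-ip.ip_address]
  set_identifier = "azure"

  weighted_routing_policy {
    weight = 50
  }
}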

Multi-Repo Environment Application

Earlier, I briefly discussed how you can use directories and workspaces to support multi-repository applications. I’ll now explore how to implement workspaces to reuse the same Terraform configuration for multiple environments.

Every Terraform configuration starts with a single workspace named default. You can check this by running the following command:

$ terraform workspace list
   * default

Note: The asterisk (*) marks the current workspace.

Step 1: Create Variables

Start by defining a file named variables.tf:

variable "aws_region" {
  description = "AWS region where our web application will be deployed."
}

variable "env_prefix" {
  description = "Environment like dev, qa or prod"
}

Step 2: Define Main Configuration

Next, create a main.tf configuration that defines all the resources required for a small web application:

#main.tf
provider "aws" {
  region = var.aws_region
}

resource "random_country" "countryname" {
  length    = 20
  separator = "-"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.env_prefix}-${random_country.countryname.id}"
  acl    = "public-read"

  policy = <<EOF
  {
    ---
  }
  EOF

  website {
    index_document = "welcome.html"
    error_document = "error.html"

  }
  force_destroy = true
}

resource "aws_s3_bucket_object" "countryapp" {
  acl          = "public-read"
  key          = "welcome.html"
  bucket       = aws_s3_bucket.bucket.id
  content      = file("${path.module}/assets/welcome.html")
  content_type = "text/html"

}
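As a side note, the active workspace name is also available inside the configuration as terraform.workspace, so the environment prefix could be derived from the workspace instead of being passed in as a variable. A minimal sketch of that alternative:

#Alternative sketch: derive the environment prefix from the active workspace
#instead of passing var.env_prefix via -var-file
locals {
  env_prefix = terraform.workspace #"dev" or "prod", depending on the selected workspace
}

The bucket name above would then reference local.env_prefix instead of var.env_prefix.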

Step 3: Define Variables for the Command Line Interface (CLI)

Now, define the dev.tfvars file:

region = "ap-southeast-1"
prefix = "dev"

Then define the prod.tfvars file:

region = "ap-southeast-1"
prefix = "prod"

These files can be kept in separate repositories to isolate them; which repository a given file lives in depends on the roles of the users who are allowed to access it.

Step 4: Define Output File

The output file will be the same for both of the environments:

output "website_endpoint" {
  value = "http://${aws_s3_bucket.bucket.website_endpoint}/index.html"
}

Step 5: Create Workspaces

Next, create two workspaces: one for dev and one for prod.

$ terraform workspace new dev

Once you create the dev workspace, it will become your current workspace.

Now, initialize the directory and then apply the dev.tfvars file using the flag -var-file:

$ terraform init
$ terraform apply -var-file=dev.tfvars

This apply runs against the dev workspace, and the website_endpoint output gives you the URL for opening the web application in a browser.

You can create the prod workspace similarly by applying prod.tfvars:

$ terraform workspace new prod
$ terraform init
$ terraform apply -var-file=prod.tfvars

Now you should be able to run the web application in a prod environment as well.

Also, your setup now spans three repositories. The first repository holds the Terraform configuration and contains the two workspaces, dev and prod, so state is maintained per workspace no matter which -var-file you pass.

You’ll also notice that there is a separate state file for each environment/workspace.

Here is the structure of the three repositories:

Repository 1: Terraform configuration
├── README.md
├── assets
│   └── welcome.html
├── main.tf
├── outputs.tf
├── terraform.tfstate.d
│   ├── dev
│   │   └── terraform.tfstate
│   └── prod
│       └── terraform.tfstate
└── variables.tf

Repository 2: dev environment variables
├── README.md
└── dev.tfvars

Repository 3: prod environment variables
├── README.md
└── prod.tfvars

Summary

In this article, I reviewed several use cases that show how Terraform can help DevOps processes run smoothly and let you maintain infrastructure with versioning, code reuse, automated scaling, and much more. While I shared some of the most well-known examples, Terraform is extensible, so there are many other use cases in which it can enable IaC.


If you want to write expert-based articles like this one, join our talent network.
