Deploying a Highly Available Web App on AWS Using Terraform
Deploying a highly available web application on AWS is a crucial step in ensuring that your application can handle traffic, scale with demand, and remain resilient during failures. Terraform, as an Infrastructure as Code (IaC) tool, simplifies this process by enabling you to define and manage your infrastructure using code. In this article, we'll explore how to deploy a highly available web app on AWS using Terraform while adhering to the DRY (Don't Repeat Yourself) principle.
Why is High Availability Important?
High availability (HA) ensures that your application remains accessible even during failures or traffic spikes. This is achieved by distributing resources across multiple Availability Zones (AZs) and implementing mechanisms such as load balancing and auto-scaling.
Prerequisites
Before we begin, make sure you have the following:
- AWS Account: An active AWS account with sufficient permissions.
- Terraform Installed: Ensure Terraform is installed on your local machine (see HashiCorp's official installation guide).
- AWS CLI Installed and Configured: Set up the AWS CLI with your credentials (see the AWS CLI installation guide).
Step 1: Define Variables
Instead of hardcoding values throughout your Terraform configuration, we'll use variables to define reusable parameters. Create a file named variables.tf:
variable "region" {
description = "AWS region"
default = "us-east-1"
}
variable "instance_type" {
description = "EC2 instance type"
default = "t2.micro"
}
variable "ami_id" {
description = "Amazon Machine Image ID"
default = "ami-066784287e358dad1" # Replace with your preferred AMI
}
variable "key_name" {
description = "Name of the EC2 key pair"
}
variable "vpc_id" {
description = "VPC ID"
}
variable "subnet_ids" {
description = "List of Subnet IDs"
type = list(string)
}
These variables will be used across different resources to avoid repetition and allow easy modifications. For example, changing the instance type or region in the future will only require updating the variable value.
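Note that key_name has no default, so Terraform will prompt for it interactively at apply time unless you supply a value. A convenient way to do that is a terraform.tfvars file, which Terraform loads automatically (a minimal sketch; the key pair name is a placeholder for one that exists in your account):

region        = "us-east-1"
instance_type = "t2.micro"
key_name      = "my-key-pair" # Replace with an existing EC2 key pair name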
Step 2: Create a Reusable Module for EC2 Instances
Modules in Terraform allow you to group and reuse resources. Let's create a module for EC2 instances that can be reused for different parts of the infrastructure. Create a directory named modules/ec2_instance and add the following files:
main.tf (inside modules/ec2_instance):
resource "aws_instance" "this" {
ami = var.ami_id
instance_type = var.instance_type
key_name = var.key_name
subnet_id = var.subnet_id
vpc_security_group_ids = var.security_group_ids
user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
echo "Hello from $(hostname -f)" > /var/www/html/index.html
EOF
tags = var.tags
}
variables.tf (inside modules/ec2_instance):
variable "ami_id" {
description = "AMI ID for the EC2 instance"
}
variable "instance_type" {
description = "Instance type"
}
variable "key_name" {
description = "Name of the key pair"
}
variable "subnet_id" {
description = "Subnet ID"
}
variable "security_group_ids" {
description = "List of security group IDs"
type = list(string)
}
variable "tags" {
description = "Tags for the instance"
type = map(string)
}
Now, this module can be reused for deploying EC2 instances across different AZs, which is key for high availability.
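One more module file is needed: in Step 5 the load balancer attachments must reference each instance's ID, and a module's resources aren't directly addressable from the root configuration, so the module has to export the ID through an output.
outputs.tf (inside modules/ec2_instance):

output "instance_id" {
  description = "ID of the EC2 instance"
  value       = aws_instance.this.id
}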
Step 3: Create a VPC and Networking Resources
To ensure high availability, we need to set up a Virtual Private Cloud (VPC) with multiple subnets across different Availability Zones. Create a file named network.tf:
provider "aws" {
region = var.region
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "main-vpc"
}
}
resource "aws_subnet" "subnet_a" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "${var.region}a"
tags = {
Name = "subnet-a"
}
}
resource "aws_subnet" "subnet_b" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "${var.region}b"
tags = {
Name = "subnet-b"
}
}
resource "aws_internet_gateway" "gw" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main-gateway"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.gw.id
}
tags = {
Name = "public-route-table"
}
}
resource "aws_route_table_association" "a" {
subnet_id = aws_subnet.subnet_a.id
route_table_id = aws_route_table.public.id
}
resource "aws_route_table_association" "b" {
subnet_id = aws_subnet.subnet_b.id
route_table_id = aws_route_table.public.id
}
This configuration creates a VPC, subnets across different AZs, an Internet Gateway, and a route table to ensure public access to the instances.
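The two subnet blocks differ only in their CIDR and AZ suffix. If you'd rather not hardcode AZ names, a count-based variant keeps things DRY (a sketch using the aws_availability_zones data source; the resource name public is illustrative):

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 1) # 10.0.1.0/24, 10.0.2.0/24
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "subnet-${count.index}"
  }
}

The rest of this guide keeps the explicit subnet_a/subnet_b blocks so each reference stays easy to trace; with the count form you would use aws_subnet.public[0].id and aws_subnet.public[1].id instead.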
Step 4: Deploy Highly Available EC2 Instances
Now, let's deploy EC2 instances across multiple subnets using the module we created. In your main Terraform configuration file (main.tf), include the following:
module "web_server_a" {
source = "./modules/ec2_instance"
ami_id = var.ami_id
instance_type = var.instance_type
key_name = var.key_name
subnet_id = aws_subnet.subnet_a.id
security_group_ids = [aws_security_group.web_sg.id]
tags = {
Name = "web-server-a"
}
}
module "web_server_b" {
source = "./modules/ec2_instance"
ami_id = var.ami_id
instance_type = var.instance_type
key_name = var.key_name
subnet_id = aws_subnet.subnet_b.id
security_group_ids = [aws_security_group.web_sg.id]
tags = {
Name = "web-server-b"
}
}
Here, we’re deploying two EC2 instances across two different subnets (Availability Zones) using the same module, which adheres to the DRY principle.
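If you want to take DRY a step further, Terraform 0.13+ supports for_each on module blocks, so both instances can come from a single module call (a sketch; the map keys a and b are arbitrary labels):

module "web_server" {
  source   = "./modules/ec2_instance"
  for_each = {
    a = aws_subnet.subnet_a.id
    b = aws_subnet.subnet_b.id
  }

  ami_id             = var.ami_id
  instance_type      = var.instance_type
  key_name           = var.key_name
  subnet_id          = each.value
  security_group_ids = [aws_security_group.web_sg.id]

  tags = {
    Name = "web-server-${each.key}"
  }
}

Individual instances are then addressed as module.web_server["a"].instance_id. This article sticks with the two explicit module blocks so each step reads clearly.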
Step 5: Set Up a Load Balancer
To achieve high availability, you need to distribute traffic between these instances. We'll use an Application Load Balancer (ALB) to manage incoming traffic. Add the following to main.tf:
resource "aws_lb" "app_lb" {
name = "app-lb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.web_sg.id]
subnets = [aws_subnet.subnet_a.id, aws_subnet.subnet_b.id]
tags = {
Name = "app-lb"
}
}
resource "aws_lb_target_group" "app_tg" {
name = "app-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
health_check {
path = "/"
port = "80"
}
tags = {
Name = "app-tg"
}
}
resource "aws_lb_listener" "app_listener" {
load_balancer_arn = aws_lb.app_lb.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.app_tg.arn
}
tags = {
Name = "app-listener"
}
}
resource "aws_lb_target_group_attachment" "a" {
target_group_arn = aws_lb_target_group.app_tg.arn
target_id = module.web_server_a.aws_instance.this.id
port = 80
}
resource "aws_lb_target_group_attachment" "b" {
target_group_arn = aws_lb_target_group.app_tg.arn
target_id = module.web_server_b.aws_instance
.this.id
port = 80
}
Explanation:
- Load Balancer: The aws_lb resource creates an Application Load Balancer that distributes traffic across multiple EC2 instances.
- Target Group: The aws_lb_target_group resource defines the group of targets (EC2 instances) that the load balancer directs traffic to.
- Listener: The aws_lb_listener resource listens for incoming traffic on port 80 and forwards it to the target group.
- Target Group Attachments: The aws_lb_target_group_attachment resources register the EC2 instances (deployed in different subnets) with the target group, using the instance_id output exported by the module.
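The health check above relies on provider defaults for timing and thresholds. If you want failures detected faster (or more conservatively), the block can be tuned explicitly; here is a sketch with illustrative values, all standard arguments of the aws_lb_target_group health_check block:

health_check {
  path                = "/"
  port                = "traffic-port" # Check each target on its registered port
  protocol            = "HTTP"
  matcher             = "200"          # HTTP status considered healthy
  interval            = 30             # Seconds between checks
  timeout             = 5              # Seconds to wait for a response
  healthy_threshold   = 3              # Consecutive successes to mark healthy
  unhealthy_threshold = 3              # Consecutive failures to mark unhealthy
}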
Step 6: Security Groups
To allow traffic, you need to define security groups. Here’s an example that permits HTTP and SSH traffic:
resource "aws_security_group" "web_sg" {
name = "web-sg"
description = "Allow HTTP and SSH traffic"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "web-sg"
}
}
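Opening port 22 to 0.0.0.0/0 is convenient for a demo but risky for anything long-lived. A safer variant restricts SSH to a known address range (the CIDR below is a documentation placeholder; substitute your own IP):

ingress {
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.0/24"] # Replace with your own IP range
}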
Step 7: Initialize and Deploy
Once your configuration files are in place, follow these steps to deploy your infrastructure:
- Initialize Terraform:
terraform init
- Review the Execution Plan:
terraform plan
- Apply the Configuration:
terraform apply -auto-approve
Terraform will now create the infrastructure, including the VPC, subnets, EC2 instances, load balancer, and security groups.
Step 8: Verify the Deployment
After the deployment is complete, you'll want the public DNS name of your load balancer. The configuration so far doesn't declare any outputs, so add one to main.tf:
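output "load_balancer_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.app_lb.dns_name
}

After the next terraform apply, Terraform prints the DNS name, and you can access your web application by visiting it in your browser: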
http://<load_balancer_dns_name>
You should see a message like "Hello from <hostname>", indicating that your highly available web app is up and running.
Step 9: Clean Up Resources
When you're done testing, make sure to clean up the resources to avoid incurring unnecessary costs:
terraform destroy -auto-approve
Conclusion
Deploying a highly available web application on AWS using Terraform is a powerful approach to ensuring scalability and resilience. By applying the DRY principle, you reduce redundancy in your Terraform code, making it more manageable and easier to modify. This guide has walked you through creating a VPC, subnets across multiple availability zones, EC2 instances using reusable modules, and an Application Load Balancer to distribute traffic. With Terraform's automation capabilities, your infrastructure is ready to handle production-level workloads efficiently.