In this series of blog posts, we will build a highly available application on top of AWS cloud services.
We will be using Terraform to provision and deploy our infrastructure.
By the end of this series, we will have the following architecture deployed:
Infrastructure Setup
For the first part of this application, we will deploy the public part of the architecture, as shown in the image below.
We will have multiple resources to deploy:
- VPC
- 2 Public Subnets
- Internet Gateway
- Route Table
- Application Load Balancer
- 2 EC2 instances
You can check out the source code for this part here
Terraform Resources
Under the terraform folder in the GitHub repository, you will notice a couple of files:
- vpc.tf -> creates the VPC, public subnets, internet gateway, security group, and route table
- lb.tf -> creates the application load balancer, the listener, and the target group
- ec2.tf -> creates the compute instances
- main.tf -> declares the providers to use (only the aws provider for now)
- variables.tf -> declares the variables used in the different resources
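As a sketch of the last two files (the variable names are taken from the references in the resource blocks below, and the defaults, region, and provider version are assumptions, not from the original code):

```hcl
# variables.tf -- minimal sketch; names match var.main_cidr_block
# and var.public_cidr_blocks used in the resources below
variable "main_cidr_block" {
  description = "CIDR block for the main VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_cidr_blocks" {
  description = "CIDR blocks for the public subnets, one per subnet"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

# main.tf -- pins the aws provider; the constraint matches the
# "~> 4.5.0" shown in the terraform init output later in the post
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # assumed; the LB DNS name later suggests us-east-1
}
```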
Creating the infrastructure
Our current infrastructure consists of a VPC resource named main, declared with a CIDR block of "10.0.0.0/16".
resource "aws_vpc" "main" {
  cidr_block           = var.main_cidr_block
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "main"
  }
}
We will have 2 public subnets in different availability zones (to achieve a highly available architecture).
# we are looping over the number of subnets we have and creating public subnets accordingly
resource "aws_subnet" "public_subnets" {
  count                   = length(var.public_cidr_blocks)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_cidr_blocks[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public_subnet_${count.index + 1}"
  }
}
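The subnet block above references data.aws_availability_zones.available, which is not shown in the post; it presumably looks up the usable availability zones in the configured region, along these lines:

```hcl
# looks up the availability zones that are currently usable in the
# configured region, so each public subnet lands in a different AZ
data "aws_availability_zones" "available" {
  state = "available"
}
```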
We need to create a security group that allows HTTP traffic in and out of the instances.
resource "aws_security_group" "main_sg" {
  name        = "allow_connection"
  description = "Allow HTTP"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "HTTP from anywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_http"
  }
}
Since our VPC will need to connect to the internet, we need to create an Internet Gateway and attach it to our freshly created VPC as follows:
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}
We will also create a route table and associate our public subnets with it, giving these subnets a route to the internet.
resource "aws_route_table" "public_route" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "public_route"
  }
}
# we are creating two associations, one for each subnet
resource "aws_route_table_association" "public_route_association" {
  count          = length(var.public_cidr_blocks)
  subnet_id      = aws_subnet.public_subnets[count.index].id
  route_table_id = aws_route_table.public_route.id
}
Creating the Application Load Balancer
For this part, you can refer to the lb.tf file. This file creates an application load balancer named "front-end-lb".
# you can see here that we are referring to the security group and subnets that we have created earlier
resource "aws_lb" "front_end" {
  name               = "front-end-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.main_sg.id]
  subnets            = aws_subnet.public_subnets[*].id

  enable_deletion_protection = false
}
The rest of the file creates a load balancer listener, a target group on port 80, and a target group attachment for each instance, registering the instances we will create with the load balancer target group.
resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.front_end.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.front_end.arn
  }
}
resource "aws_lb_target_group" "front_end" {
  name     = "front-end-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_target_group_attachment" "front_end" {
  count            = length(aws_subnet.public_subnets)
  target_group_arn = aws_lb_target_group.front_end.arn
  target_id        = aws_instance.front_end[count.index].id
  port             = 80
}
Creating the Compute instances
The last part is the EC2 instances that will be provisioned when Terraform runs the ec2.tf file. This creates 2 instances in the public subnets, with the security group that allows HTTP traffic attached to them.
resource "aws_instance" "front_end" {
  count                       = length(aws_subnet.public_subnets)
  ami                         = data.aws_ami.amazon_linux_2.id
  instance_type               = "t2.nano"
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.public_subnets[count.index].id

  vpc_security_group_ids = [
    aws_security_group.main_sg.id,
  ]

  # user_data scripts already run as root, so no sudo is needed
  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd.x86_64
    systemctl start httpd.service
    systemctl enable httpd.service
    echo "Hello World from $(hostname -f)" > /var/www/html/index.html
  EOF

  tags = {
    Name = "HelloWorld_${count.index + 1}"
  }
}
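The instance block references data.aws_ami.amazon_linux_2, which is not shown in the post; it presumably looks up the latest Amazon Linux 2 AMI, along these lines (the name filter is an assumption):

```hcl
# finds the most recent Amazon Linux 2 AMI published by Amazon
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```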
Deploying the infrastructure + application
Please make sure you have Terraform installed and AWS credentials configured locally.
Navigate to the terraform folder in your terminal and run:
terraform init
This will initialise the backend, install the aws provider plugin, and prepare Terraform.
You should see the following output:
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.5.0"...
- Installing hashicorp/aws v4.5.0...
- Installed hashicorp/aws v4.5.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Now you can run a terraform speculative plan to have an overall view of what will be created.
We will skip this part and directly run
terraform apply
You will be prompted to approve the changes; type yes.
It will take a couple of minutes for everything to be ready. If all goes well, Terraform will exit without an error and you will see something like the following:
.........
Apply complete! Resources: 15 added, 0 changed, 0 destroyed.
Outputs:
lb_dns_url = "front-end-lb-*********.us-east-1.elb.amazonaws.com"
Finally, run terraform output to print the outputs that are declared in outputs.tf.
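outputs.tf is not shown in the file list above; a minimal sketch consistent with the output name used here:

```hcl
# outputs.tf -- exposes the load balancer's public DNS name
output "lb_dns_url" {
  description = "Public DNS name of the application load balancer"
  value       = aws_lb.front_end.dns_name
}
```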
You should see the following
lb_dns_url = "front-end-lb-1873116014.us-east-1.elb.amazonaws.com"
Pasting the URL into the browser, you should see something like this:
"Hello World from ip-10-0-1-68.ec2.internal"
When we refresh the page, we should hit the other instance and see the hostname of the second EC2 instance (if not, do a hard refresh).
Destroying the infrastructure
Keep in mind that some of these services incur charges, so don't forget to clean up the environment. You can do so by running terraform apply -destroy -auto-approve
Summary
In this post, we created the infrastructure and the public part of our application: a main VPC with 2 public subnets, an Internet Gateway, a Load Balancer, and 2 compute instances.
We saw how to provision and destroy our application using Terraform.
In the next blog post, we will deploy the private part of the infrastructure, along with some refactoring of our Terraform code (for example, using modules). Stay tuned, and I hope you enjoyed Part 1.
Feel free to comment and leave your thoughts 🙏🏻!