
Introducing the architecture & Deploying the public side

In this series of blog posts, we will be building a highly available application on top of AWS cloud services.

We will be using Terraform to provision and deploy our infrastructure.

By the end of this series, we will have the following architecture deployed:

(Architecture diagram)

Infrastructure Setup

For the first part of this application, we will be deploying the public-facing part, as displayed in the image below.

(Diagram of the public part of the architecture)

We will have multiple resources to deploy:

  • VPC
  • 2 Public Subnets
  • Internet Gateway
  • Route Table
  • Application Load Balancer
  • 2 EC2 instances

You can check out the source code for this part here.

Terraform Resources

Under the terraform folder in the GitHub repository, you will notice a couple of files:

  • one creates the VPC, public subnets, internet gateway, security group, and route table
  • one creates the application load balancer, the listener, and the target group
  • one creates the compute instances
  • one declares the providers to use (only the aws provider for now)
  • one declares the variables used in the different resources
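As an illustration, the providers file probably looks something like the following minimal sketch. The provider version is inferred from the terraform init output later in the post; the region is an assumption, since the post does not state one.

```hcl
# minimal provider configuration (sketch)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.5.0" # matches the version shown in the init output
    }
  }
}

provider "aws" {
  region = "us-east-1" # assumption: the post does not state the region
}
```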

Creating the infrastructure

Our current infrastructure will consist of a VPC resource named main, declared with the CIDR block given by the main_cidr_block variable.

resource "aws_vpc" "main" {
  cidr_block = var.main_cidr_block

  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "main"
  }
}
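The variables file referenced above is not shown in the post; a sketch of the two declarations used so far could look like this (the defaults are illustrative, not necessarily the values the original code uses):

```hcl
# CIDR block for the main VPC
variable "main_cidr_block" {
  description = "CIDR block of the main VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# one CIDR block per public subnet
variable "public_cidr_blocks" {
  description = "CIDR blocks for the public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}
```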

We will have 2 public subnets in different availability zones (to achieve a highly available architecture):

# we are looping over the number of subnets we have and creating public subnets accordingly
resource "aws_subnet" "public_subnets" {
  count                   = length(var.public_cidr_blocks)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_cidr_blocks[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public_subnet_${count.index + 1}"
  }
}
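The subnet resource refers to a data.aws_availability_zones data source, which is presumably declared along these lines:

```hcl
# looks up the availability zones available in the configured region,
# so the subnets can be spread across them
data "aws_availability_zones" "available" {
  state = "available"
}
```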

We also need to create a security group that allows HTTP traffic in and out of the instances:

resource "aws_security_group" "main_sg" {
  name        = "allow_connection"
  description = "Allow HTTP"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "HTTP from anywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_http"
  }
}

Since our VPC will need to connect to the internet, we will create an Internet Gateway and attach it to our freshly created VPC as follows:

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}

We will also create a route table and associate our public subnets with it, so these subnets have a route to the internet:

resource "aws_route_table" "public_route" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "public_route"
  }
}

# we are creating two associations, one for each subnet
resource "aws_route_table_association" "public_route_association" {
  count          = length(var.public_cidr_blocks)
  subnet_id      = aws_subnet.public_subnets[count.index].id
  route_table_id = aws_route_table.public_route.id
}

Creating the Application Load Balancer

For this part, you can refer to the load balancer file, which creates an application load balancer named "front-end-lb":

# you can see here that we are referring to the security group and subnets that we created earlier
resource "aws_lb" "front_end" {
  name               = "front-end-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.main_sg.id]
  subnets            = aws_subnet.public_subnets.*.id

  enable_deletion_protection = false
}

The rest of the file creates a listener, a target group on port 80, and a target group attachment that registers the instances we will create with the load balancer's target group:

resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.front_end.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.front_end.arn
  }
}

resource "aws_lb_target_group" "front_end" {
  name     = "front-end-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_target_group_attachment" "front_end" {
  count            = length(aws_subnet.public_subnets)
  target_group_arn = aws_lb_target_group.front_end.arn
  target_id        = aws_instance.front_end[count.index].id
  port             = 80
}

Creating the Compute instances

The last part here is the EC2 instances that will be provisioned when Terraform applies the compute file. This will create 2 instances in the public subnets, with the security group that allows HTTP traffic attached to them:

resource "aws_instance" "front_end" {
  count                       = length(aws_subnet.public_subnets)
  ami                         =
  instance_type               = "t2.nano"
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.public_subnets[count.index].id

  vpc_security_group_ids = [aws_security_group.main_sg.id]

  user_data = <<-EOF
    #!/bin/bash
    yum update -y
    yum install -y httpd.x86_64
    systemctl start httpd.service
    systemctl enable httpd.service
    echo "Hello World from $(hostname -f)" > /var/www/html/index.html
  EOF

  tags = {
    Name = "HelloWorld_${count.index + 1}"
  }
}
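The AMI ID itself is not shown in the post. Since the user data uses yum and httpd, the instances are presumably running Amazon Linux 2; one common way to look up the latest Amazon Linux 2 AMI is with an aws_ami data source (a sketch, not necessarily what the original code does):

```hcl
# finds the most recent Amazon Linux 2 AMI published by Amazon
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# the instance resource could then reference:
#   ami = data.aws_ami.amazon_linux_2.id
```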

Deploying the infrastructure + application

Please make sure to have Terraform installed and AWS credentials configured locally.

Navigate to the terraform folder in your terminal and run:

terraform init

This will initialise the backend, install the AWS provider plugin, and prepare Terraform.

You should see the following output:

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.5.0"...
- Installing hashicorp/aws v4.5.0...
- Installed hashicorp/aws v4.5.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Now you can run terraform plan to get a speculative plan, giving an overall view of what will be created.

We will skip this step and directly run:

terraform apply

You will be prompted to approve the changes. Type yes.

It will take a couple of minutes for everything to be ready. If all goes well, Terraform will exit without an error and you will see something like the following:


Apply complete! Resources: 15 added, 0 changed, 0 destroyed.


lb_dns_url = "front-end-lb-*********"

Finally, run terraform output to print the outputs that are declared in the outputs file. You should see the following:

lb_dns_url = ""
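For reference, the lb_dns_url output is presumably declared along these lines in the outputs file:

```hcl
# exposes the load balancer's DNS name so it can be opened in a browser
output "lb_dns_url" {
  value = aws_lb.front_end.dns_name
}
```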

Pasting the URL into the browser, you should see something like this:

"Hello World from ip-10-0-1-68.ec2.internal"

When we refresh the page, we should hit the other instance and see the hostname of the second EC2 instance (if not, do a hard refresh).

Destroying the infrastructure

Keep in mind that some of these services incur charges, so don't forget to clean up the environment. You can do so by running terraform apply -destroy -auto-approve.


In this post, we created the infrastructure and the public part of our application: a main VPC with 2 public subnets, an Internet Gateway, an Application Load Balancer, and 2 compute instances.
We also saw how to provision and destroy our application using Terraform.

In the next blog post, we will deploy the private part of the infrastructure, along with some refactoring of our Terraform code (for example, using modules). Stay tuned, and I hope you have enjoyed Part 1.

Feel free to comment and leave your thoughts 🙏🏻!
