John Alcher

Automated Cloud: Web Hosting Basics

Introduction

This article covers the second part of the "Automated Cloud" series where we deploy a simple web page onto an EC2 instance and have it publicly available on the internet.

The Tasks

  • Account Basics
  • Web Hosting Basics (this article)
  • Auto Scaling
  • External Data
  • Web Hosting - PaaS + S3
  • Microservices
  • Serverless
  • Continuous Delivery

Task #2 - Web Hosting Basics

The steps for this task are pretty straightforward:

  1. Deploy an EC2 instance with a simple static "Coming Soon" page.
  2. Make the process repeatable.
  3. Checkpoint: The HTML page served by the EC2 instances is publicly viewable.

Deploying a static page onto an EC2 instance

To start, I manually provisioned (through the AWS console) a t2.micro EC2 instance running Amazon Linux 2. I then attached two security group rules that allow inbound TCP connections on ports 22 and 80, so the instance can be reached over SSH and HTTP, respectively.
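
For reference, here's roughly what the same manual setup looks like with the AWS CLI. This is just a sketch, not the exact steps I took: the security group name and key pair are placeholders, and the AMI ID is the ap-southeast-1 Amazon Linux 2 image used in the Terraform config further down.

# Create a security group in the default VPC (the name is a placeholder)
aws ec2 create-security-group \
    --group-name wabby-sg \
    --description "SSH and HTTP access for the Wabby project"

# Open ports 22 (SSH) and 80 (HTTP) to the world
aws ec2 authorize-security-group-ingress \
    --group-name wabby-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
    --group-name wabby-sg --protocol tcp --port 80 --cidr 0.0.0.0/0

# Launch a t2.micro Amazon Linux 2 instance with that security group
# (swap in an AMI ID and key pair valid for your own region and account)
aws ec2 run-instances \
    --image-id ami-00b8d9cb8a7161e41 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-groups wabby-sg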

Once SSH'ed onto the machine, I installed Nginx (https://www.nginx.com/) and configured a simple nginx.conf that serves a static page at the root. I have a spare domain, wabby.xyz, so I thought I'd use that as the basis of the project. At this point, the web page is publicly viewable through the EC2 instance's public IP. What will the website do? Not really sure yet, lol. We'll see as I progress through the sections :)
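
A minimal sanity check at this point, assuming Nginx was installed and configured as described (the public IP below is a placeholder):

# On the instance: validate the config and confirm Nginx is running
sudo nginx -t
sudo systemctl status nginx

# From your local machine: confirm the page is reachable over plain HTTP
curl -I http://<ec2-public-ip>/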

Making it repeatable: EC2 User Scripts

As I was doing the steps above, I had a separate tab open to take notes of the commands I was running. Once I was able to access the EC2 instance from my browser and confirm that my index.html file was being served, I terminated it and created a new one, this time adding a User Data script containing the steps I had executed manually.

#!/bin/bash
# Update and install Nginx
yum update -y
amazon-linux-extras install -y nginx1

# Create the static page to be served
mkdir -p /data/www
echo "<h1>Coming Soon: Wabby-as-a-Service</h1>" > /data/www/index.html

# Create a simple nginx.conf
tee /etc/nginx/nginx.conf << EOF > /dev/null
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    server {
        location / {
            root /data/www;
        }
    }
}
EOF

# Reload and run Nginx on startup
systemctl start nginx
systemctl enable nginx
nginx -s reload  

After half an hour of debugging why my script wasn't working (I learned a lot: HEREDOCs, the tee command, systemctl, the /var/log/cloud-init* log files, etc.), I was able to recreate the initial instance using the script above. You do this by going to EC2 > Launch Instance > Step 3 (Configure Instance) > Advanced Details > User Data and pasting the script there. Do note that the User Data script is run as the root user, so the commands don't need to be prefixed with sudo.
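
If the script misbehaves, the cloud-init logs on the instance are the first place to look. A rough sketch of how to inspect them, using the standard cloud-init paths on Amazon Linux 2:

# cloud-init's own activity, plus the stdout/stderr of the User Data script
sudo less /var/log/cloud-init.log
sudo less /var/log/cloud-init-output.log

# cloud-init also keeps a copy of the User Data script it ran
sudo cat /var/lib/cloud/instance/scripts/part-001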

Automation

Now comes the fun part: trying out Terraform and automating the provisioning of the instances. I basically just listed down what I need to recreate my manual infrastructure:

  • The default VPC
  • A security group that allows inbound SSH and HTTP from anywhere
  • A t2.micro EC2 instance configured through the User Data script above

The tricky part is the configuration of the provisioned instance. I was about to dive into Ansible to install the software that I need, but I realized the User Data script that I used was "good enough" for this exercise. I think I'll try to stick to this approach until I appreciate what Ansible can do for me!

Here's the Terraform script to provision the resources I listed above:

// main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 2.70"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "ap-southeast-1"
}

resource "aws_default_vpc" "default" {}

resource "aws_security_group" "wabby_sg" {
  name        = "wabby_sg"
  description = "Security group for the Wabby project."
  vpc_id      = aws_default_vpc.default.id

  ingress {
    description = "Allow SSH from everywhere."
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow HTTP from everywhere."
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "Allow all outbound traffic."
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami             = "ami-00b8d9cb8a7161e41"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.wabby_sg.name]
  user_data       = <<-EOT
#!/bin/bash
yum update -y
amazon-linux-extras install -y nginx1

mkdir -p /data/www
echo "<h1>Coming Soon: Wabby-as-a-Service</h1>" > /data/www/index.html

tee /etc/nginx/nginx.conf <<EOF > /dev/null
worker_processes  1;

events {
  worker_connections  1024;
}

http {
  server {
    location / {
      root /data/www;
    }
  }
}
EOF

systemctl start nginx
systemctl enable nginx
nginx -s reload
EOT

  tags = {
    Name = "wabby_web_server"
  }
}

The code is again pretty self-explanatory. It creates an EC2 instance with the appropriate security group rules to allow SSH and HTTP access, and configures the instance with the same User Data script we created earlier. The default VPC is used to keep things simple. If most of this isn't familiar, you might want to go through the Terraform tutorial to get up to speed.
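
To actually stand this up, the usual Terraform workflow is enough. A quick sketch, assuming your AWS credentials live under the default profile referenced in the provider block:

# Download the AWS provider and initialize the working directory
terraform init

# Preview the resources that will be created
terraform plan

# Create the default VPC reference, the security group, and the EC2 instance
terraform apply

# Tear everything down once you're done experimenting
terraform destroy

Once the apply finishes, you can grab the instance's public IP from the console (or from terraform show) and curl it to confirm the "Coming Soon" page is up.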

I'll see you on the next part!
