I've written an article in the past about how to host a static website on S3 using AWS CDK. Now, as I am learning Terraform, I wanted to repeat the same process and connect that site to a custom domain on Route 53.
In this post, we will host a static site on S3 and add appropriate Route 53 records to provide access to this website when a user loads our custom domain in the browser.
Site creation
For this demo, we will create a simple index.html page that we will later upload to S3:
<html>
  <head>
    <title>Static Website with Terraform</title>
  </head>
  <body>
    <b>Welcome to my static website</b>
  </body>
</html>
Infrastructure setup
In this section, we will prepare our local environment and get into writing our infrastructure as code. Our infrastructure will look as follows:
When the user enters the URL in the browser, the request leaves the browser looking for the IP address associated with the domain name. DNS resolution reaches the hosted zone in Route 53 through its delegation system, and an A record (an alias record in this case) routes the request to the website hosted in the S3 bucket.
NOTE Route 53 offers both registration and hosting services. If your domain is registered with Route 53, a hosted zone is created for it automatically - deleting that hosted zone and creating a new one can cause problems, so proceed with caution. If your domain is registered with an external service, you can still follow this post (for more information, see: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html)
Pre-requisites
First, install Terraform if you don't have it already. On macOS, run:
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
For other operating systems, check the docs.
Second, install AWS CLI and configure your AWS credentials on your machine: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html.
S3 bucket and website hosting
In this section, we will create an S3 bucket to host the static files of our website (in our case, the index.html we created earlier). Note that for this to work, the bucket name must exactly match your domain name (e.g. example.com or www.example.com).
Unlike my previous post on scheduling Lambda with Terraform, where I listed all of the resources in a single Terraform configuration file, I will structure the Terraform files a bit more in this post.
In a folder dedicated to infrastructure, create a provider.tf
file to define the cloud provider we're working with - AWS in our case - and specify the region (Sydney here; feel free to change it to the AWS region where you'd like to host your site):
provider "aws" {
  profile = "default"
  region  = "ap-southeast-2"
}
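Optionally - this is an addition, not part of the original setup - you can pin the Terraform and provider versions in the same file, so that terraform init resolves a predictable provider release (the version constraints below are assumptions; adjust them to whatever you have tested against):

```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Example constraint; pick the provider release you tested against.
      version = "~> 4.0"
    }
  }
}
```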
Next, create a new file variables.tf
where we declare the variables that we'd use as part of our configuration:
variable "domainName" {
  default = "www.example.com"
  type    = string
}

variable "bucketName" {
  default = "www.example.com"
  type    = string
}
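Rather than editing the defaults in place, you could also override these variables per environment with a terraform.tfvars file - a sketch, assuming your real domain replaces example.com:

```hcl
# terraform.tfvars - values here override the defaults declared in variables.tf
domainName = "www.example.com"
bucketName = "www.example.com"
```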
Now, we need to set up the bucket. To host a public static website on S3, the bucket needs the following:
- "Block all public access" turned off - note that newer AWS accounts enable this by default for new buckets, so you may need to disable it explicitly with an aws_s3_bucket_public_access_block resource
- "Static website hosting" enabled, with an index document (e.g. index.html) - you could also provide an error document
- A bucket policy that allows public read access
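If your bucket ends up with public access blocked (the default for newly created buckets on current AWS accounts), the following resource - an addition to the original post - disables the block so a public-read bucket policy can take effect; the resource name is arbitrary:

```hcl
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  # Allow public bucket policies and ACLs on this bucket.
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
```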
Create a file s3.tf
to set up our bucket:
resource "aws_s3_bucket" "example" {
  bucket = var.bucketName
}
We've used the variables declared earlier in the variables.tf
file to define the bucket name.
Add the following resource to configure your bucket as a static website:
resource "aws_s3_bucket_website_configuration" "example-config" {
  bucket = aws_s3_bucket.example.bucket

  index_document {
    suffix = "index.html"
  }
}
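If you also have an error page - say, an error.html, which is a hypothetical file not created in this post - the same resource accepts an error_document block as well:

```hcl
resource "aws_s3_bucket_website_configuration" "example-config" {
  bucket = aws_s3_bucket.example.bucket

  index_document {
    suffix = "index.html"
  }

  # Served for 4xx errors; "error.html" is an assumed file name.
  error_document {
    key = "error.html"
  }
}
```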
Next, let's attach a bucket policy:
resource "aws_s3_bucket_policy" "example-policy" {
  bucket = aws_s3_bucket.example.id
  policy = templatefile("s3-policy.json", { bucket = var.bucketName })
}
We're loading the policy from a file called s3-policy.json
- create this file with the following content:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}
That's mainly everything we need to get our website up. Now run the following commands to deploy your infrastructure:
- terraform init to initialize the working directory and download the AWS provider (needed on the first run)
- terraform plan to see the resources that are going to be provisioned (or changed)
- terraform apply to deploy them
Navigate to the AWS Console, and verify your bucket was created and has:
- Block all public access turned off under the Permissions tab
- The correct bucket policy under the Permissions tab
- Static website hosting enabled under the Properties tab
We still need to upload the index.html. I usually do this with the AWS CLI from a CI/CD pipeline, but we can also do it with Terraform. Here's how:
resource "aws_s3_object" "example-index" {
  bucket = aws_s3_bucket.example.id
  key    = "index.html"
  source = "../src/index.html"
  acl    = "public-read"
}
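One addition worth considering (not in the original snippet): without a content hash, Terraform won't notice when the file's contents change and won't re-upload it. Adding an etag fixes that:

```hcl
resource "aws_s3_object" "example-index" {
  bucket = aws_s3_bucket.example.id
  key    = "index.html"
  source = "../src/index.html"
  acl    = "public-read"

  # Re-upload the object whenever the local file changes.
  etag = filemd5("../src/index.html")
}
```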
Validate that index.html got created inside your bucket in the AWS console. Navigate to the bottom of the Properties tab, and click the "Bucket website endpoint" link - this will render the content of your index.html on a new page.
Note S3 assigns a content type of binary/octet-stream by default to any uploaded files. This means the browser will download index.html when you request your site rather than rendering the HTML. To fix this, you can add content_type = "text/html" to the block uploading the index.html file. Of course, this is not sustainable with more files and types; in that case you could create a JSON mapping from file extension to content type (e.g. {".html": "text/html"}), then loop over each file to set its content type. I found it easier to upload the files with the AWS CLI rather than Terraform.
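A minimal sketch of that loop, assuming your site's files live in ../src and covering only a few extensions (the mapping is illustrative, not exhaustive):

```hcl
locals {
  # Extension -> MIME type mapping; extend as needed.
  content_types = {
    ".html" = "text/html"
    ".css"  = "text/css"
    ".js"   = "text/javascript"
  }
}

resource "aws_s3_object" "site_files" {
  for_each = fileset("../src", "**")

  bucket = aws_s3_bucket.example.id
  key    = each.value
  source = "../src/${each.value}"
  acl    = "public-read"
  etag   = filemd5("../src/${each.value}")

  # Fall back to binary/octet-stream when the extension isn't in the map.
  content_type = lookup(
    local.content_types,
    try(regex("\\.[^.]+$", each.value), ""),
    "binary/octet-stream"
  )
}
```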
Configure Route 53 to point to our newly created website
If you have registered your domain in AWS Route 53, you will find a hosted zone automatically created. Otherwise, if your domain is registered elsewhere, use the following configuration to create a hosted zone:
resource "aws_route53_zone" "exampleDomain" {
  name = var.domainName
}
Create an A record to map your domain name to the S3 bucket:
resource "aws_route53_record" "exampleDomain-a" {
  zone_id = aws_route53_zone.exampleDomain.zone_id
  name    = var.domainName
  type    = "A"

  alias {
    name                   = aws_s3_bucket.example.website_endpoint
    zone_id                = aws_s3_bucket.example.hosted_zone_id
    evaluate_target_health = true
  }
}
If you skipped the creation of the hosted zone, you can retrieve your existing zone from the command line with aws route53 list-hosted-zones, then use its ID and domain in the corresponding fields of the previous snippet.
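Alternatively - a sketch that goes beyond the original post - you can let Terraform look up the existing hosted zone with a data source instead of hard-coding its ID:

```hcl
# Look up the hosted zone that already exists for the domain.
data "aws_route53_zone" "existing" {
  name = var.domainName
}

resource "aws_route53_record" "exampleDomain-a" {
  zone_id = data.aws_route53_zone.existing.zone_id
  name    = var.domainName
  type    = "A"

  alias {
    name                   = aws_s3_bucket.example.website_endpoint
    zone_id                = aws_s3_bucket.example.hosted_zone_id
    evaluate_target_health = true
  }
}
```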
NOTE I had my domain already registered, so I had an existing hosted zone in Route 53. Terraform then created a new hosted zone, and I ended up with two hosted zones for the same domain. I deleted the original hosted zone, and that was a mistake: DNS resolvers typically cache the names of the name servers for up to two days, which put my domain offline for that long. If you run into a similar problem, this page is very helpful: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/troubleshooting-new-dns-settings-not-in-effect.html#troubleshooting-new-dns-settings-not-in-effect-updated-wrong-hosted-zone
I hope this was helpful. Please let me know your thoughts.