Introduction
AWS S3 makes it easy to host and serve a static website in no time, and the best part is that it's pay-as-you-go. Let's learn how to host a website on AWS S3 using Terraform.
Step 1: Set up the AWS CLI (skip if already installed)
Whether it's your local machine or a CI/CD platform, you will need the AWS CLI set up there.
- Install - Assuming you have Python installed, you can just run this command in your terminal/cmd -
pip install awscli
Or pip3 if you have multiple Python versions installed. Otherwise, just follow the official AWS CLI installation doc.
- Configure the CLI - Run this command in the terminal -
aws configure
Then paste your access key, secret access key, region, and default output format as prompted -
AWS Access Key ID: MYACCESSKEY
AWS Secret Access Key: MYSECRETKEY
Default region name [us-west-2]: my-aws-region
Default output format [None]: json
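Behind the scenes, aws configure writes these values to two plain-text files in your home directory. A sketch of what they end up looking like (the paths and the [default] profile name are the AWS CLI's standard conventions):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = MYACCESSKEY
aws_secret_access_key = MYSECRETKEY

# ~/.aws/config
[default]
region = my-aws-region
output = json
```

Knowing where these live is handy if you ever need to switch between multiple AWS accounts using named profiles.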
That's it. Now you can start running Terraform code and make the magic happen.
Step 2: Organize directories
For this project we will have two directories -
- app, where all the website-related files will reside.
- terraform, where we will write the Terraform configuration files.
Everyone has their own preferences; I find it easy to organize code this way.
Step 3: Create website file(s)
For simplicity of demonstration I will use this code for my website and save it as index.html in the app directory. Well, ChatGPT generated it, because I am lazy.
<!DOCTYPE html>
<html>
<head>
<title>My Simple Website</title>
<style>
body {
font-family: Arial, sans-serif;
background-color: #f2f2f2;
margin: 0;
padding: 20px;
}
h1 {
color: #333333;
}
p {
color: #666666;
margin-bottom: 10px;
}
.container {
max-width: 600px;
margin: 0 auto;
background-color: #ffffff;
padding: 20px;
border-radius: 5px;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.1);
}
</style>
</head>
<body>
<div class="container">
<h1>Welcome to My Simple Website!</h1>
<p>This is a demonstration of a simple website hosted on AWS S3 using Terraform.</p>
<p>Feel free to customize this website and experiment with Terraform to deploy it.</p>
</div>
</body>
</html>
Now your app directory should look like this -
app
└── index.html
Step 4: Write Terraform configurations
Now we will write the Terraform configuration files. We will create three .tf configuration files -
- variables.tf
- main.tf
- bucket.tf
1. Write the variables.tf file
We can use a variables file in Terraform to declare variables and reference them elsewhere. In our case we will store the region and the S3 bucket name; make sure to change them to fit your needs (note that S3 bucket names must be globally unique).
variable "aws_region" {
default = "ap-south-1"
description = "your aws region"
}
variable "s3_bucket_name" {
default = "static-website-s3-bucket"
description = "name of s3 bucket. store website files in this bucket."
}
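If you'd rather not edit variables.tf directly, Terraform also picks up a terraform.tfvars file in the same directory automatically. A minimal sketch - the values here are placeholders, not part of the original setup:

```hcl
# terraform.tfvars - overrides the defaults declared in variables.tf
aws_region     = "us-east-1"
s3_bucket_name = "my-unique-bucket-name-12345"
```

This keeps environment-specific values out of the version-controlled variable declarations.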
2. Write the main.tf file
Here we declare the cloud provider information (AWS in our case; Terraform supports many providers).
provider "aws" {
region = var.aws_region
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.30"
}
}
}
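Optionally, you can also pin the Terraform core version inside the same terraform block, so teammates and CI run a compatible CLI. A sketch - the version constraint below is just an example, pick one that matches your installed Terraform:

```hcl
terraform {
  # example constraint; adjust to your installed Terraform version
  required_version = ">= 1.3.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
}
```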
3. Write the bucket.tf file
This is where all the magic happens. There will be multiple blocks, I will break them down for you and at the end you will get the whole code.
3.1. Create the S3 bucket
This little portion is enough to create an S3 bucket. Notice that it takes the name from the variables file using the var keyword.
resource "aws_s3_bucket" "s3_bucket" {
bucket = var.s3_bucket_name
}
3.2. Make the bucket and objects public
Here we change the bucket configuration to make it public.
# make the objects public for the website
resource "aws_s3_bucket_public_access_block" "s3_bucket_access_block" {
bucket = aws_s3_bucket.s3_bucket.id
block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}
# add bucket ownership control to bucket owner
resource "aws_s3_bucket_ownership_controls" "s3_bucket_ownership_ctrl" {
bucket = aws_s3_bucket.s3_bucket.id
rule {
object_ownership = "BucketOwnerPreferred"
}
depends_on = [aws_s3_bucket_public_access_block.s3_bucket_access_block]
}
# making the s3 bucket public
# allow acls
resource "aws_s3_bucket_acl" "s3_bucket_acl" {
depends_on = [ aws_s3_bucket_ownership_controls.s3_bucket_ownership_ctrl,
aws_s3_bucket_public_access_block.s3_bucket_access_block,
]
bucket = aws_s3_bucket.s3_bucket.id
acl = "public-read"
}
3.3. Attach bucket policy
Attach a bucket policy so that the objects can be read publicly -
resource "aws_s3_bucket_policy" "s3_bucket_policy" {
depends_on = [aws_s3_bucket_public_access_block.s3_bucket_access_block]
bucket = aws_s3_bucket.s3_bucket.id
policy = jsonencode(
{
"Version": "2008-10-17",
"Id": "ContentsAllow",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::${var.s3_bucket_name}/*",
"arn:aws:s3:::${var.s3_bucket_name}",
]
}
]
}
)
}
3.4. Provision/send files to S3
Now we will send the website file (index.html) from the app directory to the S3 bucket.
Here is a catch: if you do not set the content type of the files, S3 will serve them as downloads instead of rendering them in the browser. To handle this, we will create a file named mime.json where you can map file extensions (text files, image files, and so on) to their types. In our case I just added the MIME type for .html files. Create a mime.json file inside the terraform directory -
{
".html": "text/html"
}
In short, a MIME type tells the client what kind of file it is receiving. Google for more in-depth information.
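If your app directory grows beyond a single HTML file, extend mime.json with one entry per extension. These are the standard mappings for a few common web assets:

```json
{
  ".html": "text/html",
  ".css": "text/css",
  ".js": "application/javascript",
  ".png": "image/png",
  ".svg": "image/svg+xml"
}
```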
Now we can read the mime.json file and upload the objects along with their content types -
# load mime types
locals {
mime_types = jsondecode(file("mime.json"))
}
# send website files to s3 bucket
resource "aws_s3_object" "provision_app_files_to_s3" {
bucket = aws_s3_bucket.s3_bucket.id
for_each = fileset("../app/", "**/*.*")
key = each.value
source = "../app/${each.value}"
etag = filemd5("../app/${each.value}")
content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), null)
}
3.5. Add bucket CORS configuration
CORS stands for 'Cross-Origin Resource Sharing'; it controls which other origins are allowed to access your website's resources. For this demo I am using a wildcard, but more restrictive permissions are advised.
resource "aws_s3_bucket_cors_configuration" "s3_bucket_cors" {
bucket = aws_s3_bucket.s3_bucket.id
cors_rule {
allowed_headers = ["*"]
allowed_methods = ["GET", "POST"]
allowed_origins = ["*"]
expose_headers = []
max_age_seconds = 3000
}
}
Finally,
3.6. Set up static website
Now we serve the index.html file -
resource "aws_s3_bucket_website_configuration" "static_site" {
bucket = aws_s3_bucket.s3_bucket.bucket
index_document {
suffix = "index.html"
}
}
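The website configuration also supports a custom error page. A hedged variant of the resource above - it assumes you add an error.html file to the app directory (that filename is my assumption, not part of the original setup):

```hcl
resource "aws_s3_bucket_website_configuration" "static_site" {
  bucket = aws_s3_bucket.s3_bucket.bucket

  index_document {
    suffix = "index.html"
  }

  # served for 4xx errors; assumes app/error.html exists
  error_document {
    key = "error.html"
  }
}
```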
bucket.tf full code
resource "aws_s3_bucket" "s3_bucket" {
bucket = var.s3_bucket_name
}
# make the objects public for the website
resource "aws_s3_bucket_public_access_block" "s3_bucket_access_block" {
bucket = aws_s3_bucket.s3_bucket.id
block_public_acls = false
block_public_policy = false
ignore_public_acls = false
restrict_public_buckets = false
}
# add bucket ownership control to bucket owner
resource "aws_s3_bucket_ownership_controls" "s3_bucket_ownership_ctrl" {
bucket = aws_s3_bucket.s3_bucket.id
rule {
object_ownership = "BucketOwnerPreferred"
}
depends_on = [aws_s3_bucket_public_access_block.s3_bucket_access_block]
}
# making the s3 bucket public
# allow acls
resource "aws_s3_bucket_acl" "s3_bucket_acl" {
depends_on = [ aws_s3_bucket_ownership_controls.s3_bucket_ownership_ctrl,
aws_s3_bucket_public_access_block.s3_bucket_access_block,
]
bucket = aws_s3_bucket.s3_bucket.id
acl = "public-read"
}
resource "aws_s3_bucket_policy" "s3_bucket_policy" {
depends_on = [aws_s3_bucket_public_access_block.s3_bucket_access_block]
bucket = aws_s3_bucket.s3_bucket.id
policy = jsonencode(
{
"Version": "2008-10-17",
"Id": "ContentsAllow",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": [
"arn:aws:s3:::${var.s3_bucket_name}/*",
"arn:aws:s3:::${var.s3_bucket_name}",
]
}
]
}
)
}
# load mime types
locals {
mime_types = jsondecode(file("mime.json"))
}
# send website files to s3 bucket
# ref: https://engineering.statefarm.com/blog/terraform-s3-upload-with-mime/
resource "aws_s3_object" "provision_app_files_to_s3" {
bucket = aws_s3_bucket.s3_bucket.id
for_each = fileset("../app/", "**/*.*")
key = each.value
source = "../app/${each.value}"
etag = filemd5("../app/${each.value}")
content_type = lookup(local.mime_types, regex("\\.[^.]+$", each.value), null)
}
resource "aws_s3_bucket_cors_configuration" "s3_bucket_cors" {
bucket = aws_s3_bucket.s3_bucket.id
cors_rule {
allowed_headers = ["*"]
allowed_methods = ["GET", "POST"]
allowed_origins = ["*"]
expose_headers = []
max_age_seconds = 3000
}
}
resource "aws_s3_bucket_website_configuration" "static_site" {
bucket = aws_s3_bucket.s3_bucket.bucket
index_document {
suffix = "index.html"
}
}
You will find the full code here.
Apply changes to AWS
Now we will apply all of this on AWS. Go to your terraform directory, open a terminal there, and run -
terraform init
This will download the necessary providers and modules. After that you can run terraform plan to preview the changes, or simply apply using -
terraform apply
It will show the plan first and ask for your confirmation. If you want it to auto-approve, just run -
terraform apply -auto-approve
After that you will see the changes in your AWS account. Go to the AWS Management Console, navigate to your S3 bucket, click on 'Properties', scroll down, and you will find the website URL.
Here is a better trick. It's tiresome to fetch the website endpoint from the AWS Management Console, so I created an output.tf file. Note that the website_endpoint attribute on aws_s3_bucket is deprecated in AWS provider v4, so we read it from the website configuration resource instead -
output "website_url" {
value = aws_s3_bucket_website_configuration.static_site.website_endpoint
}
This prints the website URL right in the terminal after terraform apply (or on demand with terraform output website_url).
After playing around, destroy everything to stop billing -
terraform destroy -auto-approve
Conclusion
AWS is fun, and Terraform makes it even better. Serving a website from S3 is easy, but the work is not done yet: putting CloudFront in front of the bucket adds caching and HTTPS and can reduce cost. I will write about it soon.
Again, you will find the code in this repository. Keep exploring, keep learning. Best wishes.