Hosting static websites on Amazon S3 has become a common practice. It's a fun and popular hands-on project for learning AWS, and AWS even provides an official tutorial for it. If you search for "s3 static website", you will find plenty of articles on the topic.
I wanted an easy, fast, and repeatable way to do this instead of clicking through the console, so I turned to Ansible. What's the advantage? S3 has "predictable" ARNs, and the bucket, object, and website endpoint URLs all follow fixed formats, so we know every value we need before we start. That means the whole setup can run in one go: hit the Enter key once to run the Ansible playbook command, and we can access all the websites as soon as Ansible has finished the tasks.
Prerequisites:
Install Ansible on localhost.
Install the AWS collections by running:
ansible-galaxy collection install amazon.aws
ansible-galaxy collection install community.aws
Here we will use three modules: amazon.aws.s3_bucket to create and manage the buckets, community.aws.s3_sync to upload multiple files, and community.aws.s3_website for the website settings.
Install the AWS CLI and set up the credentials on localhost.
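For the credentials, the quickest way is the aws configure command, which prompts for the access key ID, secret access key, and default region (ap-southeast-3 in this article) and stores them under ~/.aws/:
aws configure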
What do we need?
An Ansible playbook, consisting of an inventory and a YAML file containing the tasks.
A policy document for each bucket.
The files for the website, such as HTML pages and images.
A task to enable static website hosting on each bucket.
The file hierarchy (please make sure yours matches the following):
s3
├── dhonas3
│ ├── 404.html
│ ├── dhonas3-policy.json
│ ├── error.png
│ ├── index.html
│ └── s3web.png
├── host.yml
├── nuruls3
│ ├── 404.html
│ ├── error.png
│ ├── index.html
│ ├── nuruls3-policy.json
│ └── s3web.png
└── s3.yml
1. Ansible Playbook
- Inventory
The inventory uses localhost as the target host. The following inventory is in YAML format; I named the file host.yml.
all:
  hosts:
    localhost:
- Playbook Task
I named the file s3.yml. I'll split the tasks into several parts (in case you just need to run specific tasks later).
- name: s3
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
To create the buckets, first make sure the names you want are available, because bucket names must be globally unique. In this case, I use my name with s3 appended. Don't forget to set the region as well; here I create two buckets in Indonesia (ap-southeast-3). I also use a loop to apply the same action to both buckets.
- name: create bucket
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    region: ap-southeast-3
  loop: [nuruls3,dhonas3]
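After this task runs, you can verify that both buckets exist with the AWS CLI:
aws s3 ls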
Then, we have to make the buckets accessible to the public.
- name: enable public access
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    public_access:
      block_public_policy: false
  loop: [nuruls3,dhonas3]
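If your buckets still return 403 errors after the policy is applied, your account or bucket defaults may be blocking public access in other ways. Here is a variant of the task above that relaxes all four public-access flags (a sketch, assuming your amazon.aws collection version supports the full public_access block):
- name: enable public access
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    public_access:
      block_public_acls: false
      block_public_policy: false
      ignore_public_acls: false
      restrict_public_buckets: false
  loop: [nuruls3,dhonas3]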
2. Policy Document
We also need to add a policy that grants public read access to the website. See the AWS documentation on website access permissions for more information.
- name: add policy to bucket
  amazon.aws.s3_bucket:
    name: "{{ item.bucket }}"
    policy: "{{ item.policy }}"
  loop:
    - { bucket: "nuruls3", policy: "{{ lookup('file','nuruls3/nuruls3-policy.json') }}" }
    - { bucket: "dhonas3", policy: "{{ lookup('file','dhonas3/dhonas3-policy.json') }}" }
The policy documents should look like these:
Note: Don't forget to replace the bucket name in the Resource element!
Resource format (path to bucket):
"Resource": "arn:aws:s3:::[bucketname]/*"
Policy for the nuruls3 bucket (named nuruls3-policy.json):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StaticWebsite",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::nuruls3/*"
    }
  ]
}
Policy for the dhonas3 bucket (named dhonas3-policy.json):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StaticWebsite",
      "Principal": "*",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::dhonas3/*"
    }
  ]
}
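Once the task has run, you can confirm that a policy is attached with the AWS CLI (shown here for nuruls3):
aws s3api get-bucket-policy --bucket nuruls3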
3. Upload Files
In this case, I use simple HTML that displays an image for the index and error pages (you can use your own files if you have already prepared them).
- Index file
As mentioned above, the website displays an image. The image is an object named s3web.png that I'll upload to each bucket, so I can use the "predictable" object URL to reference it. I named the file index.html.
Object format (path to object inside bucket):
http://[bucketname].s3.[region].amazonaws.com/[objectname]
Index for nuruls3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>nuruls3</title>
</head>
<body>
<p align="center"><img src="http://nuruls3.s3.ap-southeast-3.amazonaws.com/s3web.png" width="50%"></p>
</body>
</html>
Index for dhonas3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>dhonas3</title>
</head>
<body>
<p align="center"><img src="http://dhonas3.s3.ap-southeast-3.amazonaws.com/s3web.png" width="50%"></p>
</body>
</html>
Don't forget to provide an image for the index file, named s3web.png. You can use the image below as an example.
[Image: s3web.png]
I divided the upload into two tasks. The first uploads all the files inside each folder but excludes two kinds of files from each bucket.
Why exclude them? The first is 404.html, because I'll upload it under a key prefix instead (just to show that we can do this with Ansible; more on that below). The second is the JSON policy document, which isn't part of the website; it only lives in the same folder, so it must not be uploaded to the bucket.
- name: upload object to bucket
  community.aws.s3_sync:
    bucket: "{{ item.bucket }}"
    file_root: "{{ item.src }}"
    permission: public-read
    include: "*"
    exclude: "404.html,*.json"
  loop:
    - { bucket: "nuruls3", src: "nuruls3" }
    - { bucket: "dhonas3", src: "dhonas3" }
- Error file
Just like the home page, the error page displays an image, this time named error.png. I named the file 404.html, and it looks like this:
Error page for nuruls3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>nuruls3</title>
</head>
<body>
<p align="center"><img src="http://nuruls3.s3.ap-southeast-3.amazonaws.com/error.png" width="50%"></p>
</body>
</html>
Error page for dhonas3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>dhonas3</title>
</head>
<body>
<p align="center"><img src="http://dhonas3.s3.ap-southeast-3.amazonaws.com/error.png" width="50%"></p>
</body>
</html>
Don't forget to provide an image for the error file, named error.png. You can use the image below as an example.
[Image: error.png]
Then comes the second upload task: it uploads the error page under a prefix inside the bucket. Use the key_prefix argument to specify the prefix name.
- name: upload object to bucket with specific key prefix
  community.aws.s3_sync:
    bucket: "{{ item.bucket }}"
    file_root: "{{ item.src }}"
    permission: public-read
    key_prefix: "{{ item.dst }}"
  loop:
    - { bucket: "nuruls3", src: "nuruls3/404.html", dst: "error" }
    - { bucket: "dhonas3", src: "dhonas3/404.html", dst: "error" }
Note: At the end of April 2023, Amazon changed the default Object Ownership setting for new buckets to BucketOwnerEnforced, so please remove the permission: public-read argument, because ACLs no longer affect access permissions.
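If you still want ACL-based permissions like public-read to work on a new bucket, one option (a sketch, assuming an amazon.aws collection version that exposes the object_ownership option on s3_bucket) is to switch the bucket back to an ACL-enabled ownership setting before syncing:
- name: re-enable ACLs on bucket (optional)
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: present
    object_ownership: ObjectWriter   # the default for new buckets is BucketOwnerEnforced, which disables ACLs
  loop: [nuruls3,dhonas3]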
4. Enable Static Website
And last but not least, we have to enable static website hosting on the buckets. You specify the home page with the suffix argument and the error page with the error_key argument.
- name: enable static website
  community.aws.s3_website:
    name: "{{ item }}"
    suffix: index.html
    error_key: error/404.html
    state: present
  loop: [nuruls3,dhonas3]
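Once this task runs, each site is served from the bucket's website endpoint, which also follows a predictable format (note that some older regions use a dash instead of a dot before the region; check the console if in doubt):
Website endpoint format:
http://[bucketname].s3-website.[region].amazonaws.com
So for this article, the home pages should be at http://nuruls3.s3-website.ap-southeast-3.amazonaws.com and http://dhonas3.s3-website.ap-southeast-3.amazonaws.com.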
Now, let's run the playbook. Hit the Enter key! (This is what I meant by one hit in the title :) )
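With the inventory and playbook named as above, the whole run is a single command:
ansible-playbook -i host.yml s3.yml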
Check that the websites are accessible to the public!
[Image: nuruls3 website home page]
[Image: nuruls3 error page]
[Image: dhonas3 website home page]
[Image: dhonas3 error page]
Delete Bucket (Optional)
In case you followed all the steps above as practice and want to delete the buckets so they don't increase your bill :)
Please add the following task to the playbook. We just change the state from present to absent and add the force argument with the value yes to delete all the prefixes and objects inside the buckets.
- name: delete bucket
  amazon.aws.s3_bucket:
    name: "{{ item }}"
    state: absent
    force: yes
  loop: [nuruls3,dhonas3]
  tags: delete_s3
Because I placed it in the same file as the creation tasks (s3.yml), I use a tag when running the deletion. Specify it by adding --tags delete_s3, or -t delete_s3 for short, to the command, as shown below.
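For example:
ansible-playbook -i host.yml s3.yml -t delete_s3
Note that untagged tasks always run by default, so once the delete task is in s3.yml, a plain run of the playbook will also delete the buckets; if you keep both in one file, run the creation tasks with --skip-tags delete_s3.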
That's it! Follow me to get notified when a new post is published. Thank you.