Hosting a static website on Amazon S3 has become a common thing to do. It's a fun and popular hands-on project for learning AWS; AWS even provides a tutorial for it. If you search "s3 static website", you will see a burst of articles on the topic.
Then I thought of an easy, fast, and repeatable way to do it instead of making many clicks through the console. So, I'm aiming at Ansible. What's the advantage? Remember that S3 has "predictable" ARNs as well as URLs for the bucket and objects, including the website endpoints. So, we have all the details and values we need at the beginning, for the whole set of tasks. In one go, just by hitting the Enter key once to run the ansible-playbook command, we can access all the websites as soon as Ansible is done with the tasks.
Prerequisites:
- Install Ansible on localhost.
- Install the AWS collections by simply running ansible-galaxy collection install amazon.aws and ansible-galaxy collection install community.aws. Here we will use three modules: amazon.aws.s3_bucket to create and manage the buckets, community.aws.s3_sync to upload multiple files, and community.aws.s3_website for the website settings.
- Install the AWS CLI and set up the credentials on localhost.
What do we need?
- An Ansible playbook, consisting of an inventory and a YAML file to hold the tasks.
- Policy document for the bucket.
- Any files for the website, such as HTML and so on.
- Static website hosting enabled on the bucket.
The file hierarchy:
(Please make sure your file hierarchy matches the following.)
s3
├── dhonas3
│ ├── 404.html
│ ├── dhonas3-policy.json
│ ├── error.png
│ ├── index.html
│ └── s3web.png
├── host.yml
├── nuruls3
│ ├── 404.html
│ ├── error.png
│ ├── index.html
│ ├── nuruls3-policy.json
│ └── s3web.png
└── s3.yml
1. Ansible Playbook
- Inventory
The inventory uses localhost as the target host. The following inventory is in YAML format; I named the file host.yml.
all:
hosts:
localhost:
- Playbook Task
I named the file s3.yml. I'll split the tasks into several parts (in case you only need to run a specific task later).
- name: s3
hosts: localhost
connection: local
gather_facts: no
tasks:
To create the buckets, please ensure that the bucket names you choose are available, because bucket names must be globally unique. In this case, I use my name followed by s3. Don't forget to set the region as well, because here I'll create two buckets in Indonesia (ap-southeast-3). I also use a loop as a repeatable action to specify the multiple buckets.
- name: create bucket
amazon.aws.s3_bucket:
name: "{{ item }}"
state: present
region: ap-southeast-3
loop: [nuruls3,dhonas3]
Then, we have to make the buckets accessible to the public.
- name: enable public access
amazon.aws.s3_bucket:
name: "{{ item }}"
state: present
public_access:
block_public_policy: false
loop: [nuruls3,dhonas3]
2. Policy Document
We also need to add a policy that grants public read access for the website. See here for more information about website access permissions.
- name: add policy to bucket
amazon.aws.s3_bucket:
name: "{{ item.bucket }}"
policy: "{{ item.policy }}"
loop:
- { bucket: "nuruls3", policy: "{{ lookup('file','nuruls3/nuruls3-policy.json') }}" }
- { bucket: "dhonas3", policy: "{{ lookup('file','dhonas3/dhonas3-policy.json') }}" }
The policy documents should look like this:
Note*: Don't forget to replace the bucket name in the Resource section!
Resource format (path to bucket):
"Resource": "arn:aws:s3:::[bucketname]/*"
Policy for the nuruls3 bucket (named nuruls3-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "StaticWebsite",
"Principal": "*",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::nuruls3/*"
}
]
}
Policy for the dhonas3 bucket (named dhonas3-policy.json):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "StaticWebsite",
"Principal": "*",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::dhonas3/*"
}
]
}
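If you want to sanity-check that the policies were attached (optional, not part of the playbook), the AWS CLI can print them back:

```shell
# Print the policy currently attached to each bucket
aws s3api get-bucket-policy --bucket nuruls3
aws s3api get-bucket-policy --bucket dhonas3
```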
3. Upload Files
In this case, I use simple HTML to display an image for the index and error pages (you can use your own files if you have already prepared them).
- Index file
As I mentioned above, I'll display an image on the website. The image I use is an object named s3web.png that I'll upload to each bucket. So, I'll use the "predictable" object URL to call the image. I named the file index.html.
Object format (path to object inside bucket):
http://[bucketname].s3.[region].amazonaws.com/[objectname]
Index for nuruls3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>nuruls3</title>
</head>
<body>
<p align="center"><img src="http://nuruls3.s3.ap-southeast-3.amazonaws.com/s3web.png" width="50%"></p>
</body>
</html>
Index for dhonas3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>dhonas3</title>
</head>
<body>
<p align="center"><img src="http://dhonas3.s3.ap-southeast-3.amazonaws.com/s3web.png" width="50%"></p>
</body>
</html>
Don't forget to provide an image named s3web.png for the index file. You can use the image below as an example.
For the upload, I divide the work into two tasks. The first task will upload the files inside each folder but exclude two kinds of files from being uploaded to each bucket.
The reason why I exclude those files:
The first one is 404.html, because I'll upload it under a key prefix (I just want to show you that we can do that with Ansible, and I'll explain it below). The second one is the JSON file, which is the policy document; it's not related to the website content, but I keep it in the same folder, so it must be excluded from being uploaded to the bucket.
- name: upload object to bucket
community.aws.s3_sync:
bucket: "{{ item.bucket }}"
file_root: "{{ item.src }}"
permission: public-read
include: "*"
exclude: "404.html,*.json"
loop:
- { bucket: "nuruls3", src: "nuruls3" }
- { bucket: "dhonas3", src: "dhonas3" }
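To verify the sync (optional), you can list the objects in each bucket; 404.html and the JSON policy file should not appear:

```shell
# List the objects uploaded to each bucket
aws s3 ls s3://nuruls3
aws s3 ls s3://dhonas3
```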
- Error file
Just like the home page, I'll display an image on the error page as well. For the error page, I use an image named error.png. I named the file 404.html, and the files look like this:
Error page for nuruls3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>nuruls3</title>
</head>
<body>
<p align="center"><img src="http://nuruls3.s3.ap-southeast-3.amazonaws.com/error.png" width="50%"></p>
</body>
</html>
Error page for dhonas3:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>dhonas3</title>
</head>
<body>
<p align="center"><img src="http://dhonas3.s3.ap-southeast-3.amazonaws.com/error.png" width="50%"></p>
</body>
</html>
Don't forget to provide an image named error.png for the error file. You can use the image below as an example.
Then, for the second upload task, I'll upload the error page to a prefix inside the bucket. Use the key_prefix argument to specify the prefix name.
- name: upload object to bucket with specific key prefix
community.aws.s3_sync:
bucket: "{{ item.bucket }}"
file_root: "{{ item.src }}"
permission: public-read
key_prefix: "{{ item.dst }}"
loop:
- { bucket: "nuruls3", src: "nuruls3/404.html", dst: "error" }
- { bucket: "dhonas3", src: "dhonas3/404.html", dst: "error" }
4. Enable Static Website
And last but not least, we have to enable static website hosting on the buckets. You specify the home page with the suffix argument and the error page with the error_key argument.
- name: enable static website
community.aws.s3_website:
name: "{{ item }}"
suffix: index.html
error_key: error/404.html
state: present
loop: [nuruls3,dhonas3]
Now, let's run the playbook. Hit the Enter key!
(This is the "one hit" I mentioned in the title :) )
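Using the file names from the hierarchy above, the single command looks like this:

```shell
# Run the whole playbook against the host.yml inventory
ansible-playbook -i host.yml s3.yml
```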
Check that the websites are accessible to the public!
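One way to check from the terminal (assuming ap-southeast-3 uses the newer dot-style website endpoint; some older regions use a dash instead, as in s3-website-<region>):

```shell
# A 200 OK response means the home page is being served publicly
curl -I http://nuruls3.s3-website.ap-southeast-3.amazonaws.com
curl -I http://dhonas3.s3-website.ap-southeast-3.amazonaws.com
```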
Delete Bucket (Optional)
In case you have followed all the steps above as practice and want to delete the buckets (because they can increase your bill :) ), please add the following task to the playbook. We just need to change the state from present to absent and add the force argument with the value yes to delete all the prefixes and objects inside the bucket.
- name: delete bucket
amazon.aws.s3_bucket:
name: "{{ item }}"
state: absent
force: yes
loop: [nuruls3,dhonas3]
tags: delete_s3
Because I placed it in the same file as the creation tasks (s3.yml), I'll use a tag when I run the deletion task. We can specify it by appending --tags delete_s3, or -t delete_s3 for short, to the command.
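So the deletion run would look like this:

```shell
# Run only the tasks tagged delete_s3
ansible-playbook -i host.yml s3.yml -t delete_s3
```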
That's it! Follow me to get notified when a new post is published. Thank you.
References:
https://docs.ansible.com/ansible/latest/collections/amazon/aws/s3_bucket_module.html
https://docs.ansible.com/ansible/latest/collections/community/aws/s3_sync_module.html
https://docs.ansible.com/ansible/latest/collections/community/aws/s3_website_module.html