Anna Aitchison

How to Host a Static Website on AWS with HTTPS and CI/CD

Overview

S3 is the obvious place to host a static (frontend code only) website on AWS. It's a simple, serverless way to store and serve files without running or maintaining a server, it scales effortlessly, and it's very inexpensive, with a free tier and a pay-per-request pricing model.

In theory, all you have to do is dump some files in an S3 bucket, set permissions on the bucket to allow public access and static site hosting, and forward your domain to it with a CNAME DNS record. In practice, however, this approach has two issues: S3 buckets by themselves don't support HTTPS, and you need to upload files to S3 manually. This article goes over a slightly more advanced setup with CloudFront for caching and HTTPS, and GitHub Actions for CI/CD.

There are much easier free or virtually free options for hosting static sites, such as GitHub Pages, but if you want control over your infrastructure, a production website, or a bit of AWS experience to show off, this is a great way to go.

Assumptions

This article assumes that you're already set up on AWS, have a domain or subdomain you want to use, and have your code in GitHub.

S3 Bucket

The files will be stored in an S3 bucket. The bucket's name doesn't really matter, but you need to enable static website hosting on the bucket and allow public read access to it.

First, go to the Properties tab on the S3 bucket's page and enable static website hosting. Take note of the bucket's website URL. Next, go to the Permissions tab and click Edit under "Block public access (bucket settings)". Untick all the checkboxes and save the changes. Finally, add the following bucket policy, replacing REPLACE_WITH_BUCKET_ARN with your bucket's ARN.

{
    "Version": "2012-10-17",
    "Id": "Policy1589309574299",
    "Statement": [
        {
            "Sid": "Stmt1589309569196",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "REPLACE_WITH_BUCKET_ARN/*"
        }
    ]
}
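
If you'd rather do the same setup from the command line, it looks roughly like the commands below. This is a sketch rather than part of the console walkthrough: my-site-bucket is a placeholder bucket name, and the bucket policy above is assumed to be saved locally as policy.json.

# Enable static website hosting on the bucket
aws s3 website s3://my-site-bucket/ --index-document index.html --error-document error.html

# Turn off the bucket's "block public access" settings
aws s3api put-public-access-block --bucket my-site-bucket \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# Attach the public-read bucket policy shown above
aws s3api put-bucket-policy --bucket my-site-bucket --policy file://policy.json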

HTTPS Certificate

Create an HTTPS certificate for your domain or subdomain in the AWS Certificate Manager. You must create the certificate in the North Virginia (us-east-1) region for CloudFront to see it, no matter what region you set your CloudFront distribution up in. If you don't have your domain in AWS Route 53, you'll need to verify that you own the domain/subdomain by setting some DNS records on it. As long as the certificate is public, which it has to be for this purpose, it's free to create, store and use.
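
If you prefer the CLI, requesting the certificate looks roughly like this (www.example.com is a placeholder; the --region us-east-1 flag is the important part):

# Request a public, DNS-validated certificate in the us-east-1 region
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS \
    --region us-east-1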

CloudFront

You also need to create a CloudFront web distribution. Most of the settings don't really matter for this to work; here are the ones that do:

  • Origin Domain Name - CloudFront provides a handy dropdown list, but this fills the field in with the S3 bucket's REST API URL, which works but doesn't provide automatic redirects from a folder to index.html and lacks a couple of other convenience features. You'll almost always want to use the bucket's static website URL instead (you'll find it under the bucket's Properties tab; see the example endpoints after this list).
  • Origin Path - Leave blank if you want to use all files in the bucket. Asterisks don't work - they're taken literally.
  • Alternate Domain Names (CNAMEs) - List the domain names that the distribution will be accessed by
  • SSL Certificate - Choose a custom SSL certificate. This choice only becomes active once CloudFront detects an SSL certificate in ACM in the correct region, and it can take some time after the certificate is issued for it to show up.
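
For reference, the two kinds of S3 endpoint look roughly like the lines below (the bucket name and region are placeholders, and in some regions the website endpoint uses a dot instead of a dash before the region):

my-site-bucket.s3-website-us-east-1.amazonaws.com    (static website endpoint - use this one)
my-site-bucket.s3.amazonaws.com                      (REST API endpoint - the dropdown's default)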

DNS

Forward your domain/subdomain to the CloudFront distribution's URL (*.cloudfront.net) with a CNAME DNS entry. If you're not using Route 53, you won't be able to forward the root (apex) domain to CloudFront out of the box, because apex domains can't have CNAME records; Route 53 handles this with alias records, and there are a few free services that'll do the equivalent for you.
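
The record itself looks something like this (hypothetical names; d111111abcdef8.cloudfront.net stands in for your distribution's domain):

www.example.com.    CNAME    d111111abcdef8.cloudfront.net.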

GitHub Actions

GitHub Actions is a simple yet effective CI/CD solution integrated right into GitHub. You can find out more here. It's free for public repos and comes with a decent free allowance for private ones. Most of the work for this step is already done - there are a couple of excellent pre-baked actions. I find that reggionick/s3-deploy works the best for this scenario - it removes old files from the S3 bucket, adds new ones and invalidates the CloudFront cache all in one go. You simply need to take the example workflow from that repo's readme, add, change or remove the build steps, create the needed repository secrets and add the workflow to your repo. You might want to change the trigger so it only runs on pushes to the master branch, and change the folder (the location where the deployable assets are or end up, relative to the repo root).
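
If you'd rather not rely on a third-party action, a workflow that does roughly the same thing with the plain AWS CLI (preinstalled on GitHub-hosted runners) might look like the sketch below. This is not the reggionick/s3-deploy example from that readme; the ./build folder, branch name and action version are assumptions, so adjust them to your project.

# .github/workflows/deploy.yml - a minimal sketch, not the reggionick/s3-deploy example
name: Deploy static site

on:
  push:
    branches: [master]   # assumes your default branch is called "master"

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: ${{ secrets.S3_BUCKET_REGION }}
    steps:
      - uses: actions/checkout@v2

      # Add build steps here if the site needs building, e.g. npm ci && npm run build

      # Upload changed files and delete ones that no longer exist locally.
      # "./build" is a placeholder for wherever your deployable assets end up.
      - name: Sync files to S3
        run: aws s3 sync ./build s3://${{ secrets.S3_BUCKET }} --delete

      # Drop CloudFront's cached copies so the new files are served immediately.
      - name: Invalidate CloudFront cache
        run: >
          aws cloudfront create-invalidation
          --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
          --paths "/*"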

The secrets you need are:

  • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY - AWS credentials. These should ideally belong to a programmatic-access-only IAM user with only the access needed to list, add and remove objects in the specific S3 bucket and to invalidate the cache on the CloudFront distribution (see the example policy after this list).
  • S3_BUCKET - Name of S3 bucket
  • S3_BUCKET_REGION - Region S3 bucket was created in
  • CLOUDFRONT_DISTRIBUTION_ID - ID of CloudFront distribution
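
For reference, a scoped-down IAM policy for that deploy user might look something like the sketch below. REPLACE_WITH_BUCKET_NAME is a placeholder, and you could tighten the CloudFront statement further by using the distribution's ARN instead of "*".

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": "arn:aws:s3:::REPLACE_WITH_BUCKET_NAME/*"
        },
        {
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "*"
        }
    ]
}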

If you add the secrets first and already have code in your repo, the workflow should run successfully as soon as you commit it, and you'll hopefully have a working website.

Top comments (15)

Mark Smith

Nice article, it’s good to have a clear and to the point explainer on how to set a basic static site up on AWS.

Do you happen to know if it’s possible to implement branch previews similar to the feature Netlify offers?

Anna Aitchison

That's an interesting point. Off the top of my head, the easiest way to do it would probably be having a second setup on a subdomain and deploying previews to random paths in the S3 bucket by customizing the action. It's probably worth looking into other ways of handling this though - I'd imagine a good few people have already run into this.

Mark Smith

When you say ‘second setup’, do you mean another GitHub action?

Ideally, pushes to any branch but main would be built and deployed to a randomly generated subdomain.

I’ve got my blog on GitHub Pages at the minute, but I haven’t found a way to do branch previews, so I’m always having to test code on the live site.

Anna Aitchison

I would have a second S3 bucket & CloudFront distribution set up to be accessible from a subdomain of your site - so basically the same setup - and have some logic in the GitHub Actions workflow to deploy branches to random folders in the S3 bucket. Doing a new subdomain for each one would probably be a lot harder, especially if you're not using Route 53 for the domain, as there's no easy way to automate that with most mainstream domain providers.

Mark Smith

So a bit like having a staging and a production environment, then always deploy the non-main branches to staging, and staging environment is always accessible via the same subdomain (e.g. staging.example.com).

Is that what you mean?

I guess that could work as long as each developer had their own separate staging environment.

Anna Aitchison

Yeah, pretty much, though you could go as complicated or as simple as you want.

Mark Smith

Yeah, I think for solo devs it’s probably good enough to have just one staging env.

Though integrating Route 53 would be really awesome, because you could send links to clients for them to test out work in progress, and have multiple preview branches live at the same time.

I’d love to see a simple and clear tutorial on how to use Route 53 in that way. Perhaps you can recommend one?

Warren Parad

I'll do you one better: here's a CFN stack, ready to go, that automatically creates the resources you need to run a site in S3. Just update the files in the v1/ prefix in S3 and the site will be available.

website CFN template

Timo Ernst

If you don't have a custom domain but instead just want to get an HTTPS certificate for your S3 bucket's website endpoint, follow these steps:

  1. Open the CloudFront console: console.aws.amazon.com/cloudfront/
  2. Choose Create Distribution.
  3. Under Web, choose Get Started.
  4. For Origin Domain Name enter your S3 bucket's website endpoint.
  5. For Viewer Protocol Policy, choose HTTP and HTTPS (Note: Choosing HTTPS Only blocks all HTTP requests).

Instructions from: aws.amazon.com/premiumsupport/know...

To find the URL of your static site, open your CloudFront distribution and you should see the domain under "Domain Name". It should look something like 123whatever.cloudfront.net.

You should now be able to access your statically hosted site on the S3 bucket via https://123whatever.cloudfront.net/index.html (replace 123whatever with the subdomain that was given to you).

Note: I noticed that you must specify the filename at the end of the URL, so in this case index.html. If you don't do that, you will get an error. Does anyone know how to fix this, so the site would be available via https://123whatever.cloudfront.net/ without the filename?

Eduardo Stefani Pacheco

Nice article. Another alternative for hosting a static or dynamic site is the new AWS Amplify. It's a plug-and-play system that handles everything from CI to hosting.

Cheers

Tom

Seeing everything go onto AWS is beginning to feel a bit dystopian.

hidden_dude

you could put it on Oracle or MS or Google..

(oh.. wait.. equally dystopian ;) )

Tom

mkorostoff.github.io/1-pixel-wealth/

AWS is what makes Bezos so unfathomably wealthy.

Tom

Scenarios where everyone is using just one platform, a monopoly, are pretty freaky!

Kris M.

Interesting article. A nice addition would be to compare using AWS S3 to GitHub Pages.