
Kaye Alvarado for AWS Community Builders


Building a Static Website in S3 with AWS Code Pipeline

Introduction

This is a write-up of a talk I gave on the same topic for PWA Pilipinas and AWS Siklab Pilipinas on March 20, 2022. I covered a few basic AWS services used to build the most common architecture for a static website, namely: S3, CloudFront, Route53, ACM, and IAM.

...aaaand, just in case you're not yet aware: AWS launched a role-playing game called Cloud Quest on the Skill Builder website, where building a static website is the first task!

What is a Static Website?

Static websites are nothing but a collection of lightweight static files (HTML, CSS, JavaScript, image files) that a host serves to the web browser, or client, accessing the site exactly as they are stored. Whether user1 requests the homepage of the site (index.html) or user2 requests the same page, they both get exactly the same content. It's static (it does not change with conditions), as opposed to a dynamic website, which can differ from user to user.
The basic design of a static website: client browsers (from a computer or a mobile device) access a domain (www.example.com), and behind that domain is a web server holding the static files requested by the client. The files are served over HTTP/HTTPS.
In a cloud architecture, where things can be serverless and the user has no knowledge of the underlying server infrastructure, it would look something like this:
[Diagram: serverless static website architecture on AWS]

  1. Route53 is the DNS service of AWS and is an extremely reliable and cost-effective way to route end users to internet applications. It translates names such as www.example.com into the numeric IP addresses, such as 202.54.44.181, that computers use to talk to each other.
  2. CloudFront, on the other hand, is the content delivery network (CDN) service of AWS. It provides caching capabilities to improve performance: frequently accessed files are stored in edge locations (closer to the end user) so content doesn't have to be retrieved from the backend repeatedly.
  3. AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
  4. Finally, we have S3, an object storage service (AWS managed, meaning you don't handle server maintenance) that provides high availability, scalability, security, and performance. In this architecture we'll deploy a React application, a popular framework for building dynamic web applications, but in this demo we will run its build process to generate static files for deployment.

Building the Infrastructure Step-by-Step

Pre-Requisites

  1. Node
  2. Git
  3. AWS-CLI

Create the React App

  1. Run the following commands to create a working directory and generate a boilerplate React application (see the sketch after these steps). Once done, it will show a confirmation that the app was created.
  2. Start the application locally. It will load in a browser on your localhost. Now you know it works!
  3. To create the static files to deploy, run the build command. You should then see a build folder containing the static files.
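A minimal sketch of the commands for this section, assuming the app is named my-static-site (the actual directory and app names in the original screenshots may differ):

```bash
# Create a working directory and a boilerplate React app
# (my-static-site is an illustrative name)
mkdir static-site-demo && cd static-site-demo
npx create-react-app my-static-site
cd my-static-site

# Start the app locally (serves on http://localhost:3000 by default)
npm start

# Generate the production-ready static files in the build/ folder
npm run build
```
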
Setup the DNS

  4. In the AWS Management Console, go to the Route53 service and look for Register domain. .com is of course the TLD that most sites use and looks the most legitimate, so if the site is for a company or institution, you should always go for a .com.
  5. Pay a minimal fee for privacy protection (this is only needed for .com websites).
  6. Complete your order. When you register a domain with Amazon Route 53 or transfer a domain registration to Route 53, AWS configures the domain to renew automatically. The automatic renewal period is typically one year, although the registries for some top-level domains (TLDs) have longer renewal periods. Your domain will show up under Pending requests first, then move to Registered domains once complete. AWS will send you a confirmation email as well as an email validation request. The verification link will create a hosted zone. A hosted zone is an Amazon Route 53 concept: it is analogous to a traditional DNS zone file and represents a collection of records that can be managed together, belonging to a single parent domain name. All resource record sets within a hosted zone must have the hosted zone's domain name as a suffix. Verifying sends another email, and that completes the domain setup! The domain isn't pointing to anything yet, so it will not load anything when you browse to it. (A quick CLI sketch for checking whether a domain name is available follows below.)
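As a side note, you can check whether a domain name is still available from the CLI before going through the console wizard. A minimal sketch, assuming the AWS CLI is already configured (the Route 53 Domains API is only available in us-east-1):

```bash
# Check if the domain can still be registered (Route 53 Domains lives in us-east-1)
aws route53domains check-domain-availability \
  --region us-east-1 \
  --domain-name girlwhocodes.click
```
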
Create an S3 Bucket

  7. In the AWS Management Console, go to S3 and create two buckets named www.girlwhocodes.click and girlwhocodes.click. Create the first one without www and keep everything at the defaults. Once created, follow the same steps for the www bucket. We should now have two buckets (a CLI sketch of the equivalent commands follows this step).
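For reference, the same two buckets could be created from the AWS CLI. A minimal sketch, assuming the buckets live in us-east-1 and the names are still free (bucket names are globally unique):

```bash
# Create the apex and www buckets (names must match the domain for S3 website hosting)
aws s3 mb s3://girlwhocodes.click --region us-east-1
aws s3 mb s3://www.girlwhocodes.click --region us-east-1

# List buckets to confirm both exist
aws s3 ls
```
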
Create an IAM user for AWS-CLI

We need to create an IAM user since the files will be uploaded using the AWS CLI. There's also an option to upload the files directly to S3 via the AWS Management Console, but this demo will use the AWS CLI option.

  8. Go to the IAM service and follow the wizard to create a user. Attach the AdministratorAccess policy to keep it simple (tags are optional), then complete the wizard.
  9. Set up the user on your local machine by configuring the AWS Access Key ID and AWS Secret Access Key of your user.
  10. Test the access by running any AWS CLI command (see the sketch after this step).
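A minimal sketch of steps 9 and 10, assuming the access key pair was downloaded when the user was created:

```bash
# Store the access key, secret key, default region, and output format locally
aws configure

# Any read-only call works as a smoke test; this one returns the caller's identity
aws sts get-caller-identity
```
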
Upload the Static Files to S3

  11. Run the s3 sync command to copy your local React build files to the S3 bucket (a sketch follows these steps).
  12. You can then verify in the AWS Management Console that the files were uploaded.
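A minimal sketch of the sync, assuming the www bucket hosts the site files (in step 15 the non-www bucket is set to redirect) and the command is run from the React project root where the build folder was generated:

```bash
# Copy the generated static files into the bucket that will host the site
aws s3 sync build/ s3://www.girlwhocodes.click

# Optionally list the bucket contents to confirm the upload
aws s3 ls s3://www.girlwhocodes.click --recursive
```
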
Update the S3 Bucket Permissions and Enable Static Website Hosting

By default, AWS blocks public access to your S3 bucket, and AWS recommends that you keep all public access to your buckets blocked.

  13. For simplicity, we'll make the S3 bucket publicly accessible. The update will prompt for a confirmation that this is what you want to do. Tada!
  14. To make the objects in your bucket publicly readable, write a bucket policy that grants everyone s3:GetObject permission (a hedged example policy appears after these steps).
  15. Enable the bucket for static website hosting (choose Enable). In Index document, enter the filename of the home page, typically index.html. Note: for the non-www bucket, since we don't want to maintain two copies of the files, select Redirect requests for an object instead.
  16. To quickly test this, click on the index.html file in your bucket. In the Properties section, you can see an object endpoint; this is the Amazon S3 website endpoint for your bucket object. Clicking on the endpoint will load the page in a browser.
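A minimal sketch of steps 14 and 15 from the CLI, assuming the site files live in the www bucket and the policy is saved to a local file named policy.json (file name is illustrative):

```bash
# policy.json: grant everyone read access to the objects in the www bucket
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.girlwhocodes.click/*"
    }
  ]
}
EOF

# Attach the policy and enable static website hosting with index.html as the index document
aws s3api put-bucket-policy --bucket www.girlwhocodes.click --policy file://policy.json
aws s3 website s3://www.girlwhocodes.click/ --index-document index.html

# Make the non-www bucket redirect all requests to the www bucket
aws s3api put-bucket-website --bucket girlwhocodes.click \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.girlwhocodes.click"}}'
```
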
More Domain Configurations!

  17. Add an A record pointing to S3. Click on the hosted zone. By default, you should have an NS and an SOA record. An NS (name server) record indicates which DNS server is authoritative for the domain (i.e., which server contains the actual DNS records); basically, NS records tell the internet where to go to find a domain's IP address. An SOA (start of authority) record stores important information about a domain or zone, such as the email address of the administrator, when the domain was last updated, and how long the server should wait between refreshes. All DNS zones need an SOA record in order to conform to IETF standards. Proceed to create the www "A record" and route it to S3.
  18. Add another "A record" for the non-www domain. Once done, you should have both records in the hosted zone. Since your domain now points to S3, the site should load when viewed from a browser; note that the Server response header shows the website is served from AmazonS3. Since this website is not yet secure, we also need to set up CloudFront and attach a TLS certificate. A hedged CLI sketch of the alias record is shown after this step.
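For reference, the same alias A record can be created with the CLI. A minimal sketch, assuming the change batch is saved as www-alias.json; HOSTED_ZONE_ID is your hosted zone's ID, and the AliasTarget HostedZoneId is the fixed zone ID of the S3 website endpoint for your bucket's region (look it up in the Amazon S3 endpoints documentation):

```bash
# Find your hosted zone ID
aws route53 list-hosted-zones

# www-alias.json: alias the www record to the S3 website endpoint
cat > www-alias.json <<'EOF'
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.girlwhocodes.click",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "S3_WEBSITE_ENDPOINT_ZONE_ID",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id HOSTED_ZONE_ID \
  --change-batch file://www-alias.json
```
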
Request a Public Certificate in ACM

  19. Go to the ACM service. Make sure to request or import the certificate in the US East (N. Virginia) region, since that is the region CloudFront reads certificates from.
  20. Follow the wizard to request a public certificate. Add both the www and non-www names under Fully qualified domain names.
  21. Choose DNS validation as the validation method because it is easier. Tags can be skipped; then click Request. It will bring you to a page showing the certificate as pending validation. Click on Create records. A canonical name (CNAME) record is used in lieu of an A record when a domain or subdomain is an alias of another domain; it is used in the Domain Name System (DNS) to create an alias from one domain name to another.
  22. From ACM, you can simply click Create records in Route53 to add the CNAME records automatically without having to copy and paste them. If you open Route53, those records should have been added. In ACM, the certificate state should change from Pending validation to Issued (a CLI sketch of the certificate request is shown after this step).
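A minimal sketch of the equivalent certificate request from the CLI; the certificate must live in us-east-1 for CloudFront to use it:

```bash
# Request a public certificate covering both the non-www and www names
aws acm request-certificate \
  --region us-east-1 \
  --domain-name girlwhocodes.click \
  --subject-alternative-names www.girlwhocodes.click \
  --validation-method DNS

# Check the validation status (and the CNAME records to create) once requested
aws acm describe-certificate --region us-east-1 --certificate-arn <certificate-arn>
```
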
Setup CloudFront

  23. Go to the AWS Console and click on CloudFront.
  24. Add the details in the wizard. The origin is the source of the distribution, so add the S3 path there. Note that we need two distributions (one for www and one for non-www).
  25. Add the CNAME (alternate domain name) record.
  26. In the same wizard, add the ACM certificate. Also, select Redirect HTTP to HTTPS. Everything else can stay at the defaults, so click Create distribution. Do the same steps for the non-www domain. If you reload the page in a browser, you'll see that content is still served from S3, but there's a response header showing it now goes via a CloudFront distribution.
  27. Update the Route53 www and non-www records to point to CloudFront instead. When you reload the page, it will now show as secure. If you change any files in S3 and the changes don't load in the browser, it's because CloudFront is serving cached content; invalidate it to force it to pull from S3 again (see the sketch below).
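A minimal sketch of an invalidation from the CLI, assuming DISTRIBUTION_ID is the ID of the distribution in front of the bucket:

```bash
# Invalidate everything so the next request is fetched fresh from S3
aws cloudfront create-invalidation \
  --distribution-id DISTRIBUTION_ID \
  --paths "/*"
```
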

Create a Deployment Pipeline (Extra!)

As a DevOps Engineer, I have to talk about pipelines, so let's explore CodePipeline, another AWS service.

  1. Create a pipeline in CodePipeline. Go through the wizard mostly with the defaults and create a new service role.
  2. For the source stage, since I've set up my code in GitHub, I will select it as my source.
  3. We also need to set up a connection to GitHub. Select GitHub as the provider, add a name for the connection, and click Connect. There is a one-time setup to verify the connection: select the repositories that you want to allow AWS access to, then click Connect to complete the wizard.
  4. Proceed through the other steps of the wizard. Note: I skipped the build stage for now since I am building the files locally.
  5. In the deploy stage, select Amazon S3 as the provider.
  6. Review the pipeline in the final step, then create it! The pipeline will be created, but it will show up with red marks. If you click the error, it shows the reason: the S3 bucket does not allow ACLs yet. This can be changed in S3 via the Management Console (a hedged CLI alternative is shown below), which clears up the red marks in the pipeline.
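If you prefer the CLI, the same setting can be flipped by changing the bucket's object ownership so that ACLs are allowed again. A minimal sketch, assuming the www bucket is the deploy target:

```bash
# Switch object ownership to ObjectWriter, which re-enables ACLs on the bucket
aws s3api put-bucket-ownership-controls \
  --bucket www.girlwhocodes.click \
  --ownership-controls 'Rules=[{ObjectOwnership=ObjectWriter}]'
```
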

Test the Pipeline

  1. Update the files of your static website with something as simple as changing a paragraph.
  2. Test the change locally.
  3. Run npm run build to create new static files.
  4. Push the code to GitHub.
  5. The pipeline should get triggered by the push.
  6. The files will be uploaded to S3.
  7. Invalidate the distribution cache again to see the changes reflected (a sketch of the full loop is shown after this list). This completes the static website architecture with a pipeline for continuous deployment!
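Putting the test loop together, a minimal sketch assuming the repository remote is already configured, the default branch is main, and DISTRIBUTION_ID is your distribution's ID (all illustrative):

```bash
# Rebuild the static files after editing the source
npm run build

# Commit and push; the push triggers the pipeline, which deploys to S3
git add .
git commit -m "Update homepage paragraph"
git push origin main

# Invalidate the CloudFront cache so the new files are served immediately
aws cloudfront create-invalidation --distribution-id DISTRIBUTION_ID --paths "/*"
```
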

There is a Part 2 of this article where the manual build-up of this architecture is done with Terraform code. Follow the link if you want to check that out!
