Introduction
This is a write-up of a talk I did on the same topic for PWA Pilipinas and AWS Siklab Pilipinas last March 20, 2022. I talked about a few basic AWS services used to build the most common architecture for a static website, namely S3, CloudFront, Route53, ACM, and IAM.
...aaaand, just in case you're still not aware: AWS launched a role-playing game called Cloud Quest on the Skill Builder website, where building a static website is the first task!
What is a Static Website?
Static websites are nothing but a collection of lightweight static files (these can be HTML, CSS, JavaScript, or image files), and these are served by a host to the web browser or client accessing the site, exactly as they are stored. Whether user1 accesses the homepage of the site (index.html) or user2 accesses the same page, they will get exactly the same content. It’s static (it does not change with conditions), as opposed to a dynamic website, which can differ from user to user.
The basic design of a static website: client browsers (whether on a computer or a mobile device) access a domain (www.example.com), and behind this domain is a web server holding the static files requested by the client. The content is served via HTTP/HTTPS.
In a cloud architecture, where things can be serverless and the user has no knowledge of the underlying server infrastructure, it would look something like this:
- Route53 is the DNS service of AWS and is an extremely reliable and cost-effective way to route end users to internet apps. It can translate names such as www.example.com into the numeric IP addresses such as 202.54.44.181 that computers use to talk to each other.
- CloudFront is the content delivery network (CDN) service of AWS. It provides caching capabilities to improve performance: it stores frequently accessed files in edge locations (which are closer to the end user), so the content doesn’t have to be retrieved from the backend repeatedly.
- AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
- We also have S3. It’s an object storage service (it’s AWS managed, meaning you don’t handle server maintenance) that provides high availability, scalability, security, and performance. In this architecture, we'll deploy a React application. React is a popular library for building dynamic web applications, but in this demo we will run its build process to generate static files for deployment.
Building the Infrastructure Step-by-Step
Pre-Requisites
- Node
- Git
- AWS-CLI
Create the React App
- Run the following commands to create a working directory and a boilerplate React application. Once done, you'll see a confirmation in the terminal.
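If you are following along, the commands would look roughly like this (the directory and app names below are just placeholders, not the ones from the talk):

```
# create a working directory (name is just an example)
mkdir static-website-demo && cd static-website-demo

# generate a boilerplate React application
npx create-react-app my-app
cd my-app
```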
- Start the application locally (command below). It will load up in a browser on your localhost. Now you know it works!
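For a create-react-app project, starting the local dev server is simply:

```
# start the local development server (defaults to http://localhost:3000)
npm start
```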
- To create the static files to deploy, run the build command (sketched below). You should then see a build folder containing the static files.
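Assuming the app was generated with create-react-app, that command is:

```
# generate optimized static files into the build/ folder
npm run build
```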
Set up the DNS
- In the AWS Management Console, go to the Route53 service and look for Register domain. .com is of course the TLD most sites use and the one that looks most legitimate, so if the site is for a company or institution, you should always go for a .com.
- Pay a minimal fee for privacy protection (this is only needed for .com websites)
- Complete your order. When you register a domain with Amazon Route 53 or transfer a domain registration to Route 53, AWS configures the domain to renew automatically. The automatic renewal period is typically one year, although the registries for some top-level domains (TLDs) have longer renewal periods. Your domain should show up under Pending requests first, then move to Registered domains once complete. AWS will send you a confirmation email, as well as an email validation; the verification link will create a hosted zone. A hosted zone is an Amazon Route 53 concept: it is analogous to a traditional DNS zone file and represents a collection of records that can be managed together, belonging to a single parent domain name. All resource record sets within a hosted zone must have the hosted zone's domain name as a suffix. Verifying will send another email, and that completes the domain setup! It is not pointing to anything yet, so it will not load anything when you browse it.
Create an S3 Bucket
- In the AWS Management Console, go to S3 and create two buckets, namely www.girlwhocodes.click and girlwhocodes.click. Create the first one without www and keep everything as default. Once created, do the same steps for www. We should now have two buckets.
Create an IAM user for AWS-CLI
We need to create an IAM user since the files will be uploaded using AWS-CLI. There's also an option to upload the files directly to S3 via the AWS Management Console, but this will demo the AWS-CLI option.
- Go to the IAM service. Follow the wizard to create a user. Attach the AdministratorAccess policy to keep it simple. Tags are optional. Complete the wizard...
- Set up the user on your local machine by configuring the AWS Access Key ID and AWS Secret Access Key of your user.
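This is done with aws configure; the values below are placeholders, and the region is just an example:

```
aws configure
# AWS Access Key ID [None]: <your access key id>
# AWS Secret Access Key [None]: <your secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json
```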
- Test out the access by running any AWS-CLI command (a couple of examples are sketched below).
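For instance, either of these is a quick way to confirm the credentials work:

```
# shows the account and user the CLI is authenticated as
aws sts get-caller-identity

# lists the buckets the user can see
aws s3 ls
```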
Upload the Static Files to S3
- Run the s3 sync command to copy your local React build files to the S3 bucket.
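Using the bucket from this walkthrough, the sync would look roughly like this (run it from the React project folder):

```
# copy the contents of the local build/ folder to the S3 bucket
aws s3 sync build/ s3://www.girlwhocodes.click
```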
- You can then verify in the AWS Management Console that the files were uploaded.
Update the S3 Bucket Permissions and Enable Static Website Hosting
By default, AWS blocks public access to your S3 bucket, and AWS recommends that you block all public access to your buckets.
- For simplicity, we'll keep the S3 bucket publicly accessible. The update will prompt for a confirmation that this is what you want to do. Tada!
- To make the objects in your bucket publicly readable, write a bucket policy that grants everyone s3:GetObject permission.
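A minimal sketch of such a policy, applied via the AWS CLI (you can also paste the JSON into the console's Bucket policy editor); the bucket name follows this walkthrough:

```
aws s3api put-bucket-policy --bucket www.girlwhocodes.click --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::www.girlwhocodes.click/*"
  }]
}'
```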
- Enable the bucket for static website hosting (choose Enable). In Index document, enter the filename of the home page, typically index.html. Note: for the non-www bucket, since we don't want to maintain two copies of the files, select Redirect requests for an object instead.
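If you prefer the CLI over the console, the equivalent configuration would look roughly like this:

```
# enable static website hosting on the www bucket
aws s3 website s3://www.girlwhocodes.click --index-document index.html

# configure the non-www bucket to redirect all requests to the www bucket
aws s3api put-bucket-website --bucket girlwhocodes.click \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"www.girlwhocodes.click"}}'
```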
- To quickly test this, click on the index.html file in your bucket. In the Properties section, you can see an object endpoint; this is the Amazon S3 website endpoint for your bucket object. Clicking on the endpoint will load up the page in a browser.
More Domain Configurations!
- Add an A record pointing to S3. Click on the hosted zone. By default, you should have an NS and an SOA record. An NS (name server) record indicates which DNS server is authoritative for that domain (i.e. which server contains the actual DNS records); basically, NS records tell the internet where to go to find out a domain's IP address. An SOA (start of authority) record stores important information about a domain or zone, such as the email address of the administrator, when the domain was last updated, and how long the server should wait between refreshes. All DNS zones need an SOA record in order to conform to IETF standards. Proceed to create the www "A record" and route this to S3.
- Add another "A record" for the non-www domain Once done, you should now have the following records Since your domain now points to S3, it should already load once viewed from a browser You should note that this website is served from AmazonS3 when you view the response headers for Server. Since this website is unsecure, we also need to setup CloudFront and attach a TLS certificate. Request a Public Certificate in ACM
- Go to the ACM service. Make sure to request or import the certificate in the US East (N. Virginia) region, since CloudFront can only use ACM certificates issued in that region.
- Follow the wizard to request a public certificate. Add both the www and non-www domains in the fully qualified domain names.
- Choose DNS validation as the validation method because it is easier. Tags can be skipped, then click on Request. It will bring you to a page showing Pending validation. Click on Create records. A canonical name (CNAME) record is used in lieu of an A record when a domain or subdomain is an alias of another domain; it is used in the Domain Name System (DNS) to create an alias from one domain name to another.
- From ACM, you can simply click on Create records in Route53 to automatically add the CNAME records without having to copy and paste them there. If you open Route53, those records should have been added. In ACM, the certificate state should change from Pending validation to Issued.
Set up CloudFront
- Go to AWS Console and click on CloudFront
- Add the details in the wizard. Origin is the source of the distribution, so add the S3 path there. Note that we need two distributions (one for www and one for non-www).
- Add the alternate domain name (CNAME)
- In the same wizard, add the ACM certificate. Also, select Redirect HTTP to HTTPS. Everything else should be default, so click on Create distribution. Do the same steps for the non-www domain. If you reload the page in a browser, you'll see that the content is still being served by S3, but there's a response header saying it now goes through a CloudFront distribution.
- Update the Route53 www and non-www records to now point to CloudFront. When you reload the page, it will now show as secure. If you change any files in S3 and the changes don't load in the browser, it's because CloudFront is serving cached content. Invalidate it to force CloudFront to pull again from S3.
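An invalidation can be created from the CloudFront console, or via the CLI roughly like this (the distribution ID is a placeholder):

```
# invalidate every cached path so CloudFront fetches fresh content from S3
aws cloudfront create-invalidation --distribution-id <YOUR_DISTRIBUTION_ID> --paths "/*"
```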
Create a Deployment Pipeline (Extra!)
As a DevOps Engineer, I have to talk more about pipelines, so we'll explore CodePipeline, which is another AWS service.
- Create a pipeline in CodePipeline. Go through the wizard mostly with defaults. Also create a new service role. Select the following options.
- For the source stage, since I've set up my code in GitHub, I will set it as my source.
- We also need to set up a connection to GitHub. Select GitHub here, add a name to the connection, and click on Connect. There is a one-time setup to verify the connection: select the repositories that you want to allow AWS access to, and it should set up a code for you. Then click on Connect to complete the wizard.
- Proceed to the other steps of the wizard. Note: I skipped the build stage for now since I am building the files locally.
- In the deploy stage, select Amazon S3 as the provider
- Review the pipeline in the final step, then create! The pipeline will be created, but will show up with red marks. If you click the error, it shows the reason: the S3 bucket does not allow ACLs yet. This can be edited in S3 via the Management Console, which clears up the red marks in the pipeline.
Test the Pipeline
- Update the files of your static website with something as simple as changing a paragraph
- Test the change locally
- Run npm run build to create new static files
- Push the code to GitHub
- The pipeline should get triggered with the push action
- The files will be uploaded to S3
- Invalidate the distribution cache again to see the changes reflected. This now completes the static website architecture with a pipeline for continuous deployment!
There is a Part 2 of this article where the manual build-up of this architecture is done with Terraform code. Follow the link if you want to check that out!