Cory M


Cloud Resume Challenge Summary

About the challenge

This project was completed in accordance with the Cloud Resume Challenge, created by @forrestbrazeal.

Certification

The first requirement of this challenge is to be AWS Cloud Practitioner certified, which I achieved in April 2022. After completing this certification, I began studying for the AWS Certified Solutions Architect exam, which led me to discover this challenge.

AWS Cloud Practitioner

Links to GitHub Repositories

Backend
Frontend

Contents:

1 - Website Architecture
2 - Why Use Serverless?
3 - Infrastructure as Code
4 - Source Control
5 - Continuous Integration/Continuous Delivery
6 - Lambda Function in Python
7 - Building the Frontend
8 - Creating a Test
9 - Conclusion

1 - Website Architecture

Cloud Resume Challenge diagram

The website architecture is designed as follows:

  1. A user requests the webpage via their browser.
  2. The browser queries Route 53, which resolves the domain name to the CloudFront distribution, directing the request to the nearest edge location.
  3. CloudFront forwards the request to the S3 bucket containing the website's static files and retrieves its contents.
  4. The S3 bucket containing the frontend code is protected with an Origin Access Identity (OAI), preventing any access that is not routed through CloudFront.
  5. The website runs JavaScript that sends a GET request to API Gateway to retrieve the current visitor count stored in DynamoDB (see the sketch after this list).
  6. API Gateway forwards this request to Lambda as JSON.
  7. Lambda identifies the request type, then performs a get/put operation on DynamoDB to increment the stored visitor count and retrieve the new value.
  8. The returned visitor count is then processed and displayed on the website.
  9. GitHub repositories for both the frontend and backend code provide version control, with CI/CD through GitHub Actions.
  10. Terraform deploys the AWS infrastructure as detailed in the backend repository.
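To make steps 5 through 8 concrete, here is a minimal sketch of the round trip the browser's JavaScript performs, written in Python with the requests library purely for illustration. The endpoint URL and the "count" response key are hypothetical placeholders; the live site makes the same call with JavaScript's fetch.

```python
import requests

# Hypothetical API Gateway endpoint; the real URL is project-specific.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/visitors"

# Step 5: send a GET request to API Gateway.
response = requests.get(API_URL, timeout=5)
response.raise_for_status()

# Steps 6-8: API Gateway hands the request to Lambda as JSON, Lambda
# increments the DynamoDB counter, and the new count comes back in the
# JSON response body ("count" is an assumed key name).
count = response.json()["count"]
print(f"Visitor count: {count}")
```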

2 - Why Use Serverless?

Before I can explain why serverless was used, it is important to explain what serverless is. Serverless is not a new concept in computing; it is essentially how bygone mainframes and terminals worked. It's like that jacket you still have from high school: it's so old that it has come back into fashion. Serverless computing is a development model in which code executes on servers that have been abstracted away from the user, meaning that the management and operation of those servers is handled externally to the developers and users. An analogy is a copy center: you don't own the printers, copiers, or fax machines; you simply pay for what you use to finish your task.

The reasons to use serverless are a combination of cost, scalability, availability, and performance. As the resources to execute code are not provisioned prior to use, you pay only for the compute time actually consumed. Amazon manages the provisioning, operation, and scaling needed to execute the workload and meet demand. In my case, the custom domain costs $12 annually, Route 53 costs $0.50 per month, and KMS costs $0.29 per month, while Lambda and DynamoDB remain far below the usage levels that would incur fees, for a total of roughly $1.79 per month ($12/12 + $0.50 + $0.29).

3 - Infrastructure as Code

Infrastructure as Code is the use of definition files to provision and manage AWS infrastructure, instead of manually building and configuring it through the console or CLI. I chose Terraform over AWS's SAM tool because Terraform is platform-agnostic and lets me integrate additional technologies such as Kubernetes and virtual machines.

Before getting started it is important to take care of the following aspects:

  1. Security: Terraform needs permission to deploy infrastructure on AWS. A user was created with access only to STS (Security Token Service), and a role was created with the IAM permissions needed to act on the required AWS services (illustrated after this list).
  2. Terraform State: A remote backend was configured using an S3 bucket to store the Terraform state file, with DynamoDB storing the state lock. This configuration ensures that the infrastructure declared in the .tf files matches the infrastructure actually deployed.
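As a rough illustration of the security model in point 1, the deploy user's only permitted action is an STS call like the one below; every real permission lives on the assumed role. The role ARN and session name here are hypothetical placeholders.

```python
import boto3

sts = boto3.client("sts")

# The deploy user may only call STS; all deployment permissions are
# attached to the role being assumed (the ARN is a placeholder).
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/terraform-deploy",
    RoleSessionName="terraform-deploy",
)["Credentials"]

# Temporary credentials that Terraform (or boto3) can then use to deploy.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```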

The following AWS infrastructure was deployed via Terraform:

  1. A private S3 bucket hosting the website frontend code.
  2. A DynamoDB table with on-demand capacity and a primary key ID, seeded with a default item that stores the visitor count (see the sketch after this list).
  3. A Lambda function using the Python runtime, with its code uploaded from GitHub.
  4. An API Gateway configured as a Lambda integration, which forwards GET requests from a CORS-compliant source to Lambda as JSON; Lambda responds with JSON.
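The table in point 2 is declared in Terraform in the backend repository; purely as an illustration of the same design, here is a boto3 sketch of an on-demand table with a seeded counter item. The table, key, and attribute names are assumptions.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST) table with a simple primary key.
# Table and attribute names are assumed for illustration.
dynamodb.create_table(
    TableName="visitor-count",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="visitor-count")

# Seed the default item that stores the visitor count.
dynamodb.put_item(
    TableName="visitor-count",
    Item={"id": {"S": "visitors"}, "count": {"N": "0"}},
)
```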

4 - Source Control

I chose to use GitHub to maintain control of the frontend and backend repositories. GitHub allows all additions, revisions, and deployments to be tracked and reviewed.

5 - Continuous Integration/Continuous Delivery

Continuous Integration/Continuous Delivery (CI/CD) is the practice of automating and continually monitoring the application lifecycle, from integration and testing through delivery. To achieve this, I used GitHub Actions, which defines the process in YAML files. Any commit pushed to the GitHub repositories triggers a workflow that deploys new infrastructure or modifies the existing design.
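The workflows themselves are YAML, but the frontend deploy step essentially amounts to the following, sketched here in Python with boto3; the bucket name and local directory are hypothetical placeholders.

```python
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
SITE_DIR = Path("frontend")    # hypothetical local directory
BUCKET = "resume-site-bucket"  # hypothetical bucket name

# Upload every file in the site directory, preserving relative paths and
# guessing Content-Type so browsers render the files correctly.
for path in SITE_DIR.rglob("*"):
    if path.is_file():
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            str(path),
            BUCKET,
            path.relative_to(SITE_DIR).as_posix(),
            ExtraArgs={"ContentType": content_type or "binary/octet-stream"},
        )
```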

6 - Lambda Function in Python

The Lambda function was written in Python. The code uses the Boto3 SDK to interact with DynamoDB, specifically to read the visitor-count table and incrementally increase the stored value. For Lambda to access DynamoDB, IAM role permissions needed to be assigned. To secure access to the Lambda function, Cross-Origin Resource Sharing (CORS) requires that GET requests originate from the website. While I had some initial difficulties getting the CORS Access-Control-Allow-Origin headers to function correctly, using * was not an option, as it would allow any origin to call the API and create a significant security risk.
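My actual function lives in the backend repository; what follows is a minimal sketch of the same pattern, assuming hypothetical table, key, and attribute names and a placeholder origin.

```python
import json

import boto3

# Table, key, and attribute names are assumptions for illustration.
table = boto3.resource("dynamodb").Table("visitor-count")

def lambda_handler(event, context):
    # Atomically increment the stored count and read back the new value.
    result = table.update_item(
        Key={"id": "visitors"},
        UpdateExpression="ADD #c :inc",
        ExpressionAttributeNames={"#c": "count"},  # "count" is a reserved word
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(result["Attributes"]["count"])

    return {
        "statusCode": 200,
        "headers": {
            # Restrict CORS to the site's origin rather than using *.
            "Access-Control-Allow-Origin": "https://example.com",  # placeholder
        },
        "body": json.dumps({"count": count}),
    }
```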

7 - Building the Frontend

The frontend was built with HTML, CSS, and JavaScript. The files are stored in an S3 bucket and are deployed from the GitHub repository.

8 - Creating a Test

To test the functionality of the API, I wrote a Cypress test. This test sends the API a GET request and checks that the response has a 200 status code and the expected response type.
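The actual test is written in Cypress; as a hedged Python analogue of the same check (the endpoint URL is a placeholder), it would look roughly like this and could be run with pytest:

```python
import requests

# Hypothetical endpoint; the real URL is project-specific.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/visitors"

def test_visitor_count_endpoint():
    response = requests.get(API_URL, timeout=5)

    # The API should answer with HTTP 200 and a JSON body.
    assert response.status_code == 200
    assert response.headers["Content-Type"].startswith("application/json")
```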

9 - Conclusion

I found this challenge to be a great experience. There were many opportunities to learn how different AWS services interact, how to describe my design in Terraform, and how to implement CORS. I am deeply thankful to @forrestbrazeal for creating this wonderful challenge. It has definitely improved my understanding of a myriad of AWS services and given me direct experience with Terraform and CI/CD.
