Taha Yağız Güler

Cloud Resume Challenge (AWS)

The Cloud Resume Challenge is a project that helps you learn the cloud and some essential tools along the way. You can find the outline of the project here: Cloud Resume Challenge.

In this blog post, I will walk through the steps I completed and the challenges I faced during the Cloud Resume Challenge. The project made me much more comfortable using AWS services with Terraform and taught me how the services work together, along with plenty of smaller lessons along the way.

You can see the final result here, and the code is on GitHub.

You can complete this challenge within the AWS Free Tier. The only thing you need to buy is a domain name, and if you are a student there are ways to get one for free; I cover how in the Route 53 section below.

Project Diagram

Now, let's take a look at which stages we need to complete.

Challenge Steps

  • Build a website in HTML/CSS.

This step is straightforward: create a resume page for your website using plain HTML and CSS.

  • Host website with S3 Bucket.

I built the infrastructure with Terraform, but you can use AWS SAM if you prefer.

I created an S3 bucket, set up the necessary CORS rules, and finally made sure all of the website objects were uploaded to the bucket in one go.

resource "aws_s3_bucket" "cloud-resume-bucket" {
  bucket = var.bucket_name
  acl    = "public-read"
  policy = file("website/policy.json")

  website {
    index_document = "index.html"
    error_document = "error.html"
  }
}


resource "aws_s3_bucket_cors_configuration" "s3_bucket_cors" {
  bucket = aws_s3_bucket.cloud-resume-bucket.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "POST"]
    allowed_origins = ["*"]
    max_age_seconds = 10
  }
}


resource "aws_s3_object" "test" {
  for_each = fileset("${path.module}/html", "**/*.*")
  acl    = "public-read"
  bucket = var.bucket_name
  key    = each.value
  source = "${path.module}/html/${each.value}"
  content_type  = lookup(var.mime_types, split(".", each.value)[length(split(".", each.value)) - 1])
}
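The content_type lookup above relies on a var.mime_types map (its definition isn't shown in this post) that translates a file's extension into a MIME type. As a rough illustration of what that mapping does, Python's standard mimetypes module performs the same extension-to-type guess:

import mimetypes

# The Terraform lookup() above does essentially this: map a file's
# extension to a content type so browsers render the objects
# instead of downloading them.
for name in ["index.html", "style.css", "script.js"]:
    print(name, "->", mimetypes.guess_type(name)[0])
# index.html -> text/html
# style.css -> text/css
# script.js -> text/javascript (exact value varies by platform)
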
  • CloudFront for routing HTTP/S traffic.

The S3 website endpoint only serves plain HTTP, so for security the site should be reached over HTTPS. I handled this with CloudFront.

resource "aws_cloudfront_distribution" "s3_cf" {
  origin {
    domain_name              = "${aws_s3_bucket.cloud-resume-bucket.bucket_regional_domain_name}"
    origin_id                = "${local.s3_origin_id}"
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"


  custom_error_response {
      error_caching_min_ttl = 0
      error_code = 404
      response_code = 200
      response_page_path = "/error.html"
  }

  aliases = [var.domain_name]

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate_validation.acm_val.certificate_arn
    ssl_support_method = "sni-only"
    minimum_protocol_version = "TLSv1.2_2021"
  }

}

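As a quick sanity check that the redirect-to-https policy works, you can send a plain-HTTP request and confirm CloudFront answers with a 301. A minimal sketch using only the standard library, with a placeholder domain:

import http.client

# Placeholder domain; substitute your own var.domain_name.
conn = http.client.HTTPConnection("resume.example.com")
conn.request("GET", "/")
resp = conn.getresponse()

# The redirect-to-https viewer policy should return 301 with a
# Location header pointing at the https:// URL.
print(resp.status, resp.getheader("Location"))
conn.close()
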
  • Route53 for custom DNS.

In this step, I created a hosted zone for my domain name in Route 53, as seen below.
(With the GitHub Student Developer Pack, you can get a domain name for free.)

resource "aws_route53_zone" "main" {
  name = var.domain_name
}

resource "aws_route53_record" "domain" {
  zone_id = "${aws_route53_zone.main.zone_id}"
  name = "${var.domain_name}"
  type = "A"

  alias {
    name = "${aws_cloudfront_distribution.s3_cf.domain_name}"
    zone_id = "${aws_cloudfront_distribution.s3_cf.hosted_zone_id}"
    evaluate_target_health = false
  }
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.cert.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  allow_overwrite = true
  name            = each.value.name
  records         = [each.value.record]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.main.zone_id
}
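Once the alias record is in place, a one-line lookup confirms the domain resolves (to CloudFront edge IPs); the domain below is a placeholder:

import socket

# Placeholder domain; substitute your own var.domain_name.
# The A-record alias should resolve to CloudFront edge addresses.
print(socket.gethostbyname("resume.example.com"))
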
  • Certificate Manager for enabling secure access with SSL Certificate.

I issued the SSL certificate with the ACM service and validated it through DNS. One gotcha: CloudFront only accepts certificates issued in the us-east-1 region, so if your default region is different, the ACM resources need a provider aliased to us-east-1.

resource "aws_acm_certificate" "cert" {
  domain_name       = var.domain_name
  validation_method = "DNS"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_acm_certificate_validation" "acm_val" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}

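After validation succeeds and CloudFront picks up the certificate, you can inspect what is actually served over TLS. A small sketch with the standard library (placeholder domain):

import socket
import ssl

# Placeholder domain; substitute your own var.domain_name.
host = "resume.example.com"

ctx = ssl.create_default_context()
with ctx.wrap_socket(socket.create_connection((host, 443)),
                     server_hostname=host) as tls:
    cert = tls.getpeercert()

# Issuer and expiry of the certificate CloudFront is serving.
print(cert["issuer"])
print(cert["notAfter"])
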
  • DynamoDB for database, storing website visitor count.

I created a DynamoDB table that stores the visitor count so the Lambda function can read and update it.

resource "aws_dynamodb_table" "visiters" {
  name           = var.dynamodb_table
  billing_mode   = "PROVISIONED"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "id"

  attribute {
    name = "id"
    type = "N"
  }
}
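The table starts out empty, so the counter item needs to exist before anything can read it. A minimal boto3 sketch to seed it is below; the table name, the id = 1 key, and the visit_count attribute are illustrative assumptions, not values from this post.

import boto3

# Hypothetical table name; in the Terraform above it comes from
# var.dynamodb_table.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-visits")

# Seed the single item the Lambda function will increment.
table.put_item(Item={"id": 1, "visit_count": 0})
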
  • Lambda function (python) to read/write website visitor count to DynamoDB.

With the database in place, I added the Lambda function I wrote in Python (boto3), together with the IAM role and policy it needs.

data "archive_file" "lambda_zip" {
  type = "zip"

  source_dir  = "${path.module}/src"
  output_path = "${path.module}/src.zip"
}    

resource "aws_s3_object" "this" {
  bucket = aws_s3_bucket.cloud-resume-bucket.id

  key    = "src.zip"
  source = data.archive_file.lambda_zip.output_path

  etag = filemd5(data.archive_file.lambda_zip.output_path)
}

# Define the Lambda function.
resource "aws_lambda_function" "apigw_lambda_ddb" {
  function_name = "app"
  description   = "visitor counter"

  s3_bucket = aws_s3_bucket.cloud-resume-bucket.id
  s3_key    = aws_s3_object.this.key

  runtime = "python3.8"
  handler = "app.lambda_handler"

  source_code_hash = data.archive_file.lambda_zip.output_base64sha256

  role = aws_iam_role.lambda_exec.arn

  environment {
    variables = {
      DDB_TABLE = var.dynamodb_table
    }
  }


} 

resource "aws_iam_role" "lambda_exec" {
  name_prefix = "LambdaDdbPost"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
      }
    ]
  })
}

resource "aws_iam_policy" "lambda_exec_role" {
  name_prefix = "lambda-tf-pattern-ddb-post"

  policy = <<POLICY
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:UpdateItem"
            ],
            "Resource": "arn:aws:dynamodb:*:*:table/${var.dynamodb_table}"
        }
    ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = aws_iam_policy.lambda_exec_role.arn
}
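The post doesn't include the handler source itself, so here is a minimal sketch of what a boto3 visitor counter might look like. The single-item key (id = 1), the visit_count attribute, and the JSON response shape are my assumptions for illustration:

import json
import os

import boto3

# Table name is injected by Terraform through the DDB_TABLE
# environment variable defined above.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["DDB_TABLE"])


def lambda_handler(event, context):
    # Atomically increment the counter stored under a fixed key.
    response = table.update_item(
        Key={"id": 1},
        UpdateExpression="ADD visit_count :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["visit_count"])
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }
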
  • API Gateway to trigger Lambda function.

I set up an API Gateway HTTP API to trigger the Lambda function.

# resource "random_string" "random" {
#   length           = 4
#   special          = false
# }

resource "aws_apigatewayv2_api" "http_lambda" {
  # name          = "${var.apigw_name}-${random_string.random.id}"
  name          = "${var.apigw_name}"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "default" {
  api_id = aws_apigatewayv2_api.http_lambda.id

  name        = "$default"
  auto_deploy = true
}

resource "aws_apigatewayv2_integration" "apigw_lambda" {
  api_id = aws_apigatewayv2_api.http_lambda.id

  integration_uri    = aws_lambda_function.apigw_lambda_ddb.invoke_arn
  integration_type   = "AWS_PROXY"
  integration_method = "POST"
}

resource "aws_apigatewayv2_route" "get" {
  api_id = aws_apigatewayv2_api.http_lambda.id

  route_key = "GET /" 
  target    = "integrations/${aws_apigatewayv2_integration.apigw_lambda.id}"
}

# Gives an external source permission to access the Lambda function.
resource "aws_lambda_permission" "api_gw" {                            
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.apigw_lambda_ddb.function_name
  principal     = "apigateway.amazonaws.com"

  source_arn = "${aws_apigatewayv2_api.http_lambda.execution_arn}/*/*"
}

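With everything deployed, a quick request to the API's invoke URL exercises the whole API Gateway → Lambda → DynamoDB chain. The URL below is a placeholder; the real one comes from the HTTP API created above:

import json
import urllib.request

# Placeholder invoke URL; the real one is an attribute of the
# aws_apigatewayv2_api / stage resources above.
url = "https://abc123.execute-api.us-east-1.amazonaws.com/"

with urllib.request.urlopen(url) as resp:
    body = json.loads(resp.read())

# Expect the updated visitor count in the response.
print(body)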

Then, after setting up the CI/CD pipeline with GitHub Actions, I launched the website. This blog post is only a summary of my work; I ran into plenty of difficulties while completing the challenge. The hardest step for me was wiring up the Lambda, API Gateway, and DynamoDB services, but after some thought and research I got past it. Thank you for reading. If you want to explore the project's code in detail, you can visit my GitHub account.

My Resume Site
GitHub Link of The Project
