ECS Fargate, EFS, Aurora Serverless V2, CloudFront and S3
Prerequisites
A public domain configured in Route 53
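If the stack needs to reference that existing hosted zone (for ACM validation and the alias records shown later), a simple data source lookup is enough. A minimal sketch, assuming a hypothetical zone name; adjust it to your own domain:

# Look up the existing public hosted zone (assumed to already exist in Route 53)
data "aws_route53_zone" "this" {
  name         = "domain.pro" # hypothetical zone name
  private_zone = false
}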
Variables
# Environment
variable "tags" {
  description = "AWS Tags to add to all resources created."
  type        = map(string)
  default = {
    Terraform = "true"
    Owner     = "SRE Team"
    Env       = "dev"
  }
}

variable "aws_region" {
  description = "AWS Region (e.g. us-east-1, us-west-2, sa-east-1, us-east-2)."
  default     = "us-west-2"
}

variable "azs" {
  description = "AWS Availability Zones (e.g. us-east-1a, us-west-2b, sa-east-1c, us-east-2a)."
  default = [
    "us-west-2a",
    "us-west-2b"
  ]
}

variable "env_prefix" {
  description = "Environment prefix for all resources to be created, e.g. customer name."
  default     = "wordp"
}

variable "environment" {
  description = "Name of the application environment, e.g. dev, prod, staging."
  default     = "dev"
}

variable "site_domain" {
  description = "The primary domain name of the website."
  default     = "wordpress.domain.pro"
}

variable "public_alb_domain" {
  description = "The public domain name of the ALB."
  default     = "alb.domain.pro"
}

variable "cf_price_class" {
  description = "The price class for this distribution. One of PriceClass_All, PriceClass_200, PriceClass_100."
  default     = "PriceClass_100"
}

variable "error_ttl" {
  description = "The minimum amount of time (in seconds) that CloudFront caches an HTTP error code."
  default     = "30"
}

variable "desired_count" {
  description = "The number of Fargate task instances to keep running."
  default     = "1"
}

variable "log_retention_in_days" {
  description = "The number of days to retain CloudWatch logs."
  default     = "1"
}

# VPC parameters
variable "vpc_cidr" {
  description = "The VPC CIDR block."
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidrs" {
  description = "List of CIDR blocks for public subnets."
  default     = ["10.0.32.0/20", "10.0.48.0/20"]
}

variable "private_subnet_cidrs" {
  description = "List of CIDR blocks for private subnets."
  default     = ["10.0.64.0/20", "10.0.80.0/20"]
}

# Database parameters
variable "db_backup_retention_days" {
  description = "Number of days to retain database backups."
  default     = "1"
}

variable "db_backup_window" {
  description = "Daily time range during which automated RDS backups are created, if automated backups are enabled via the BackupRetentionPeriod parameter. Time in UTC."
  default     = "05:00-07:00"
}

variable "db_max_capacity" {
  description = "The maximum Aurora capacity unit."
  default     = "2.0"
}

variable "db_min_capacity" {
  description = "The minimum Aurora capacity unit."
  default     = "1.0"
}

variable "db_name" {
  description = "Database name."
  default     = "wordp"
}

variable "db_master_username" {
  description = "Master username of the database."
  default     = "wordp"
}

variable "db_master_password" {
  description = "Master password of the database."
}

variable "db_engine_version" {
  description = "The database engine version."
  default     = "8.0.mysql_aurora.3.02.0"
}

# Task parameters
variable "task_memory" {
  description = "The amount (in MiB) of memory used by the task."
  default     = 2048
}

variable "task_cpu" {
  description = "The number of CPU units used by the task."
  default     = 1024
}

variable "scaling_up_cooldown" {
  description = "The amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start (upscaling)."
  default     = "60"
}

variable "scaling_down_cooldown" {
  description = "The amount of time, in seconds, after a scaling activity completes and before the next scaling activity can start (downscaling)."
  default     = "300"
}

variable "scaling_up_adjustment" {
  description = "The number of tasks by which to scale when the upscaling thresholds are breached."
  default     = "1"
}

variable "scaling_down_adjustment" {
  description = "The number of tasks by which to scale (negative for downscaling) when the downscaling thresholds are breached."
  default     = "-1"
}

variable "task_cpu_low_threshold" {
  description = "The CPU value below which downscaling kicks in."
  default     = "30"
}

variable "task_cpu_high_threshold" {
  description = "The CPU value above which upscaling kicks in."
  default     = "75"
}

variable "max_task" {
  description = "Maximum number of tasks the service should scale to."
  default     = "3"
}

variable "min_task" {
  description = "Minimum number of tasks the service should always maintain."
  default     = "2"
}

# S3 parameters
variable "bucket_name" {
  description = "S3 bucket for WordPress assets."
  default     = "bucket-wordp-assets"
}
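The scaling_* and *_task variables above feed the ECS Service Auto Scaling resources of the stack, which are not reproduced in this post. As a reference only, a minimal sketch of how the scale-up side could consume them; the cluster and service names are assumptions:

# Hypothetical sketch: register the ECS service as a scalable target
resource "aws_appautoscaling_target" "this" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.this.name}/${aws_ecs_service.this.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = var.min_task
  max_capacity       = var.max_task
}

# Hypothetical sketch: step-scaling policy for scaling up
resource "aws_appautoscaling_policy" "up" {
  name               = "${var.env_prefix}-${var.environment}-scale-up"
  service_namespace  = aws_appautoscaling_target.this.service_namespace
  resource_id        = aws_appautoscaling_target.this.resource_id
  scalable_dimension = aws_appautoscaling_target.this.scalable_dimension

  step_scaling_policy_configuration {
    adjustment_type = "ChangeInCapacity"
    cooldown        = var.scaling_up_cooldown

    step_adjustment {
      metric_interval_lower_bound = 0
      scaling_adjustment          = var.scaling_up_adjustment
    }
  }
}

A CloudWatch alarm on the service CPU, using var.task_cpu_high_threshold, would then trigger this policy, and a mirrored alarm/policy pair using the downscaling variables would handle scaling in.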
Build
1. Domain validation:
2. Run the stack and **70 resources** will be created in the account:
terraform init
terraform plan
terraform apply
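Note that var.db_master_password has no default, so Terraform will prompt for it at plan/apply time. It can also be supplied up front, for example (the value below is just a placeholder):

terraform apply -var="db_master_password=CHANGE_ME"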
3. At this point we can already verify that the site is available, and all that is left is to follow the standard WordPress setup flow:
4. To make good use of Amazon CloudFront (CDN) in this WordPress scenario, we can rely on plugins that let us use the bucket created in our stack as the repository for assets (static content, such as images). The plugin also needs write access to that bucket; see the policy sketch after the plugin link below:
WordPress plugin (https://deliciousbrains.com/wp-offload-media/)
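Since WordPress runs inside the ECS tasks, one way to grant the plugin that write access is to attach a policy to the task role already defined in the stack. A minimal sketch, assuming the actions below match what the plugin needs:

# Hypothetical sketch: allow the ECS task role (used by WordPress / WP Offload Media)
# to read and write objects in the assets bucket
resource "aws_iam_role_policy" "task_s3_assets" {
  name = "${var.env_prefix}-${var.environment}-s3-assets"
  role = aws_iam_role.task_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"]
        Resource = [
          module.s3_bucket.s3_bucket_arn,
          "${module.s3_bucket.s3_bucket_arn}/*"
        ]
      }
    ]
  })
}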
With step 4 of the build complete, the main objective has been achieved. Now let's take a closer look at some of the resources provisioned in the stack and how they behave.
RDS Aurora MySQL Serverless
The credentials are stored in AWS Systems Manager Parameter Store through this Terraform block:
resource "aws_ssm_parameter" "db_master_user" {
name = "/${var.env_prefix}/${var.environment}/db_master_user"
type = "SecureString"
value = var.db_master_username
tags = var.tags
}
resource "aws_ssm_parameter" "db_master_password" {
name = "/${var.env_prefix}/${var.environment}/db_master_password"
type = "SecureString"
value = var.db_master_password
tags = var.tags
}
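For the container to receive these values as secrets (see the task definition below), the task execution role must be allowed to read the parameters. A minimal sketch, assuming the execution role name used in the stack:

# Hypothetical sketch: let the task execution role read the SSM parameters
resource "aws_iam_role_policy" "task_execution_ssm" {
  name = "${var.env_prefix}-${var.environment}-ssm-read"
  role = aws_iam_role.task_execution_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["ssm:GetParameters"]
        Resource = [
          aws_ssm_parameter.db_master_user.arn,
          aws_ssm_parameter.db_master_password.arn
        ]
      }
    ]
  })
}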
ECS (Fargate)
We provision a service with two tasks:
Looking at the details of the tasks running the WordPress containers, we can check the environment variables loaded through the task definition created in Terraform, as well as the EFS volume we mounted:
resource "aws_ecs_task_definition" "this" {
family = "${var.env_prefix}-${var.environment}"
execution_role_arn = aws_iam_role.task_execution_role.arn
task_role_arn = aws_iam_role.task_role.arn
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
cpu = var.task_cpu
memory = var.task_memory
container_definitions = <<CONTAINER_DEFINITION
[
{
"secrets": [
{
"name": "WORDPRESS_DB_USER",
"valueFROM": "${aws_ssm_parameter.db_master_user.arn}"
},
{
"name": "WORDPRESS_DB_PASSWORD",
"valueFROM": "${aws_ssm_parameter.db_master_password.arn}"
}
],
"environment": [
{
"name": "WORDPRESS_DB_HOST",
"value": "${aws_rds_cluster.this.endpoint}"
},
{
"name": "WORDPRESS_DB_NAME",
"value": "${var.db_name}"
}
],
"essential": true,
"image": "wordpress",
"name": "wordpress",
"portMappings": [
{
"containerPort": 80
}
],
"mountPoints": [
{
"containerPath": "/var/www/html",
"sourceVolume": "efs"
}
],
"logConfiguration": {
"logDriver":"awslogs",
"options": {
"awslogs-group": "${aws_cloudwatch_log_group.wordpress.name}",
"awslogs-region": "${var.aws_region}",
"awslogs-stream-prefix": "app"
}
}
}
]
CONTAINER_DEFINITION
volume {
name = "efs"
efs_volume_configuration {
file_system_id = aws_efs_file_system.this.id
}
}
}
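The task definition above is run by an ECS service, which is not reproduced in full in this post. As a reference only, a minimal sketch of what that service might look like; the cluster, private subnets, security group, and target group references are assumptions:

# Hypothetical sketch of the Fargate service that runs the task definition above
resource "aws_ecs_service" "this" {
  name            = "${var.env_prefix}-${var.environment}"
  cluster         = aws_ecs_cluster.this.id
  task_definition = aws_ecs_task_definition.this.arn
  desired_count   = var.desired_count
  launch_type     = "FARGATE"

  network_configuration {
    subnets         = aws_subnet.private[*].id
    security_groups = [aws_security_group.ecs_tasks.id]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.this.arn
    container_name   = "wordpress"
    container_port   = 80
  }
}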
CloudFront
In the CloudFront distribution, we create two origins: one is the public ALB and the other is the S3 bucket for the WordPress *assets*:
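The distribution resource is not reproduced in full here. As a reference only, a minimal sketch of the two origin blocks inside aws_cloudfront_distribution.this; the origin access control reference and the S3 module output name are assumptions:

# Hypothetical sketch of the two origins inside aws_cloudfront_distribution.this
origin {
  domain_name = var.public_alb_domain
  origin_id   = "alb"

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "https-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}

origin {
  domain_name              = module.s3_bucket.s3_bucket_bucket_regional_domain_name
  origin_id                = "s3_assets"
  origin_access_control_id = aws_cloudfront_origin_access_control.this.id
}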
In the behaviors, the default action is to forward the request straight to the ALB, which in turn routes it to the ECS tasks. If the request matches the path pattern of the default WordPress assets directory, it is sent to S3, where this static content is distributed globally through CloudFront:
# Cache behavior with precedence 1
default_cache_behavior {
  allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
  cached_methods   = ["GET", "HEAD", "OPTIONS"]
  target_origin_id = "alb"

  forwarded_values {
    query_string = true
    headers      = ["*"]

    cookies {
      forward = "all"
    }
  }

  viewer_protocol_policy = "redirect-to-https"
  min_ttl                = 0
  default_ttl            = 0
  max_ttl                = 0
  compress               = true
}

# Cache behavior with precedence 0
ordered_cache_behavior {
  path_pattern     = "/wp-content/uploads/*"
  allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
  cached_methods   = ["GET", "HEAD", "OPTIONS"]
  target_origin_id = "s3_assets"

  forwarded_values {
    query_string = true
    headers      = ["Host"]

    cookies {
      forward = "all"
    }
  }

  min_ttl                = 900
  default_ttl            = 900
  max_ttl                = 900
  compress               = true
  viewer_protocol_policy = "redirect-to-https"
}
S3
The bucket's directory structure is created by the plugin used earlier. It is also worth noting that a policy is attached to the bucket allowing CloudFront access for distribution:
data "aws_iam_policy_document" "s3_assets" {
statement {
actions = ["s3:GetObject"]
resources = ["${module.s3_bucket.s3_bucket_arn}/*"]
principals {
type = "Service"
identifiers = ["cloudfront.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "AWS:SourceArn"
values = [aws_cloudfront_distribution.this.arn]
}
}
}
resource "aws_s3_bucket_policy" "s3_assets" {
bucket = module.s3_bucket.s3_bucket_id
policy = data.aws_iam_policy_document.s3_assets.json
}
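This bucket policy follows the origin access control (OAC) pattern, where the S3 origin only accepts requests signed by CloudFront. The distribution side of that pairing is an aws_cloudfront_origin_access_control resource; a minimal sketch, with the resource name assumed:

# Hypothetical sketch: origin access control attached to the S3 origin
resource "aws_cloudfront_origin_access_control" "this" {
  name                              = "${var.env_prefix}-${var.environment}-s3-assets"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}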
Route 53
The records below are inserted into the existing DNS zone, referencing the ALB, CloudFront, and the ACM certificate:
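As a reference only, a minimal sketch of what the alias records might look like in Terraform, assuming the hosted zone data source from the prerequisites section and an aws_lb.this resource in the stack:

# Hypothetical sketch: site_domain as an alias to the CloudFront distribution
resource "aws_route53_record" "site" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = var.site_domain
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.this.domain_name
    zone_id                = aws_cloudfront_distribution.this.hosted_zone_id
    evaluate_target_health = false
  }
}

# Hypothetical sketch: public_alb_domain as an alias to the ALB
resource "aws_route53_record" "alb" {
  zone_id = data.aws_route53_zone.this.zone_id
  name    = var.public_alb_domain
  type    = "A"

  alias {
    name                   = aws_lb.this.dns_name
    zone_id                = aws_lb.this.zone_id
    evaluate_target_health = true
  }
}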
Happy building!