Introduction
Until now, I have been manually building the infrastructure for my applications. This time, I challenged myself to manage the infrastructure using Infrastructure as Code (IaC).
By utilizing AWS CDK, I was able to codify complex infrastructure settings, manage them simply, and enable efficient deployments. In this article, I will introduce how to host an application using IaC with AWS CDK.
Overview of the Application Architecture
The application I created this time has the following structure. I designed it to efficiently run both the frontend and backend using Docker images on AWS.
- Frontend: React + TypeScript
- Backend: NestJS + TypeScript

Below is a diagram of the application's architecture:

(Architecture diagram: users → Internet Gateway → ALB in the public subnet → ECS Fargate services in the private subnet → DynamoDB and ECR via VPC endpoints)
Infrastructure Design
First, I needed to create a secure network configuration using a VPC (Virtual Private Cloud). Therefore, I set up public and private subnets within the VPC.
I placed an ALB (Application Load Balancer) in the public subnet and the backend and frontend servers managed by ECS (Elastic Container Service) Fargate in the private subnet. This ensures that the servers are not directly exposed to the outside, enhancing security.
Users can access the application via the Internet Gateway. Access is received through the ALB placed in the public subnet. The ALB is designed to route requests to the appropriate services and perform load balancing.
For the database, I decided to use DynamoDB to store data. The ECS tasks in the private subnet are configured to access DynamoDB via VPC endpoints. This allows communication with DynamoDB without going through the internet, improving communication security and performance.
The Docker images used by the servers are stored in ECR (Elastic Container Registry). ECS uses these images to run the application. Additionally, by obtaining images from S3 and ECR via VPC endpoints, it is possible to use resources within the private network.
Implementation
Next, I will translate the above infrastructure design into code.
Setup
First, set up the authentication information using AWS CLI:
aws configure
Then, proceed with the CDK setup:
npm install -g aws-cdk
mkdir cdk && cd cdk # Create a directory of your choice and move into it
cdk init app --language typescript
cdk bootstrap aws://${AWS_ACCOUNT_ID}/${AWS_REGION_NAME}
This automatically generates the files necessary for CDK settings.
Prepare Docker images for both the frontend and backend and store them in ECR.
Writing the CDK Content
Modify cdk-stack.ts
Since the CdkStack class is already created, write the design above into its constructor. Because I want to create separate environments such as testing and production, I add environment as an argument to the constructor for later use.
export class CdkStack extends cdk.Stack {
constructor(
scope: Construct,
id: string,
environment: string, // Added
props?: cdk.StackProps
) {
super(scope, id, props); // Required: initialize the base Stack
This CdkStack class is called in cdk.ts, so modify the code in cdk.ts to make environment usable:
#!/usr/bin/env node
import * as cdk from "aws-cdk-lib";
import "source-map-support/register";
import { CdkStack } from "../lib/cdk-stack";
const app = new cdk.App();
const environment = app.node.tryGetContext("env"); // Obtain from command line during deployment
new CdkStack(app, `${environment}CdkStack`, environment);
app.synth();
From here, write the content inside the constructor of the CdkStack class in cdk-stack.ts.
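Note that cdk-stack.ts needs the relevant aws-cdk-lib modules in scope. Assuming aws-cdk-lib v2 and the default project layout, the snippets below rely on imports along these lines:

```typescript
// Module imports assumed by the cdk-stack.ts snippets below (aws-cdk-lib v2)
import * as cdk from "aws-cdk-lib";
import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecr from "aws-cdk-lib/aws-ecr";
import * as iam from "aws-cdk-lib/aws-iam";
import * as elbv2 from "aws-cdk-lib/aws-elasticloadbalancingv2";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import { Construct } from "constructs";
```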
First, create a VPC. Since we need private and public subnets this time, write as follows:
const vpc = new ec2.Vpc(this, `${environment}AppVpc`, {
vpcName: `${environment}AppVpc`,
maxAzs: 2,
natGateways: 1,
subnetConfiguration: [
{
name: `${environment}AppPublicSubnet`,
subnetType: ec2.SubnetType.PUBLIC,
cidrMask: 24,
},
{
name: `${environment}AppPrivateSubnet`,
subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
cidrMask: 24,
},
],
});
As for the Internet Gateway, even though it does not appear explicitly in the code, it is created automatically because the VPC defines a public subnet.
Next, create VPC endpoints that the private subnet will use:
// VPC Endpoint for DynamoDB
vpc.addGatewayEndpoint(`${environment}DynamoDbEndpoint`, {
service: ec2.GatewayVpcEndpointAwsService.DYNAMODB,
});
// VPC Endpoint for S3 (necessary for obtaining Docker images)
vpc.addGatewayEndpoint(`${environment}S3Endpoint`, {
service: ec2.GatewayVpcEndpointAwsService.S3,
});
// VPC Endpoint for ECR (endpoint to send requests)
vpc.addInterfaceEndpoint(`${environment}EcrApiEndpoint`, {
service: ec2.InterfaceVpcEndpointAwsService.ECR,
subnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
});
// VPC Endpoint for ECR Docker (endpoint to obtain Docker images)
vpc.addInterfaceEndpoint(`${environment}EcrDockerEndpoint`, {
service: ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER,
subnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
});
// VPC Endpoint for CloudWatch Logs (for logging)
vpc.addInterfaceEndpoint(`${environment}CloudWatchLogsEndpoint`, {
service: ec2.InterfaceVpcEndpointAwsService.CLOUDWATCH_LOGS,
subnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
});
All of these endpoints serve the private subnet, but DynamoDB and S3 support gateway-type endpoints (addGatewayEndpoint), which attach to the VPC's route tables rather than to specific subnets, so no subnet specification is needed.
Next, create an ECS cluster to manage the containers:
const cluster = new ecs.Cluster(this, `${environment}AppCluster`, {
clusterName: `${environment}AppCluster`,
vpc: vpc,
});
Create the task execution role that Fargate uses to pull images from ECR and write logs:
const taskExecutionRole = new iam.Role(
this,
`${environment}TaskExecutionRole`,
{
// Ensure only ECS tasks can assume this role
assumedBy: new iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
}
);
// Attach necessary access permissions
taskExecutionRole.addManagedPolicy(
iam.ManagedPolicy.fromAwsManagedPolicyName(
"service-role/AmazonECSTaskExecutionRolePolicy"
)
);
Create a Fargate task definition for the frontend:
const frontendTaskDef = new ecs.FargateTaskDefinition(
this,
`${environment}AppFrontendTaskDef`,
{
family: `${environment}AppFrontendTaskDef`,
memoryLimitMiB: 512,
cpu: 256,
executionRole: taskExecutionRole,
}
);
Since we are using ECR, specify the image repository for the frontend (unnecessary if using Docker Hub):
const frontendRepository = ecr.Repository.fromRepositoryName(
this,
`${environment}frontendRepository`,
"app-react-nginx-image"
);
Add the container definition to the task definition created above, referencing the ECR image:
const frontendContainer = frontendTaskDef.addContainer(
`${environment}AppFrontendContainer`,
{
containerName: `${environment}AppFrontendContainer`,
image: ecs.ContainerImage.fromEcrRepository(
frontendRepository,
"latest"
),
logging: new ecs.AwsLogDriver({
streamPrefix: "Frontend", // Prefix for logging
}),
}
);
frontendContainer.addPortMappings({
containerPort: 80,
protocol: ecs.Protocol.TCP,
});
The containerPort is the port the ALB uses to reach Fargate, so it can stay at 80 regardless of whether the app itself is served over HTTPS.
Next, launch a Fargate service for the frontend server:
const frontendService = new ecs.FargateService(
this,
`${environment}AppFrontendService`,
{
serviceName: `${environment}AppFrontendService`,
cluster: cluster,
taskDefinition: frontendTaskDef,
assignPublicIp: false,
vpcSubnets: {
subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
},
}
);
// Configure auto-scaling
const scalingFrontend = frontendService.autoScaleTaskCount({
minCapacity: 1,
maxCapacity: 5,
});
// Target tracking: keep average CPU utilization around 50%
scalingFrontend.scaleOnCpuUtilization("CpuScalingFrontend", {
targetUtilizationPercent: 50,
});
// Target tracking: keep average memory utilization around 70%
scalingFrontend.scaleOnMemoryUtilization("MemoryScalingFrontend", {
targetUtilizationPercent: 70,
});
With this, the settings for the frontend server are complete. Next, proceed to set up the backend server. Since it is almost the same as the frontend server, explanations will be brief.
const backendTaskDef = new ecs.FargateTaskDefinition(
this,
`${environment}AppBackendTaskDef`,
{
family: `${environment}AppBackendTaskDef`,
memoryLimitMiB: 512,
cpu: 256,
executionRole: taskExecutionRole,
}
);
const backendRepository = ecr.Repository.fromRepositoryName(
this,
`${environment}backendRepository`,
"app-nestjs-image"
);
const backendContainer = backendTaskDef.addContainer(
`${environment}AppBackendContainer`,
{
containerName: `${environment}AppBackendContainer`,
image: ecs.ContainerImage.fromEcrRepository(
backendRepository,
"latest"
),
logging: new ecs.AwsLogDriver({
streamPrefix: "Backend",
}),
}
);
backendContainer.addPortMappings({
containerPort: 3000,
protocol: ecs.Protocol.TCP,
});
const backendService = new ecs.FargateService(
this,
`${environment}AppBackendService`,
{
serviceName: `${environment}AppBackendService`,
cluster: cluster,
taskDefinition: backendTaskDef,
assignPublicIp: false,
vpcSubnets: {
subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
},
}
);
const scalingBackend = backendService.autoScaleTaskCount({
minCapacity: 1,
maxCapacity: 5,
});
scalingBackend.scaleOnCpuUtilization("CpuScalingBackend", {
targetUtilizationPercent: 60,
});
scalingBackend.scaleOnMemoryUtilization("MemoryScalingBackend", {
targetUtilizationPercent: 75,
});
Next, create the ALB:
const loadBalancer = new elbv2.ApplicationLoadBalancer(
this,
`${environment}AppApplicationLoadBalancer`,
{
vpc,
loadBalancerName: `${environment}AppApplicationLoadBalancer`,
internetFacing: true,
}
);
Connect the ALB to resources in the private subnet:
const FrontendTargetGroup = new elbv2.ApplicationTargetGroup(
this,
`${environment}AppFrontendTG`,
{
targetGroupName: `${environment}AppFrontendTG`,
vpc,
port: 80,
protocol: elbv2.ApplicationProtocol.HTTP,
targets: [frontendService],
healthCheck: {
path: "/",
},
}
);
const backendTargetGroup = new elbv2.ApplicationTargetGroup(
this,
`${environment}AppBackendTG`,
{
targetGroupName: `${environment}AppBackendTG`,
vpc,
port: 3000,
protocol: elbv2.ApplicationProtocol.HTTP,
targets: [backendService],
healthCheck: {
path: "/v1/health", // Specify the API for health checks
},
}
);
Describe the ALB's request handling:
const listener = loadBalancer.addListener(`${environment}HttpListener`, {
port: 80,
open: true,
defaultAction: elbv2.ListenerAction.forward([FrontendTargetGroup]),
});
listener.addTargetGroups(`${environment}BackendTargetGroups`, {
targetGroups: [backendTargetGroup],
priority: 1,
conditions: [elbv2.ListenerCondition.pathPatterns(["/v1/*"])],
});
This time, a single ALB distributes traffic to both the frontend and backend. By default, requests are forwarded to the frontend, but requests whose path matches /v1/* are forwarded to the backend Fargate service. If you use separate ALBs instead, the addTargetGroups configuration is unnecessary.
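If you later serve the app over HTTPS, it is the listener side that changes, not the containers. A minimal sketch, assuming you already have an ACM certificate and that certificateArn is a hypothetical variable holding its ARN (the certificate itself is not created here):

```typescript
// Hedged sketch: an HTTPS listener in front of the same target group.
// certificateArn is assumed to reference an existing ACM certificate.
const httpsListener = loadBalancer.addListener(`${environment}HttpsListener`, {
  port: 443, // HTTPS is inferred from the port and certificates
  open: true,
  certificates: [elbv2.ListenerCertificate.fromArn(certificateArn)],
  defaultAction: elbv2.ListenerAction.forward([FrontendTargetGroup]),
});
```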
Finally, create the necessary DynamoDB table. In DynamoDB, only a partition key is mandatory; a sort key, secondary indexes, and TTL can be added as needed.
const usersTable = new dynamodb.Table(this, `${environment}UsersTable`, {
tableName: `${environment}-users`,
partitionKey: {
name: "account_id",
type: dynamodb.AttributeType.STRING,
},
});
usersTable.grantReadWriteData(backendTaskDef.taskRole);
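grantReadWriteData attaches the required IAM policy to the task role that CDK creates automatically for the task definition. As for the optional features mentioned above, here is an illustrative sketch of a table with a sort key, TTL, and a global secondary index; the attribute names (created_at, email, expires_at) are examples of mine, not from the actual app:

```typescript
// Illustrative sketch of the optional DynamoDB table features.
// Attribute names here are hypothetical examples.
const eventsTable = new dynamodb.Table(this, `${environment}EventsTable`, {
  tableName: `${environment}-events`,
  partitionKey: { name: "account_id", type: dynamodb.AttributeType.STRING },
  sortKey: { name: "created_at", type: dynamodb.AttributeType.STRING },
  timeToLiveAttribute: "expires_at", // items past this timestamp expire automatically
});

// A global secondary index enables queries on a different key
eventsTable.addGlobalSecondaryIndex({
  indexName: `${environment}EmailIndex`,
  partitionKey: { name: "email", type: dynamodb.AttributeType.STRING },
});
```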
Now that we have written the code necessary for IaC, we just need to execute and verify it.
Execution
First, synthesize the stack to catch errors in the CDK code above:
cdk synth --context env=dev
If there are errors, they will appear as follows, so fix them as needed:
Error: Validation failed with the following errors:
[undefinedCdkStack/undefinedAppFrontendTargetGroup] Target group name: "undefinedAppFrontendTargetGroup" can have a maximum of 32 characters.
[undefinedCdkStack/undefinedAppBackendTargetGroup] Target group name: "undefinedAppBackendTargetGroup" can have a maximum of 32 characters.
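The 32-character limit above can also be caught before running synth with a small standalone check. A sketch; the helper and its name are my own, not part of CDK:

```typescript
// Hypothetical pre-synth sanity check: ALB target group names are
// limited to 32 characters, so validate generated names up front.
const TARGET_GROUP_NAME_LIMIT = 32;

function checkTargetGroupName(environment: string, suffix: string): string {
  const name = `${environment}${suffix}`;
  if (name.length > TARGET_GROUP_NAME_LIMIT) {
    throw new Error(
      `Target group name "${name}" (${name.length} chars) exceeds ${TARGET_GROUP_NAME_LIMIT} characters`
    );
  }
  return name;
}

console.log(checkTargetGroupName("dev", "AppFrontendTG")); // within the limit
```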
If it executes without errors, proceed to deploy. If the Docker image does not run properly, the Fargate tasks will repeatedly start, fail, stop, and restart; if deployment takes an excessively long time, check the ECS service events in the AWS Management Console.
cdk deploy --context env=dev
With this, hosting the application using IaC is complete.
Conclusion
This was my first attempt at IaC, and by managing the infrastructure as code, I found it a major benefit to be able to build identical infrastructure for multiple environments such as production and development without manual errors.
If this article was helpful in any way, I would be encouraged if you could press the like button! Thank you for reading to the end.