Camille He
Setup containerized Application in AWS ECS - Part 3/3

In the previous Part 2/3, I introduced how to set up an ECS cluster in AWS ECS. Now, we will dive into the next and final topic: setting up the ECS service related resources in AWS ECS.

⚓ Amazon ECS Concepts


The AWS resources in the green box are created in this part. These include:

ECS Task Definition & ECS Service

A task definition is a blueprint for your application. It is a text file in JSON format that describes the parameters of one or more containers that form your application.

An ECS service is used to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster.

The ECS task definition is the core component of your containerized application. All container-related parameters are defined in the container_definitions parameter of the aws_ecs_task_definition resource: for example, the Docker image, resource (CPU, memory) allocation, log configuration, container environment variables, etc.
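To make this concrete, here is a minimal Terraform sketch of a task definition and a service. The resource names, the image reference, the CPU/memory values, and the log group are illustrative placeholders, not the project's actual values; `aws_ecs_cluster.this` is assumed to be the cluster created in Part 2/3.

```hcl
# Minimal sketch: a task definition with one container, and a service
# that keeps two copies of it running. All names and values below are
# placeholders for illustration.
resource "aws_ecs_task_definition" "app" {
  family                   = "strapi-app"
  network_mode             = "bridge"
  requires_compatibilities = ["EC2"]

  container_definitions = jsonencode([
    {
      name      = "strapi"
      image     = "my-registry/strapi:latest" # placeholder image
      cpu       = 256
      memory    = 512
      essential = true
      portMappings = [
        { containerPort = 1337, hostPort = 0 } # hostPort 0 = dynamic port
      ]
      environment = [
        { name = "NODE_ENV", value = "production" }
      ]
      logConfiguration = {
        logDriver = "awslogs"
        options = {
          "awslogs-group"         = "/ecs/strapi"
          "awslogs-region"        = "us-east-1"
          "awslogs-stream-prefix" = "strapi"
        }
      }
    }
  ])
}

resource "aws_ecs_service" "app" {
  name            = "strapi-service"
  cluster         = aws_ecs_cluster.this.id # cluster from Part 2/3 (assumed name)
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
}
```

Note that with `hostPort = 0`, Docker assigns a random host port to each task, which is what allows multiple tasks of the same service to run on a single EC2 instance behind a target group.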

ALB & Target Group & Listener

A load balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets.

A listener checks for connection requests from clients, using the protocol and port that you configure.

Each target group routes requests to one or more registered targets, such as EC2 instances, using the protocol and port number that you specify.

Now we have some containers running in the ECS cluster. You could access a specific container via the public IP address of the EC2 instance on which the container is located, together with the host port; however, this is not recommended. Instead, we use an ALB to route traffic to the containers. All containers for a specific ECS service are managed by a target group. With a load balancer listener, the load balancer can forward traffic to the target group. Here is a diagram of the relationship among the load balancer, listener, and target group.
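The load balancer, target group, and listener described above can be sketched in Terraform as follows. The names, the `var.*` references, and the health check path are assumptions for illustration; the service attaches its containers to the target group via a `load_balancer` block.

```hcl
# Sketch of ALB -> listener -> target group wiring.
# var.public_subnet_ids, var.alb_sg_id, and var.vpc_id are assumed inputs.
resource "aws_lb" "this" {
  name               = "strapi-alb"
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids
  security_groups    = [var.alb_sg_id]
}

resource "aws_lb_target_group" "app" {
  name     = "strapi-tg"
  port     = 1337
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path = "/" # illustrative health check path
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.this.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn
  }
}

# Inside the aws_ecs_service resource, a load_balancer block registers
# the service's containers with the target group:
#
#   load_balancer {
#     target_group_arn = aws_lb_target_group.app.arn
#     container_name   = "strapi"
#     container_port   = 1337
#   }
```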


Application Auto Scaling Target & Policy

Automatic scaling is the ability to increase or decrease the desired count of tasks in your Amazon ECS service automatically. Amazon ECS leverages the Application Auto Scaling service to provide this functionality. Amazon ECS service auto scaling supports the following types of automatic scaling: target tracking, step scaling, and scheduled scaling. In this project, I chose target tracking.

With target tracking scaling policies, you select a metric and set a target value. In this project, I chose the metric ECSServiceAverageCPUUtilization and the scalable dimension ecs:service:DesiredCount with a target value of 75. This means that when the average CPU utilization of the ECS service is greater than 75%, new tasks are launched (scale out) to meet demand, and vice versa. Automatic scaling is driven by CloudWatch alarms. If you go to the CloudWatch Alarms console, you will find that two alarms have been created, as shown below.
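The target tracking setup above can be sketched in Terraform with a scaling target and a scaling policy. The min/max capacities and resource names are illustrative assumptions; `aws_ecs_cluster.this` and `aws_ecs_service.app` are assumed references to the cluster and service defined earlier.

```hcl
# Sketch of target tracking auto scaling for the ECS service.
# min/max capacity values are illustrative placeholders.
resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.this.name}/${aws_ecs_service.app.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 4
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 75 # average CPU utilization target (%)

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```

With this policy in place, Application Auto Scaling creates the two CloudWatch alarms on your behalf: one for scaling out and one for scaling in.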


You provide the peak point (75), and AWS calculates the valley point (67.5) from it.

Project Source Code

GitHub source code


All Terraform-related source code is in the terraform directory. Go through the README documentation for details. Set up your local environment if you want to deploy these AWS resources from your local machine, or use the GitHub Actions workflows.

Currently, you can use the provided Docker image. However, you can also build your own Docker image for the Strapi application and push it to a registry by following the GitHub README.


Validate that the task is running successfully from the AWS ECS console.

Then, you can access the Strapi application via the ALB DNS name. Here is the portal of the Strapi application.


And you can use the admin panel, as shown below, to do some fun things after signing up.


📚 References

Your ideas and comments are always appreciated. Thanks for reading! 😄
