Cadu Ribeiro

Originally published at cadu.dev

Easy deploy your Docker applications to AWS using ECS and Fargate

In this post, I will demonstrate how you can deploy your Docker application to AWS using ECS and Fargate.

As an example, I will deploy this app to ECS. The source can be found here.

I will use Terraform to spin up the infrastructure so I can easily track everything I create as code. If you want to learn the basics of Terraform, please read my post about it.

ECS

What is ECS?

The Elastic Container Service (ECS) is an AWS service that handles the orchestration of Docker containers in your EC2 cluster. It is an alternative to Kubernetes, Docker Swarm, and others.

ECS Terminology

To start understanding what ECS is, we need to understand its terminology, which differs from the Docker world:

  • Cluster: A group of EC2 instances hosting your containers.
  • Task definition: The specification of how ECS should run your app. Here you define which image to use, port mappings, memory, environment variables, etc.
  • Service: A Service launches and maintains tasks inside the cluster. It auto-recovers any stopped tasks, keeping the number of running tasks at the count you specified.

Fargate

Fargate is a technology that allows you to run containers in ECS without needing to manage the EC2 servers for the cluster. You only deploy your Docker applications and set the scaling rules for them. Fargate is a launch type for ECS.

Show me the code

The full example is on GitHub.

The project structure

Our Terraform project is composed of the following structure:

```
├── modules
│   ├── code_pipeline
│   ├── ecs
│   ├── networking
│   └── rds
├── pipeline.tf
├── production.tf
├── production_key.pub
├── terraform.tfvars
└── variables.tf
```

  • Modules are where we store the code that handles the creation of a group of resources. They can be reused across environments (Production, Staging, QA, etc.) without duplicating a lot of code.
  • production.tf is the file that defines the environment itself. It calls the modules, passing variables to them.
  • pipeline.tf handles the creation of the deployment pipeline using the code_pipeline module. The pipeline can be a global resource that doesn't need to be isolated per environment.

First part, the networking

The branch with this part can be found here.

The first thing we need to create is the VPC with 2 subnets (1 public and 1 private) in each Availability Zone. Each Availability Zone is a geographically isolated location within the AWS region. Keeping our resources in more than one zone is the first step toward high availability: if one zone fails for some reason, your application can keep answering from the others.

Our networking

Keeping the cluster in the private subnet protects your infrastructure from external access. The private subnet can only be reached by resources inside the public subnets (in our case, only the Load Balancer).

This is the code to create this structure (it is practically the same as the one from my introductory post about Terraform):

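A minimal sketch of what the networking module contains, assuming two Availability Zones in us-east-1 and illustrative CIDR ranges (route tables and their associations are omitted for brevity):

```hcl
# modules/networking/main.tf (sketch; names and CIDRs are illustrative)

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# One public and one private subnet per Availability Zone
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.${count.index}.0/24"
  availability_zone       = element(["us-east-1a", "us-east-1b"], count.index)
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.${count.index + 10}.0/24"
  availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
}

# Internet Gateway gives the public subnets a route to the internet
resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id
}

# NAT Gateway (in a public subnet) lets the private subnets make
# outbound connections to the internet
resource "aws_eip" "nat" {
  vpc = true # newer AWS providers use: domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}
```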

The above code creates the VPC and 4 subnets (2 public and 2 private), one pair in each Availability Zone. It also creates a NAT Gateway to allow the private network to access the internet.

The Database

The branch with this part can be found here.

We will create an RDS database. It will be located in the private subnet, so it can only be accessed from inside our network.

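A sketch of the RDS module, assuming PostgreSQL (port 5432) and illustrative variable names such as var.environment and var.private_subnet_ids:

```hcl
# modules/rds/main.tf (sketch)

# Security group that database clients (e.g. the ECS tasks) will attach to
resource "aws_security_group" "db_access" {
  name   = "${var.environment}-db-access"
  vpc_id = var.vpc_id
}

# Security group on the database itself: only allow traffic coming
# from members of the db_access group
resource "aws_security_group" "rds" {
  name   = "${var.environment}-rds"
  vpc_id = var.vpc_id

  ingress {
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.db_access.id]
  }
}

# Place the database in the private subnets
resource "aws_db_subnet_group" "rds" {
  name       = "${var.environment}-rds-subnet-group"
  subnet_ids = var.private_subnet_ids
}

resource "aws_db_instance" "rds" {
  identifier             = "${var.environment}-db"
  engine                 = "postgres"
  instance_class         = var.instance_class
  allocated_storage      = var.allocated_storage
  db_name                = var.database_name
  username               = var.database_username
  password               = var.database_password
  db_subnet_group_name   = aws_db_subnet_group.rds.name
  vpc_security_group_ids = [aws_security_group.rds.id]
  skip_final_snapshot    = true
}
```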

With this code, we create the RDS resource with values received from the variables. We also create the security group that should be used by resources that want to connect to the database (in our case, the ECS cluster).

Ok. Now we have the database. Let’s finally create our ECS to deploy our app \o/.

Take Three: The ECS

The branch with this part can be found here.

We are approaching the final steps. Now comes the part where we define the ECS resources needed for our app.

The ECR repository

The first thing is to create the repository to store our built images.

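This is a single resource; a sketch, assuming the environment prefix is passed in as a variable:

```hcl
# modules/ecs/main.tf (sketch; names are illustrative)
resource "aws_ecr_repository" "app" {
  name = "${var.environment}-app"
}

# The repository URL is what CodeBuild and the task definition will reference
output "repository_url" {
  value = aws_ecr_repository.app.repository_url
}
```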

The ECS cluster

Next, we need our ECS cluster. Even when using Fargate (which doesn't require any EC2 instances), we need to define a cluster for the application.

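The cluster definition itself is equally small; a sketch with an assumed naming convention:

```hcl
resource "aws_ecs_cluster" "cluster" {
  # With Fargate there are no EC2 instances to manage; the cluster is
  # just a logical grouping for services and tasks
  name = "${var.environment}-ecs-cluster"
}
```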

The task definitions

Now, we will define 2 task definitions: one for the web app and one for the DB migration.

The task definitions are configured in JSON files and rendered as templates by Terraform.

This is the task definition of the web app:

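A sketch of what such a container definition template can look like, assuming a Rails app listening on port 80; the placeholders (${image}, ${database_url}, etc.) are illustrative and get filled in by Terraform:

```json
[
  {
    "name": "web",
    "image": "${image}",
    "cpu": 256,
    "memory": 512,
    "essential": true,
    "portMappings": [
      { "containerPort": 80, "hostPort": 80 }
    ],
    "environment": [
      { "name": "RAILS_ENV", "value": "production" },
      { "name": "DATABASE_URL", "value": "${database_url}" }
    ]
  }
]
```

The Terraform side that renders this template and registers the task definition might look like this (the execution role is assumed to be created elsewhere in the module):

```hcl
# Render the JSON template, filling in the ECR repository URL and app config
data "template_file" "web_task" {
  template = file("${path.module}/tasks/web_task_definition.json")

  vars = {
    image        = aws_ecr_repository.app.repository_url
    database_url = var.database_url
  }
}

resource "aws_ecs_task_definition" "web" {
  family                   = "${var.environment}-web"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc" # required for Fargate
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = var.execution_role_arn
  container_definitions    = data.template_file.web_task.rendered
}
```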

In the file above, we define the task for ECS. We pass the created ECR image repository as a variable to it. We also configure other variables so ECS can start our Rails app.

The definition of the DB migration task is almost the same; we only change the command that will be executed.

The load balancers

Before creating the Services, we need to create the load balancers. They will be on the public subnet and will forward the requests to the ECS service.

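A sketch of the load balancing resources, assuming HTTP-only traffic and illustrative variable names; note target_type = "ip", which Fargate tasks (awsvpc network mode) require:

```hcl
# Target group the ECS service will register its tasks into
resource "aws_alb_target_group" "alb_target_group" {
  name        = "${var.environment}-alb-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = var.vpc_id
  target_type = "ip" # Fargate tasks register by IP, not by instance
}

# Allow HTTP traffic from the internet to the load balancer
resource "aws_security_group" "web_inbound" {
  name   = "${var.environment}-web-inbound"
  vpc_id = var.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# The ALB lives in the public subnets
resource "aws_alb" "alb" {
  name            = "${var.environment}-alb"
  subnets         = var.public_subnet_ids
  security_groups = [aws_security_group.web_inbound.id]
}

resource "aws_alb_listener" "web" {
  load_balancer_arn = aws_alb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_alb_target_group.alb_target_group.arn
    type             = "forward"
  }
}
```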

In the file above, we define that our target group will use HTTP on port 80. We also create a security group that allows access to port 80 from the internet. Afterwards, we create the Application Load Balancer and the listener. To use Fargate, you should use an Application Load Balancer instead of a classic Elastic Load Balancer.

Finally, the ECS service

Now we will create the service. To use Fargate, we need to set the launch_type to FARGATE.

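A sketch of the service, assuming two tasks, the private subnets from the networking module, and security groups (ALB access plus database access) passed in as variables:

```hcl
resource "aws_ecs_service" "web" {
  name            = "${var.environment}-web"
  cluster         = aws_ecs_cluster.cluster.id
  task_definition = aws_ecs_task_definition.web.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    # SGs allowing traffic from the ALB and access to the database
    security_groups = var.security_groups_ids
    subnets         = var.private_subnet_ids
  }

  load_balancer {
    target_group_arn = aws_alb_target_group.alb_target_group.arn
    container_name   = "web"
    container_port   = 80
  }

  # The listener must exist before the service attaches to the ALB
  depends_on = [aws_alb_listener.web]
}
```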

Auto-scaling

Fargate allows us to auto-scale our app easily. We only need to create CloudWatch alarms on the metrics we care about and triggers to scale it up or down.

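A sketch of the scaling setup, assuming a range of 2 to 4 tasks and illustrative cooldowns and thresholds:

```hcl
# Scalable target: lets Application Auto Scaling adjust the desired count
resource "aws_appautoscaling_target" "web" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.cluster.name}/${aws_ecs_service.web.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 4
}

resource "aws_appautoscaling_policy" "scale_up" {
  name               = "${var.environment}-web-scale-up"
  service_namespace  = "ecs"
  resource_id        = aws_appautoscaling_target.web.resource_id
  scalable_dimension = aws_appautoscaling_target.web.scalable_dimension

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Maximum"

    step_adjustment {
      metric_interval_lower_bound = 0
      scaling_adjustment          = 1 # add one task
    }
  }
}

resource "aws_appautoscaling_policy" "scale_down" {
  name               = "${var.environment}-web-scale-down"
  service_namespace  = "ecs"
  resource_id        = aws_appautoscaling_target.web.resource_id
  scalable_dimension = aws_appautoscaling_target.web.scalable_dimension

  step_scaling_policy_configuration {
    adjustment_type         = "ChangeInCapacity"
    cooldown                = 60
    metric_aggregation_type = "Maximum"

    step_adjustment {
      metric_interval_upper_bound = 0
      scaling_adjustment          = -1 # remove one task
    }
  }
}

# Alarm: CPU above 85% for 2 periods scales up; returning to OK scales down
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "${var.environment}-web-cpu-high"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/ECS"
  period              = 60
  statistic           = "Maximum"
  threshold           = 85

  dimensions = {
    ClusterName = aws_ecs_cluster.cluster.name
    ServiceName = aws_ecs_service.web.name
  }

  alarm_actions = [aws_appautoscaling_policy.scale_up.arn]
  ok_actions    = [aws_appautoscaling_policy.scale_down.arn]
}
```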

We create 2 auto-scaling policies: one to scale up and another to scale down the desired count of running tasks in our ECS service.

Afterwards, we create a CloudWatch alarm based on CPU usage. If the CPU usage is greater than 85% for 2 consecutive periods, the alarm_action triggers the scale-up policy. When the alarm returns to the OK state, it triggers the scale-down policy.

The Pipeline to deploy our app

Our infrastructure to run our Docker app is ready, but deploying to ECS is still tedious: we need to manually push our image to the repository, update the task definition with the new image, and register the new revision. We could run it through Terraform, but it would be better if we could simply push our code to the master branch on GitHub and have it deployed automatically for us.

Enter CodePipeline and CodeBuild.

CodePipeline is a Continuous Integration and Continuous Delivery service hosted by AWS.

CodeBuild is a managed build service that can execute tests and generate packages for us (in our case, a Docker image).

With them, we can create pipelines to deliver our code to ECS. The flow will be:

  • You push code to the master branch.
  • CodePipeline fetches the code in the Source stage and calls the Build stage (CodeBuild).
  • The Build stage builds our Dockerfile, pushes the image to ECR, and triggers the Deploy stage.
  • The Deploy stage updates our ECS service with the new image.

Let’s define our Pipeline with Terraform:

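A sketch of the two resources, assuming the IAM roles and the S3 artifact bucket are created elsewhere in the module, and using the GitHub (version 1) source provider and a Docker-era curated CodeBuild image that this kind of setup relies on:

```hcl
# modules/code_pipeline/main.tf (sketch; variable names are illustrative)

resource "aws_codebuild_project" "app_build" {
  name          = "${var.environment}-docker-build"
  build_timeout = 10
  service_role  = var.codebuild_role_arn # IAM role with ECR + logs permissions

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/docker:17.09.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true # required to run the Docker daemon inside the build

    environment_variable {
      name  = "REPOSITORY_URL"
      value = var.ecr_repository_url
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = file("${path.module}/buildspec.yml")
  }
}

resource "aws_codepipeline" "pipeline" {
  name     = "${var.environment}-pipeline"
  role_arn = var.codepipeline_role_arn

  artifact_store {
    location = var.artifact_bucket
    type     = "S3"
  }

  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source"]

      configuration = {
        Owner      = "your-github-user" # change to your repository info
        Repo       = "your-repo"
        Branch     = "master"
        OAuthToken = var.github_token
      }
    }
  }

  stage {
    name = "Build"

    action {
      name             = "Build"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["source"]
      output_artifacts = ["imagedefinitions"]

      configuration = {
        ProjectName = aws_codebuild_project.app_build.name
      }
    }
  }

  stage {
    name = "Production"

    action {
      name            = "Deploy"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "ECS"
      version         = "1"
      input_artifacts = ["imagedefinitions"]

      configuration = {
        ClusterName = var.ecs_cluster_name
        ServiceName = var.ecs_service_name
        FileName    = "imagedefinitions.json"
      }
    }
  }
}
```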

In the above code, we create a CodeBuild project, using the following buildspec (build specifications file):

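A sketch of such a buildspec, assuming the us-east-1 region and the CLI-v1-era aws ecr get-login command that was current when this setup was written (AWS CLI v2 replaced it with get-login-password):

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      - pip install --upgrade awscli
      # Authenticate Docker against ECR (AWS CLI v1 syntax)
      - $(aws ecr get-login --no-include-email --region us-east-1)
      # Short commit hash of the source CodePipeline handed us
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
  build:
    commands:
      # REPOSITORY_URL is injected by Terraform via the CodeBuild environment
      - docker build -t $REPOSITORY_URL:latest .
  post_build:
    commands:
      - docker push $REPOSITORY_URL:latest
      # File consumed by the CodePipeline ECS deploy action
      - printf '[{"name":"web","imageUri":"%s"}]' $REPOSITORY_URL:latest > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
```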

We defined some phases in the above file.

  • pre_build: Upgrades the aws-cli and sets some environment variables: REPOSITORY_URL with the ECR repository URL and IMAGE_TAG with the CodeBuild source version. The ECR repository is passed in as a variable by Terraform.
  • build: Builds the Dockerfile from the repository, tagging the image as latest on the repository URL.
  • post_build: Pushes the image to the repository and creates a file named imagedefinitions.json with the following content: [{"name":"web","imageUri":"REPOSITORY_URL"}]. This file is used by CodePipeline to update your ECS service in the Deploy stage.
  • artifacts: Takes the file created in the previous phase and uses it as the build artifact.

Afterwards, we create a CodePipeline resource with 3 stages:

  • Source: Gets the repository from GitHub (change it to your repository information) and passes it to the next stage.
  • Build: Calls the CodeBuild project that we created in the previous step.
  • Production: Gets the artifact from the Build stage (imagedefinitions.json) and deploys it to ECS.

Let’s see them working together.

Running all together

The code with the full example is here.

Clone it. Also, since we use GitHub as the CodePipeline source provider, you need to generate a token to access the repositories. Read here to generate yours.

After generating your token, export it as an environment variable.

$ export GITHUB_TOKEN=YOUR_TOKEN

Now, we need to initialize Terraform, downloading the modules and the provider plugins.

$ terraform init

Now, let the magic begin!

$ terraform apply

It will display the resources that Terraform will create and ask if you want to continue.

Type yes.

Terraform will start creating our infrastructure.

Seriously, get a coffee until it finishes.

Awesome! Our infrastructure is ready! If you open CodePipeline in the AWS Dashboard, you can see that it has also triggered the first build:

Wait until all the Stages are green.

Get your Load Balancer DNS and check the deployed application:

$ terraform output alb_dns_name

It is working \o/

Finally, the app is running. Almost magic!

This was just an introductory post about ECS with Fargate using Terraform. If you have any questions about it, contact me.

Cheers 🍻
