Introduction
AWS offers strong support for running applications inside Docker containers. The whole management cycle is straightforward, and the real differentiator is the option to run containers in a serverless environment. With Fargate, we can spin the number of containers up and down without managing any host infrastructure.
ECS with Fargate gives you all the tools you need to build, deploy and maintain Docker containers using serverless technology.
This article walks you through all the steps needed to run a fully functional ASP.NET 6 application in an ECS environment. Since just running the app is rarely enough for modern needs, the end of the article also explains how to set up HTTPS with an SSL certificate on the load balancer and how to build a CI/CD pipeline. With that, you will have the full development cycle covered in ECS. The only thing left to do is write code!
Create a Dockerfile
Dockerfile example:
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy everything
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "SampleProject.Presentation.dll"]
This example assumes the Dockerfile is placed in the same directory where the solution file resides.
Build an image using the Dockerfile*
- This step can be skipped if you are directly building an image for ECR
Building a Docker image is a necessary step, since we will use the image to start a container in any environment. The command we want to use is docker build
An example of building the image would be:
docker build -f SampleProject/Dockerfile -t sample-project-image ./SampleProject
The -f flag specifies the path to the Dockerfile.
The -t flag tells Docker how we want to name (tag) our image.
The only positional parameter of docker build is PATH or URL. It provides the build context, or to be precise, the set of files needed to build the image. For more information, check out the docker build docs page.
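If you want a quick sanity check before pushing the image anywhere, you can run it locally. The host port 8080 below is just an example; container port 80 matches what the aspnet base image listens on by default.
# Run the image locally, mapping host port 8080 to container port 80
docker run --rm -p 8080:80 sample-project-image
# The app should now respond on http://localhost:8080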
Push the image to Repository (ECR)
Docker images must be stored in a repository from which we can pull them to run containers. In our case, we will use AWS ECR (Elastic Container Registry).
- To start, create a repository.
- Go to AWS Console and search for Elastic Container Registry service.
- Locate Create a repository button and click on it.
- Choose whether you want it to be Public or Private, give it a name, leave all other settings as they are and click the Create repository button
Next, go to the Repositories tab on the left and select your new repository via the radio button on the left side.
Above the list of repositories, there should be a button called View push commands. It gives you the instructions for pushing the Docker image to ECR based on the OS you are using.
After all steps are completed successfully, you should be able to see your image in the ECR repository you created earlier.
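For reference, the push commands typically look like the sketch below. The region, account ID and repository name are placeholders based on the examples in this article; replace them with the values shown for your own ECR repository.
# Authenticate the Docker CLI against your ECR registry
aws ecr get-login-password --region eu-west-2 | docker login --username AWS --password-stdin 123482667567.dkr.ecr.eu-west-2.amazonaws.com
# Build, tag and push the image
docker build -f SampleProject/Dockerfile -t sample-project-image ./SampleProject
docker tag sample-project-image:latest 123482667567.dkr.ecr.eu-west-2.amazonaws.com/sample-project-image:latest
docker push 123482667567.dkr.ecr.eu-west-2.amazonaws.com/sample-project-image:latest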
Create Task definition
A Task definition specifies which containers are included in your task, their settings, and how they interact with each other. You can also specify data volumes for your containers to use.
To start go to AWS Console and search for ECS:
- On the left side, choose Task definitions
- Click on Create new Task definition
- For launch type compatibility choose Fargate and click Next step button
- Give your task a name
- The task should have a Task role
- This is just an IAM role, which can be created directly from this page by clicking the link below the input field
- This role will be created with the AmazonECSTaskExecutionRolePolicy policy.
- This is fine for the most part, but we want logging to CloudWatch from our CI/CD pipeline later on. To be able to log to a different AWS service, we need to add an access policy for that particular service.
- Add the CloudWatchLogsFullAccess policy to the newly created IAM role (a CLI sketch for this follows after this list)
- You are all set now!
- Operating system family should be Linux
- For Task execution IAM role choose the same Role you created in a step above
- For Task size you can pick a value that is best for you, but for now, let's choose 0.5GB for memory and 0.5 vCPU for Task CPU
- Now it is time to add Container definitions
- Click on Add container button. This will toggle a new modal
- Give your container a name
- For Image, copy the value of your image link from ECR repository
- Set the memory soft limit to match the Task memory (512 MB for the 0.5 GB task size)
- For port mappings, you can add HTTP and HTTPS ports for example (80 and 443)
- In advanced container configuration section you can fine tune your container. Add environment files and variables, open volumes if necessary etc.
- For our app this is not necessary, so we will go straight to creation; click the Add button
- You can leave the remaining settings after the container definitions unchanged. For other use cases, these settings can of course be tuned.
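As mentioned in the IAM role step above, here is a minimal CLI sketch for attaching the CloudWatch Logs policy to the execution role. It assumes the role is named ecsTaskExecutionRole, which is what the console creates by default; adjust the name if yours differs.
# Attach the CloudWatchLogsFullAccess managed policy to the task execution role
aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess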
Create Service based on Task
A Service is where long-running tasks live. It is part of a Cluster in ECS.
First create a cluster in ECS.
For the cluster template use Networking only, since we want our cluster to use Fargate.
Give cluster a name on the next screen, leave everything else as default and click Create.
Go into your new cluster if you are not redirected automatically.
Locate Services tab.
On Services tab, click on Create button.
- For launch type, choose Fargate
- Operating system family should be Linux
- For Task definition, use the definition we created earlier
- Give your service a meaningful name
- Service type should be REPLICA
- Set the number of tasks you want to run by default (the number of containers that should be running at all times). For testing purposes, enter 1
- Minimum healthy percent should be 0. This means we are OK with having 0 tasks running while doing a deployment
- Maximum percent should be 100. This means at most 1 task (100% of the desired count) will run while we are deploying a new version
- Use default settings for rest and click Next step
In the next step, we configure the VPC (virtual private cloud) settings and the Load Balancer. You will need to create a Load Balancer first (explained below). After you create it, do the following:
- Now select VPC and for subnets, include all 3
- Next, we set up load balancing
- For Load balancer type choose Application Load Balancer
- In Load balancer name dropdown, choose the balancer we created earlier
- You should now see the container from task definition in Container to load balance section
- Click on Add to load balancer button to set up the container
- For production listener port use 80:HTTP from dropdown (we created this port listener when we created load balancer)
- For target group name choose the group we created when creating load balancer
- Now, click on Next step
- On Set Auto Scaling page, just click Next step
- Now you can review your service; once you are happy with everything, click Create service
- If the service is created successfully, click on View service button, and check out how the service is booted up and how tasks are starting to run.
Congratulations, you should now have the running service in place!
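For completeness, the service can also be created from the CLI. The sketch below roughly mirrors the console steps above; the cluster, service, task definition, subnet, security group and target group values are placeholders you would replace with your own.
# Create a Fargate service with 1 task behind the ALB target group (all IDs/ARNs are placeholders)
aws ecs create-service \
  --cluster sample-project-cluster \
  --service-name sample-project-service \
  --task-definition sample-task-definition \
  --desired-count 1 \
  --launch-type FARGATE \
  --deployment-configuration "minimumHealthyPercent=0,maximumPercent=100" \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxxxxxx],securityGroups=[sg-xxxxxxxx],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=<container-name>,containerPort=80"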
Create Load balancer
The load balancer will be used as the entry point to our application. It is also responsible for routing requests to one of the ECS service tasks.
To create a load balancer:
- Go to the EC2 service
- Locate Load Balancers section on the left and enter it
- Click Create Load Balancer button
- Choose Application Load Balancer as a type
- Give your ALB a name
- In Network mappings section check all 3 mappings
- Create a new Security group for the ALB and open ports 80 and 443, then choose it from the dropdown
- In Listeners and Routing section Add HTTP listener for port 80
- You will need to create a target group to forward to
- Use the Create target group link below the dropdown field. It will redirect you to a page where you can create a new target group
- Choose IP addresses for target type
- Give group a name and leave everything else as default
- Click on Next button
- On the next page leave everything as it is and click Create target group
- Now the target group can be selected from the dropdown
- After that, click Create load balancer button and that is it!
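If you prefer the CLI, a rough equivalent of the console steps above looks like this. The names, VPC, subnet and security group IDs are placeholders for this example.
# Create a target group with IP targets (required for Fargate tasks)
aws elbv2 create-target-group \
  --name sample-project-tg \
  --protocol HTTP --port 80 \
  --target-type ip \
  --vpc-id vpc-xxxxxxxx
# Create the Application Load Balancer across the three subnets
aws elbv2 create-load-balancer \
  --name sample-project-alb \
  --subnets subnet-aaaaaaaa subnet-bbbbbbbb subnet-cccccccc \
  --security-groups sg-xxxxxxxx
# Add an HTTP:80 listener that forwards to the target group
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>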
HTTPS Specifics
To use HTTPS with our new load balancer, we need an SSL certificate present in ACM (AWS Certificate Manager).
You can either request a certificate from Amazon (free) or import your own certificate.
More info here:
Requesting a public certificate
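If you would rather request the certificate from the CLI, a DNS-validated request looks roughly like this; the domain name is just a placeholder. ACM will then give you a CNAME record to add to your DNS zone to prove ownership.
# Request a public certificate for your domain, validated via DNS
aws acm request-certificate \
  --domain-name app.example.com \
  --validation-method DNS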
Once you have your verified certificate in ACM, go to load balancers in EC2 service:
- Select your load balancer
- Locate and go to Listeners tab
- Click on Add Listener button
- For protocol, choose HTTPS
- For action choose Forward and find your target group in dropdown
- In secure listener settings for Default SSL/TLS certificate choose your certificate from a dropdown
- Now click Add
Now, add a CNAME DNS record in your DNS settings. Use the domain you used when setting up the ACM certificate, and set the value of the record to the DNS name of your load balancer.
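A hypothetical example of such a record (your domain and the ALB DNS name will differ):
# Type    Name              Value
# CNAME   app.example.com   sample-project-alb-1234567890.eu-west-2.elb.amazonaws.com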
Everything is set up now, but HTTP to HTTPS redirection is not yet in place.
To do this, again, go to load balancers, select your load balancer and go to Listeners tab.
- In the Listeners table locate Rules column
- For HTTP:80 listener click on View/Edit rules in Rules column
- Click on Edit rules tab on the next page
- Click on Edit rule (pencil icon) button next to Listener name
- In Then section, edit the rule from Forward to Redirect to
- Select HTTPS and 443 for redirect values
- Click on circled checkmark icon to save the changes
Now, all the HTTP requests will be automatically redirected to our secure listener
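The same two changes can also be made from the CLI. The sketch below first adds the HTTPS:443 listener with your ACM certificate and then switches the existing HTTP:80 listener to a redirect; all ARNs are placeholders.
# Add an HTTPS listener that terminates TLS with the ACM certificate and forwards to the target group
aws elbv2 create-listener \
  --load-balancer-arn <load-balancer-arn> \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=<acm-certificate-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
# Change the HTTP:80 listener's default action from forward to a permanent redirect to HTTPS
aws elbv2 modify-listener \
  --listener-arn <http-listener-arn> \
  --default-actions '[{"Type":"redirect","RedirectConfig":{"Protocol":"HTTPS","Port":"443","StatusCode":"HTTP_301"}}]'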
CI/CD pipeline
We will use GitLab CI to set up the pipeline. The process automates everything we did above.
For the pipeline to work, some initial setup is required: the cluster, service and task definition must be created beforehand.
The CI/CD pipeline for this particular case is very easy to set up, since we only need two template files.
Variable fields are shown in <placeholder> format, and for each variable there is an example at the end.
The first template file is aws-ecs.json, which configures the AWS ECS service from code:
{
  "executionRoleArn": "<execution-role-arn>",
  "containerDefinitions": [
    {
      "memoryReservation": 1024,
      "environment": [
        {
          "name": "ASPNETCORE_ENVIRONMENT",
          "value": "Test"
        }
      ],
      "name": "<cluster->",
      "mountPoints": [],
      "image": "<image-link>",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-region": "<region>",
          "awslogs-stream-prefix": "ecs",
          "awslogs-group": "<log-group>"
        }
      },
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ]
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "512",
  "taskRoleArn": "<execution-role-arn>",
  "memory": "1024",
  "family": "aws-ecs"
}
Variables, examples and definitions:
- unique identifier (ARN) of the Task execution role (found in IAM), e.g. arn:aws:iam::123482667567:role/ecsTaskExecutionRole
- name of the cluster you created for this pipeline, e.g. sample-project-image
- unique image link found in the ECR repository, e.g. 123482667567.dkr.ecr.eu-west-2.amazonaws.com/sample-project-image
- region where you want your logs to reside, e.g. eu-west-2
- name of the log group in CloudWatch where the logs for this cluster will reside, e.g. /ecs/sample-project-logs
Settings like portMappings, cpu, memory and family can be changed based on your application's needs, but I won't be treating those as variables.
The second template file is .gitlab-ci.yml. It tells GitLab CI how to run the pipeline:
stages:
  - publish

variables:
  EXAMPLE_REPOSITORY_URL: <repo-url>
  REGION: <region>
  EXAMPLE_TASK_DEFINITION_NAME: <task-definition-name>
  EXAMPLE_CLUSTER_NAME: <cluster-name>
  EXAMPLE_SERVICE_NAME: <service-name>
  EXAMPLE_DESIRED_COUNT: <task-count>

publish:
  stage: publish
  image: docker:latest
  services:
    - docker:18-dind
  before_script:
    - apk add --no-cache curl jq python3 py3-pip
    - pip install awscli
    - date
    - docker version
  script:
    - $(aws ecr get-login --no-include-email --region "${REGION}")
    - echo "Building image"
    - docker build -f <path-to-Dockerfile> -t <image-name> <path-to-root-of-project>
    - echo "Tagging image"
    - docker tag <image-name>:latest "${EXAMPLE_REPOSITORY_URL}"
    - echo "Pushing image to ECR"
    - echo "${EXAMPLE_REPOSITORY_URL}"
    - docker push "${EXAMPLE_REPOSITORY_URL}"
    - aws ecs register-task-definition --region "${REGION}" --family "${EXAMPLE_TASK_DEFINITION_NAME}" --cli-input-json file://aws-ecs.json
    - aws ecs update-service --region "${REGION}" --cluster "${EXAMPLE_CLUSTER_NAME}" --service "${EXAMPLE_SERVICE_NAME}" --task-definition "${EXAMPLE_TASK_DEFINITION_NAME}" --desired-count "${EXAMPLE_DESIRED_COUNT}"
  only:
    - release/testing
Variables, examples and definitions:
- URL of the image in the ECR repository, e.g. 123482667567.dkr.ecr.eu-west-2.amazonaws.com/sample-project-image:latest
- Region where ECS is set up, e.g. eu-west-2
- Name of the Task Definition you created earlier, e.g. sample-task-definition
- Name of the Cluster you created earlier, e.g. sample-project-cluster
- Name of the Service in the Cluster you created earlier, e.g. sample-project-cluster
- Number of Tasks you want to run after deployment, e.g. 1
- Path to the Dockerfile, e.g. SampleProject/Dockerfile
- Name for the locally built image, e.g. sample-project-image
- Path to the root of the project (the build context), e.g. ./SampleProject
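One assumption worth calling out: the aws commands in the job only work if AWS credentials are available in the job environment. A common way to provide them is through GitLab CI/CD variables (Settings > CI/CD > Variables), which the runner exposes as environment variables and the AWS CLI picks up automatically:
# Standard AWS CLI environment variables, set as GitLab CI/CD variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION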