Daniele Baggio

ECS Orchestration Part 1: Network

This is the first post in the ECS Orchestration series. In this part we begin by discussing the ECS network, which is a crucial topic when it comes to containerised applications.
An orchestrator such as ECS is typically used to manage microservices or other systems consisting of several applications using Docker containers. One of the main advantages of using Docker is the possibility of hosting multiple containers on a single server.
When networking containers on the same server, it is important to choose the appropriate network type to effectively manage the containers according to specific requirements.
This article examines the main network modes available in ECS and their advantages and disadvantages.

Host mode

Using host mode, the networking of the container is tied directly to the underlying host that's running the container. This approach may seem simple, but it is important to consider the following:
When the host network mode is used, the container receives traffic on the specified port using the IP address of the underlying host Amazon EC2 instance.
There are significant drawbacks to using this network mode. You can’t run more than a single instantiation of a task on each host. This is because only the first task can bind to its required port on the Amazon EC2 instance. There's also no way to remap a container port when it's using host network mode.

Host port mapping

An example of a task definition using the host network mode:

  {
    "networkMode": "host",
    "containerDefinitions": [
      {
        "essential": true,
        "name": "myapp",
        "image": "myapp:latest",
        "portMappings": [
          {
            "containerPort": 8080,
            "hostPort": 8080,
            "protocol": "tcp"
          }
        ],
        "environment": [],
        ....
      }
    ]
  }

Bridge mode

With bridge mode, you're using a virtual network bridge to create a layer between the host and the networking of the container. This way, you can create port mappings that remap a host port to a container port. The mappings can be either static or dynamic.

1. Static port mapping

With a static port mapping, you can explicitly define which host port you want to map to a container port.
If you want to control exactly which port on the host receives the traffic, static mapping might be a proper solution. However, it still has the same disadvantage as the host network mode: you can't run more than a single instantiation of a task on each host.
This is a problem when an application needs to auto scale, because a static port mapping only allows a single container to be bound to a given host port. To solve this problem, consider using the bridge network mode with a dynamic port mapping.

Static port mapping

An example of a task definition using the bridge network mode with static port mapping:

  {
    "networkMode": "bridge",
    "containerDefinitions": [
      {
        "essential": true,
        "name": "myapp",
        "image": "myapp:latest",
        "portMappings": [
          {
            "containerPort": 8080,
            "hostPort": 8080,
            "protocol": "tcp"
          }
        ],
        "environment": [],
        ....
      }
    ]
  }

2. Dynamic port mapping

You can specify a dynamic port binding by not specifying a host port in the port mapping of a task definition, allowing Docker to pick an unused random port from the ephemeral port range and assign it as the public host port for the container. This means you can run multiple copies of a container on the host, each with its own port on the host. Each copy of the container receives traffic on the same container port, while clients sending traffic to these containers use the randomly assigned host ports.

Dynamic port mapping

An example of a task definition using the bridge network mode with dynamic port mapping:

  {
    "networkMode": "bridge",
    "containerDefinitions": [
      {
        "essential": true,
        "name": "myapp",
        "image": "myapp:latest",
        "portMappings": [
          {
            "containerPort": 8080,
            "hostPort": 0, // 0 (or omitting hostPort) lets Docker pick a port dynamically
            "protocol": "tcp"
          }
        ],
        "environment": [],
        ....
      }
    ]
  }

So far so good, but one disadvantage of using the bridge network with dynamic port mapping is the difficulty in establishing communication between services. Since services can be assigned any port, it is necessary to open wide port ranges between hosts, and it is hard to create rules so that a particular service can only talk to another specific service: services do not have fixed ports that can be referenced in security group rules.
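
For example, if another service needs to call containers that use dynamic port mapping, the security group rule between the hosts has to allow the whole Docker ephemeral port range (for example 32768-61000) instead of a single service port. A minimal sketch with the AWS CLI, assuming hypothetical security group IDs:

aws ec2 authorize-security-group-ingress \
      --group-id <ECS_HOSTS_SG_ID> \
      --protocol tcp \
      --port 32768-61000 \
      --source-group <CLIENT_SG_ID> \
      --region <YOUR_REGION>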

Awsvpc mode

With the awsvpc network mode, Amazon ECS creates and manages an Elastic Network Interface (ENI) for each task and each task receives its own private IP address within the VPC. This ENI is separate from the underlying host's ENI. If an Amazon EC2 instance is running multiple tasks, then each task's ENI is separate as well.
The advantage of using awsvpc network mode is that each task can have a separate security group to allow or deny traffic. This means you have greater flexibility to control communications between tasks and services at a more granular level.
This means that if there are services that need to communicate with each other using HTTP or RPC protocols, we can manage the connection more easily and flexibly.
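
As a sketch of how this is wired up when creating a service (the service name, task definition, subnet ID, and security group ID below are placeholders), the security group is attached to the task through the service's network configuration rather than to the EC2 instance:

aws ecs create-service \
      --cluster <YOUR_CLUSTER_NAME> \
      --service-name myapp-service \
      --task-definition myapp \
      --desired-count 2 \
      --launch-type EC2 \
      --network-configuration "awsvpcConfiguration={subnets=[<YOUR_SUBNET_ID>],securityGroups=[<TASK_SG_ID>]}" \
      --region <YOUR_REGION> \
      --profile <YOUR_PROFILE_NAME>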

Awsvpc port mapping

An example of a task definition using the awsvpc network mode:

  {
    "networkMode": "awsvpc",
    "containerDefinitions": [
      {
        "essential": true,
        "name": "myapp",
        "image": "myapp:latest",
        "portMappings": [
          {
            // The task gets its own ENI, so the container behaves like a host:
            // the port you expose is the port you serve on.
            "containerPort": 8080,
            "protocol": "tcp"
          }
        ],
        "environment": [],
        ....
      }
    ]
  }

But when using the awsvpc network mode there are a few challenges to be mindful of. Each EC2 instance type can only attach a limited number of ENIs, so an instance cannot run more awsvpc tasks than its ENI limit allows. This has an impact when an application needs to auto scale: auto scaling may have to launch a new EC2 host instance just to place additional tasks, which can increase costs and waste computational power.
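
You can check the ENI limit for a given instance type with the AWS CLI; the instance type below is only an example, and keep in mind that one of the returned interfaces is always used as the instance's primary ENI:

aws ec2 describe-instance-types \
      --instance-types c5.large \
      --query "InstanceTypes[0].NetworkInfo.MaximumNetworkInterfaces" \
      --region <YOUR_REGION>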

How can one avoid this behavior?

When you choose the awsvpc network mode and need to increase the number of ENIs that can be allocated on the EC2 instances managed by the cluster, you can enable awsvpcTrunking.
Amazon ECS supports launching container instances with increased ENI density using supported Amazon EC2 instance types. When you use these instance types, additional ENIs are available on newly launched container instances. This configuration allows you to place more tasks using the awsvpc network mode on each container instance.
You can enable awsvpcTrunking as an account setting with the AWS CLI:

aws ecs put-account-setting-default \
      --name awsvpcTrunking \
      --value enabled \
      --profile <YOUR_PROFILE_NAME> \
      --region <YOUR_REGION>

To view the container instances with increased ENI limits, use the AWS CLI:

aws ecs list-attributes \
      --target-type container-instance \
      --attribute-name ecs.awsvpc-trunk-id \
      --cluster <YOUR_CLUSTER_NAME> \
      --region <YOUR_REGION> \
      --profile <YOUR_PROFILE_NAME>

It is important to know that not all EC2 instance types support awsvpcTrunking and certain prerequisites must be met to utilize this feature.
Please refer to the official documentation for further information.
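
You can also confirm that the setting is effective for your account (or for the IAM principal that launches your container instances) with the AWS CLI:

aws ecs list-account-settings \
      --name awsvpcTrunking \
      --effective-settings \
      --region <YOUR_REGION> \
      --profile <YOUR_PROFILE_NAME>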
Another thing to keep in mind when using ENI trunking is that each Amazon EC2 instance requires two IP addresses: one for the primary ENI and another for the trunk ENI. In addition, the ECS tasks placed on the instance also require IP addresses.
If you scale to a very large number of instances and tasks, there is a risk of running out of available IP addresses. This can cause Amazon EC2 startup errors or task startup errors, because no new IP addresses can be assigned to ENIs within the VPC when none are available.
To avoid this problem, make sure that the CIDR ranges of the subnets are large enough for your needs.
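
A quick way to keep an eye on this is to check how many free IP addresses are left in the subnets used by the cluster; the subnet ID below is a placeholder:

aws ec2 describe-subnets \
      --subnet-ids <YOUR_SUBNET_ID> \
      --query "Subnets[].AvailableIpAddressCount" \
      --region <YOUR_REGION> \
      --profile <YOUR_PROFILE_NAME>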

Note that if you use the Fargate launch type, awsvpc is the only supported network mode.

Conclusions

We have seen how the choice of network mode for container orchestration on ECS affects the scalability and connectivity of the services within the cluster. Each mode has behaviours that can be an advantage or a disadvantage depending on the use case.
For a microservices application managed by ECS, awsvpc is probably the best network mode to choose, because it makes it easy to scale the application and to implement service-to-service communication.
