When it comes to deploying Docker containers on AWS, developers have two main choices: Elastic Container Service (ECS) clusters backed by EC2 instances, and Fargate. But which one is right for your application? In this article, I look at the pros and cons of each - and discuss why we recently made a massive change to our own strategy at TinyStacks.
Article by Jay Allen
Docker containers have become so popular because they're a great way to package an application with all of the files, libraries, and configuration it needs to operate properly. On AWS, ECS provides an easy way to deploy, run, and manage Docker containers at any scale.
If you're unfamiliar with ECS, you'll want to check out the AWS documentation for an overview of key concepts. In brief, ECS runs Docker images by way of services, each composed of one or more tasks, with each task being a running instance of a specific Docker container.
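To make the service/task relationship concrete, here's a rough sketch of what a task definition - the blueprint ECS uses to launch each task - looks like. All of the names, the image, and the CPU/memory sizes below are illustrative placeholders, not values from ECS documentation or this article:

```python
# A minimal ECS task definition sketch. The family name, container name,
# image, and sizes are hypothetical placeholders for illustration only.
task_definition = {
    "family": "web-app",             # hypothetical name for this task's family
    "networkMode": "awsvpc",
    "containerDefinitions": [
        {
            "name": "web",           # container name within the task
            "image": "nginx:latest", # whatever Docker image your app ships as
            "cpu": 256,              # CPU units (1024 units = 1 vCPU)
            "memory": 512,           # hard memory limit in MiB
            "essential": True,       # task stops if this container stops
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

# A service then keeps a desired number of tasks from this definition running;
# ECS replaces any task that stops in order to maintain this count.
desired_count = 2
```

The service is the long-lived object you scale; the tasks are the disposable running copies of your container.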
Of course, running Docker containers requires machines to run them on. In ECS, this is abstracted into the idea of an ECS cluster: a logical grouping of services and tasks. Developers have two choices for creating and managing ECS clusters.
The first option is to create an Amazon EC2 cluster. In this scenario, you use an Amazon EC2 virtual machine image to launch one or more VMs hosted in your AWS account. You can then run tasks across the instances in your cluster.
The second, more recent choice is Fargate. With Fargate, the hardware and virtual machines on which your Docker containers run are managed completely by AWS as a "serverless" service.
It shouldn't come as a surprise that, as fundamentally different services - one server-based, one serverless - Fargate and EC2 clusters use different pricing models. With EC2 clusters, you pay for the EC2 compute capacity and Elastic Block Store (EBS) volumes you provision, whether your containers fully use them or not. By contrast, Fargate bills per second (with a one-minute minimum), with charges varying based on the amount of virtual CPU (vCPU) and memory your containers are allocated.
Fargate and EC2 clusters are different means to the same end: running your Docker containers in a scalable manner. But each can have advantages over the other, depending on your specific scenario.
As with any serverless service, the allure of Fargate comes in ease of management. If you manage your own EC2 clusters, you have to worry about a whole host of operational issues - VM security, operating system patching and maintenance, and uptime. Since Fargate uses capacity managed by AWS, you needn't worry about ensuring EC2 instances remain healthy and secure - AWS does this for you.
Using Fargate can also lead to operational efficiencies. With EC2 clusters, you run two key operational risks: underprovisioning, or not creating enough instances to meet the demands of your workload; and overprovisioning, or overpaying for capacity you end up not using. With Fargate, you pay only while your containers run - never for unused VM capacity.
However, that doesn't mean that Fargate is always the best choice. There are several compelling reasons why you may opt for using EC2 clusters instead.
The key advantage of EC2 clusters is price. While Fargate is easy and convenient, that convenience comes at a cost. Fargate has come under fire from the developer community for being expensive compared to EC2 clusters. Indeed, AWS itself has stated that the more you can maximize a cluster's vCPU and memory utilization, the more cost-effective EC2 clusters become.
Additionally, EC2 clusters may bring your customers additional peace of mind in terms of security. While AWS works hard to ensure complete isolation of tasks running on Fargate, companies in sensitive industries such as finance and health care may be wary about their workloads running alongside other arbitrary processes.
At TinyStacks, we work hard to provide an end-to-end deployment experience on AWS that frees development teams to focus on their application code - not on DevOps infrastructure. Since all TinyStacks-enabled applications are deployed as Docker containers running on ECS, we're very keen on optimizing our ECS usage for performance, scalability, and cost.
Initially, we used Fargate clusters exclusively for our DevOps stack deployments. However, after running some numbers, we concluded that shifting to our own EC2 clusters might be more cost-effective. We ran some tests using EC2 clusters with Amazon ECS cluster auto scaling enabled, scaling out our clusters when average CPU utilization stayed above 75% for 5 minutes, and scaling in when it stayed below that threshold for the same amount of time. We also configured ECS service auto scaling and kept it synchronized with cluster scaling.
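The threshold rule we used can be sketched as simple decision logic. The function below is an illustration of the policy, not code from our system or from ECS itself - in practice, CloudWatch alarms and capacity providers implement this for you:

```python
def scaling_decision(cpu_samples, threshold=75.0, window=5):
    """Return 'scale_out', 'scale_in', or 'hold' given per-minute cluster CPU
    utilization samples (percent), mirroring the 75%-for-5-minutes rule.

    Hypothetical sketch of the scaling policy described in the article.
    """
    if len(cpu_samples) < window:
        return "hold"  # not enough data to evaluate the full window yet
    recent = cpu_samples[-window:]
    if all(s > threshold for s in recent):
        return "scale_out"  # pegged above threshold for the whole window
    if all(s < threshold for s in recent):
        return "scale_in"   # sustained headroom; an instance can be removed
    return "hold"           # mixed readings; don't flap

print(scaling_decision([80, 82, 90, 78, 76]))  # scale_out
print(scaling_decision([60, 50, 40, 30, 20]))  # scale_in
```

Requiring the condition to hold for the full window, rather than reacting to a single sample, is what keeps the cluster from flapping between sizes.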
What we found was pretty astounding: by maximizing cluster utilization, we were able to reduce our ECS spend with EC2 clusters by 40% compared with Fargate. The smallest cost savings came with larger instances. An EC2 m5.xlarge with 4 vCPU and 16GiB of RAM came out to $138.24/month, compared to a similarly sized Fargate configuration at around $167.7888/month - an 18% cost difference. But the smallest instance size we used - a t3.nano with 2 vCPU and 0.5GiB RAM - was a mere $3.744/month. Compare that to Fargate's smallest configuration, 0.5 vCPU with 1GiB of memory, which cost us a full $17.7732/month - a 79% cost savings.
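For readers who want to check the arithmetic, the monthly figures above follow from a 720-hour (30-day) month and the us-east-1 on-demand rates in effect when we ran these tests. The hourly instance prices and Fargate per-vCPU/per-GB rates below are those assumed rates, so plug in current pricing for your own region before drawing conclusions:

```python
HOURS_PER_MONTH = 720  # 30-day month, as used in the figures above

# Assumed us-east-1 on-demand rates at the time of our tests
FARGATE_VCPU_HOUR = 0.04048   # $ per vCPU-hour
FARGATE_GB_HOUR = 0.004445    # $ per GB-hour of memory
EC2_HOURLY = {"m5.xlarge": 0.192, "t3.nano": 0.0052}  # $ per instance-hour

def fargate_monthly(vcpu, gb):
    """Monthly Fargate cost for a task allocated vcpu vCPUs and gb GB RAM."""
    return HOURS_PER_MONTH * (vcpu * FARGATE_VCPU_HOUR + gb * FARGATE_GB_HOUR)

def ec2_monthly(instance_type):
    """Monthly on-demand cost of one EC2 instance."""
    return HOURS_PER_MONTH * EC2_HOURLY[instance_type]

print(round(ec2_monthly("m5.xlarge"), 4))   # 138.24
print(round(fargate_monthly(4, 16), 4))     # 167.7888
print(round(ec2_monthly("t3.nano"), 4))     # 3.744
print(round(fargate_monthly(0.5, 1), 4))    # 17.7732
```

Note that the comparison assumes the EC2 instance is kept busy; a mostly idle m5.xlarge still costs $138.24/month, which is exactly the utilization point AWS's own guidance makes.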
Based on these results, we moved all of our ECS workloads from Fargate onto our own EC2 clusters. All of our customers now receive the benefits of EC2 cluster hosting for ECS - not just reduced cost, but also increased security and scalability. We believe these advantages made the decision a no-brainer.
In short, Fargate definitely has advantages in terms of ease of use and maintenance. But in terms of cost, EC2 cluster hosting for ECS is the clear winner.