Raphael Jambalos

Deploy Rails in Amazon ECS: Part 1 - Concepts

This is the first part of the Deploy Rails in Amazon ECS post. It's part of a broader series called More than 'hello world' in Docker. The series will get you from hello world in Docker to having your application deployed in AWS.

There is no shortage of tutorials on how to get started on Docker. However, when you start to set it up for production use, there aren’t a lot of tutorials that go in-depth on the things you have to do to get your app from displaying hello world to preparing it to be battered by traffic in production. This series is a deep dive into developing and deploying dockerized applications to AWS.

In the last post, we made a simple Rails 5 application that runs on top of Docker and Docker Compose. We will use that application here. If you decided to skip the last post, here is a copy of the source code to get you started. If you're just starting with Docker, I highly suggest beginning with the previous post. It covers core Docker concepts that are fundamental to understanding this series.

For this post, we will learn about the concepts required to get started with ECS.

0 | What is ECS and Why use it?

Amazon Elastic Container Service (ECS) is a high-performance container orchestration service that supports Docker. It allows you to run containerized applications on AWS, and it is AWS's home-grown alternative to Kubernetes (which originated at Google). AWS has integrated ECS with many of its other services, making it the obvious choice when you are running most of your workloads on AWS.

Why do we need an orchestration service?

Simple containerized applications are easy to handle. If you have one container for your app, that's all you have to manage.

However, production workloads aren't like that. The baseline capacity for a moderately busy workload is often at least 5 containers, and you have to make sure those 5 containers are running at any one time. Sometimes these containers aren't on the same instance but are spread across 2-3 instances. In this setup, deployments become a pain: you have to go to multiple instances, run X copies of the new version, test it manually, and then turn off the old ones.

You also have to manage how multiple containers share one instance. For traditional deployments, you'd run an application server on an instance, point it to port 80, and voila, you're done! It's now accessible via HTTP. But what if you run two containers on one instance, across many instances? You cannot bind both containers to the same port 80. So you'd probably put one server in front to load balance traffic among the containers on different instances. Even then, you'd have to manually register and deregister each container with that server.

Done this way, the maintenance work required just to make the infrastructure run doesn't justify moving to Docker. I'd seriously rather do a traditional deployment.

Much of that cost, though, is taken away with ECS as an orchestration service:

  • It provisions instances for you: you just say how many instances you'd like and what kind, and the instances it launches come ready for use.
  • ECS handles how to deploy containers across your fleet of EC2 instances.
  • It integrates with AWS's load balancing service so you don't have to manage your own load balancer. It also registers healthy containers with the load balancer so traffic can be routed to them, and deregisters unhealthy ones so traffic won't be routed to defective containers.
  • It handles deployments for you, using health checks to make sure your new version passes some sort of test before traffic is routed to it. Otherwise, it won't route traffic to the new tasks.
  • ECS itself is free! You just pay for the underlying resources (EC2 instances, storage, data transfer) your app uses.

1 | Concepts

Docker Registry

The first thing you need to do is push your Docker image to a Docker registry. A Docker registry is very similar to GitHub, but for Docker images: you can pull, push, and tag images. Tags are names we give to different versions of an image. You can push as many versions of an image as you like to an image repository as long as they have different tags (e.g., v0.0.1 and v0.0.2). If two pushes use the same tag (both tagged as v0.0.1), the later push overwrites the earlier one.

The most prominent registry out there is Docker Hub, which hosts most of the base images we use for building our own images. Docker Hub offers unlimited free public images but charges for storing private images. AWS has its own private registry: Elastic Container Registry (ECR). It currently charges $0.10/GB/month for storage, and it also charges for the traffic going out of ECR and into your instances (i.e., when your container pulls an image from ECR). We will be using ECR for this tutorial.
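
To make the tag-and-push workflow concrete, here's a rough sketch of what pushing our Rails image to ECR could look like from the command line. The account ID, region, and repository name below are placeholders for illustration; we'll do this for real in the next post.

```bash
# Log the Docker CLI in to ECR (account ID and region are placeholders).
aws ecr get-login-password --region ap-southeast-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com

# Tag the locally built image and push it under a versioned tag.
docker tag my-rails-app:latest 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-rails-app:v0.0.1
docker push 123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-rails-app:v0.0.1
```

Pushing the same repository path with a different tag (say, v0.0.2) adds a new version; pushing with the same tag overwrites the previous image under that tag.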

Task Definition

A Task Definition is a JSON file that defines a task and the containers inside it. Tasks are logical groupings of containers (e.g., an "image processing" task can have 1 Sidekiq container, 1 web container, and 1 Redis container). Containers in a task must be run together. In the image processing example, scaling the web container requires you to scale the Sidekiq and Redis containers as well.

I consider it best practice to have only one container per task - so you can scale the different parts of your application independently.

The task definition contains information like:

  • How much CPU and memory should a task have?
  • For each container inside the task:
    • How much CPU and memory does the container consume? The sum across all containers in the task must not exceed the CPU and memory allocated to the task.
    • How do you manage logs across containers?
    • What Docker image will the container use?
    • What environment variables should be visible to the container?
    • What command will be used to start the container? (for Rails, that's rails s to start the application server)

Since this configuration changes over the lifetime of an application, a task definition can have multiple versions, called revisions.
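
As a rough sketch, here's what a minimal single-container task definition for our Rails web container might look like, registered through the AWS CLI. The image URI, port, CPU/memory values, and environment variable are placeholder assumptions, not values from this series.

```bash
# Write a hypothetical task definition for the "web" task (all values are placeholders).
cat > web-task-definition.json <<'JSON'
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.ap-southeast-1.amazonaws.com/my-rails-app:v0.0.1",
      "cpu": 1024,
      "memory": 2048,
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "hostPort": 0 }],
      "environment": [{ "name": "RAILS_ENV", "value": "production" }],
      "command": ["rails", "s", "-b", "0.0.0.0"]
    }
  ]
}
JSON

# Registering the file creates revision 1 of the "web" task definition;
# registering it again after a change creates revision 2, and so on.
aws ecs register-task-definition --cli-input-json file://web-task-definition.json
```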

ECS Service

A Service guarantees that you always have X number of tasks running. When a container dies, the service spawns a replacement for you. The service is also responsible for deciding which instance, among your fleet of EC2 instances, each task gets deployed to.

When it spawns a container, the service makes sure the container is registered with the Application Load Balancer (ALB) attached to it (if there is any). An ALB distributes traffic across the many containers you have. Once a container is registered with the ALB, traffic can flow to it.

There are two types of services, corresponding to ECS's two launch types: ECS-EC2 and ECS-Fargate.

ECS-EC2 services run containers on top of EC2 instances that you have access to. ECS-Fargate services run containers on infrastructure that is fully managed by AWS; you have no access to the underlying instances. We will be using ECS-EC2 for this post.
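
As a hedged sketch, creating an ECS-EC2 service with the AWS CLI might look like the following, assuming the hypothetical "web" task definition from the previous section, a cluster named image-processing, and an already-created ALB target group (the ARN is a placeholder):

```bash
# Keep 2 copies of revision 1 of the "web" task definition running on the
# cluster, and register each task with the ALB target group so traffic
# can reach it (cluster name and target group ARN are placeholders).
aws ecs create-service \
  --cluster image-processing \
  --service-name web \
  --task-definition web:1 \
  --desired-count 2 \
  --launch-type EC2 \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:ap-southeast-1:123456789012:targetgroup/web/0123456789abcdef,containerName=web,containerPort=3000"
```

If a task dies, ECS notices the running count has dropped below 2 and starts a replacement on whichever instance has room.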

ECS Cluster

A cluster is a collection of services and tasks. When running ECS-EC2 services, a cluster is also a collection of EC2 instances. You set the number and type of EC2 instances that you want to provision. The sum of the CPU units and memory of these EC2 instances determines how many containers you can run inside your cluster.

For example, say we made a cluster for our image processing app. It has 2 ECS-EC2 services: a web service that renders the website, and an image processing service that resizes images. The cluster also has 3 c5.large instances, each with 2 vCPUs (2,048 CPU units) and 4GiB of memory. This gives us a total capacity of 6,144 CPU units and 12GiB of memory, shared among all ECS-EC2 services in the cluster.

Suppose our web service needs 1,024 CPU units and 2GB of RAM per task, and our image processing service needs 2,048 CPU units and 4GB of RAM per task. We can fit 2 image processing tasks and 2 web service tasks in these 3 c5.large instances: each image processing task fills an entire instance, and the remaining instance holds the 2 web tasks. If we decide to add 1 more web service task, we won't be able to, because all 3 c5.large instances already have their resources fully allocated to the 4 existing tasks.
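
As a rough sketch using the same hypothetical example, creating the cluster and inspecting its remaining capacity could look like this. Note that creating the cluster by itself doesn't launch the 3 c5.large instances; those are launched separately (for example, by the console wizard or an Auto Scaling group) and register themselves with the cluster.

```bash
# Create an empty cluster named after our hypothetical example.
aws ecs create-cluster --cluster-name image-processing

# List the EC2 container instances registered to the cluster, then check
# how many CPU units and how much memory each one still has available.
aws ecs list-container-instances --cluster image-processing
aws ecs describe-container-instances \
  --cluster image-processing \
  --container-instances <container-instance-arns-from-the-list-above> \
  --query 'containerInstances[].{registered:registeredResources,remaining:remainingResources}'
```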

What’s next?

In the next post, we will set up our AWS account and push an image to AWS’s image registry, Elastic Container Registry (ECR).
