
Preston "Brady" Adger

Diving into AWS ECS

Over the last month I've been working on a side project for one of my close friends. I wanted to take the opportunity to refamiliarize myself with an old API (Mapbox) as well as learn a new hosting strategy through AWS. For this project, I only needed to host a handful of services:

  • Web Server
  • API Server
  • Postgres Instance
  • Redis

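For local development, those four services could be sketched out in a docker-compose file like the one below. This is just an illustration of the stack's shape — the build paths, image tags, and ports are my own assumptions, not the author's actual setup:

```yaml
version: "3.8"
services:
  web:
    build: ./web            # hypothetical path to the web server's Dockerfile
    ports:
      - "3000:3000"
  api:
    build: ./api            # hypothetical path to the API server's Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a real secret in practice
  redis:
    image: redis:7
```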
Getting Started

If you've never delved into the world of AWS, it can be quite daunting. There are several paths to pick from to create your cloud. As this application is in its infancy (and may stay there for good), I didn't want to overengineer my stack by adding a ton of load balancers in between the client and the application servers, or between the servers themselves, as I can't imagine there would be enough traffic to warrant such an architecture.

However, I did want to learn how to manage such a microservice architecture, so I opted for Amazon's Elastic Container Service (ECS), which is Amazon's fully managed and highly scalable container service. At a high level, here are the steps I took to make my web application available to the world.

1). Containerize my application by creating a Dockerfile

2). Create an Elastic Container Registry (ECR) instance that would contain my application image.

3). Build my image on my machine and push that image to the ECR instance.

4). Create a Cluster instance on ECS.

5). Define a task definition. These are the crucial instructions that tell my ECS service how to build my container and its subsequent tasks.

6). Create the service. Creating a service spawns one or more tasks (as defined by your task definition). A task is what your application runs on. You can use either an EC2 instance (Amazon's virtual machine) or Fargate (Amazon's serverless... server?).
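The steps above can be sketched with Docker and the AWS CLI. This is a rough outline, not the author's exact commands — the repository name, region, account ID, cluster/service names, and network IDs are all placeholders:

```shell
# 2). Create an ECR repository to hold the image
aws ecr create-repository --repository-name my-app

# 3). Build locally, authenticate Docker to ECR, and push
docker build -t my-app .
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest

# 4). Create the ECS cluster
aws ecs create-cluster --cluster-name my-cluster

# 5). Register the task definition (the JSON file references the ECR image)
aws ecs register-task-definition --cli-input-json file://task-def.json

# 6). Create a Fargate service that runs one copy of the task
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-app \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}"
```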

Automated Deployments?

Now I have a task successfully running; HOORAY! Nothing could possibly go wrong whenever I try to build and push another image up to my ECR instance, right? riiiighht? Wrong. Being new to the world of containers, I didn't know about revisions. Revisions (in the context of AWS) are versions of the task definitions I explained previously. A revision includes your container along with the image it is supposed to run. This revision is then attached to a service, and that is what tells the service how to build its tasks. Whenever you push a new image to ECR, ECS doesn't automatically start running that new image; you need to create a new revision and update your service to use it. Rough.
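Doing this by hand looks roughly like the two commands below (cluster, service, and family names are placeholders):

```shell
# Register a new revision of the task definition pointing at the new image
aws ecs register-task-definition --cli-input-json file://task-def.json

# Point the service at it; with no :revision suffix,
# ECS uses the latest active revision of the family
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --task-definition my-app
```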

So how do I do this automatically?

Load Balancers

After some research I opted to use a GitHub Action that would let me push my code to a branch of my choosing; upon receiving the updated code, the workflow would automatically build my image, push it to AWS, create a new revision, AND assign it to the service. Voila. The service received the new revision and built the new task. The only problem is that the new task has a new IP address... shoot. Every time I make a deployment, the IP address of my web service changes. To fix this final issue I created a load balancer that sits between the client and the web server(s). AWS makes it pretty easy to create a load balancer, a target group, and the targets themselves.
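A minimal sketch of such a workflow, using AWS's official GitHub Actions, might look like this. The branch, secret names, file paths, and container/service/cluster names are assumptions for illustration:

```yaml
name: Deploy to ECS
on:
  push:
    branches: [main]   # the branch of your choosing

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      # Build and push the image, tagged with the commit SHA
      - run: |
          docker build -t ${{ steps.login-ecr.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.login-ecr.outputs.registry }}/my-app:${{ github.sha }}

      # Produce a new task definition revision pointing at the new image
      - id: render
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-def.json
          container-name: my-app
          image: ${{ steps.login-ecr.outputs.registry }}/my-app:${{ github.sha }}

      # Register the revision and update the service to use it
      - uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.render.outputs.task-definition }}
          service: my-service
          cluster: my-cluster
```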

Now my web application is accessed through the load balancer, which looks at the ECS service and its children (tasks), determines which ones are healthy, and routes traffic to them.
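The load balancer pieces can be sketched with the AWS CLI as well. Again, the names, ports, VPC/subnet/security-group IDs, and ARNs are placeholders:

```shell
# A target group of tasks; the health check decides which tasks receive traffic.
# --target-type ip is what Fargate tasks (awsvpc networking) require.
aws elbv2 create-target-group \
  --name my-app-tg --protocol HTTP --port 3000 \
  --vpc-id vpc-0abc --target-type ip \
  --health-check-path /

# The load balancer itself, spread across subnets
aws elbv2 create-load-balancer \
  --name my-app-alb \
  --subnets subnet-0abc subnet-0def \
  --security-groups sg-0abc

# A listener that forwards incoming traffic to the target group
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

When the ECS service is created with a `--load-balancer` option referencing the target group, ECS registers each new task's IP as a target automatically, so the changing task IPs stop mattering to clients.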

Conclusion

It's one thing to conceptualize how to implement your services and an entirely different beast to dive in and start implementing. I learned more about containerization, CI/CD, DevOps, and system administration in two days than I ever could have by just reading or looking at how other people have designed their stacks. So get out there and learn something new!
