Deployment Strategies for Auto-Scaling and Load Balancing EC2 Instances in AWS

Ashan Fernando

Scalability is one of the most important aspects to consider for modern web application deployments. With the advancement of cloud technologies, it has never been easier to design for scalability: the elasticity of the cloud provides the foundation and flexibility to build practical solutions.

Using AWS, you can provision EC2 instances, pay only for the time they run, and automate scaling out (creating more instances) to meet demand, using tools such as Auto Scaling groups, Launch Configurations, Load Balancers, Amazon Machine Images (AMIs), and more.

However, several factors are involved in building a reliable and efficient scaling platform for your application deployments. Auto-scaling therefore needs to be designed to address the following:

  • Scaling efficiently on demand (reducing new instance provisioning time).
  • New deployment support with near zero downtime.
  • Rollback support.
  • Disaster recovery.

Auto-Scaling and Load Balancing Tools in AWS

There are several fundamental services required for auto-scaling and load-balancing deployments. Most are readily available as managed services from AWS.

  • Load Balancer - Balances the load coming from clients and distributes it fairly across the healthy EC2 instances. It also needs to identify unhealthy instances and stop sending them traffic. AWS offers multiple types of load balancers (Classic Load Balancer, Application Load Balancer & Network Load Balancer) for different purposes; the Application Load Balancer is the common choice for general web applications.
  • Auto-Scaling Tools - These tools identify when existing capacity is reaching its limits and trigger the provisioning of new instances (scale-out), and terminate instances (scale-in) when the load drops. AWS provides a combination of tools to support this, including CloudWatch (for monitoring), Auto Scaling groups, and Launch Configurations (instructions to bootstrap a new instance).
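To make this concrete, the pieces above can be wired together with a few AWS CLI calls. This is only a sketch: the resource names, AMI ID, instance type, subnets, and target group ARN are placeholders you would replace with your own.

```shell
# Sketch only: names, AMI ID and sizes are placeholders.
# Create a launch configuration that tells the Auto Scaling group
# how to bootstrap new instances.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.small

# Create the Auto Scaling group behind an Application Load Balancer
# target group, spanning two subnets.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-configuration-name my-app-lc \
  --min-size 2 --max-size 10 --desired-capacity 2 \
  --target-group-arns "$TARGET_GROUP_ARN" \
  --vpc-zone-identifier "subnet-aaaa,subnet-bbbb"

# Target-tracking policy: CloudWatch scales the group to hold
# average CPU utilization around 60%.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-app-asg \
  --policy-name cpu-target-60 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 60.0
  }'
```

With a target-tracking policy, CloudWatch handles the scale-out and scale-in decisions for you, so you do not have to define separate alarms for each direction.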

Using Custom AMIs

A common approach is to create an EC2 instance with the required operating system and application installed, and then create a Custom AMI (virtual machine image) from it. The custom AMI can then be referenced in the EC2 Launch Configuration, instructing the AWS auto-scaling tools to bootstrap new instances from it.
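A rough sketch of this flow with the AWS CLI is shown below; the instance ID, names, and version numbers are placeholders for illustration.

```shell
# Sketch only: instance ID and names are placeholders.
# Bake a custom AMI from a fully configured instance, then point a
# new launch configuration at it so fresh instances boot ready to serve.
AMI_ID=$(aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-app-v42" \
  --query 'ImageId' --output text)

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-lc-v42 \
  --image-id "$AMI_ID" \
  --instance-type t3.small

# Rolling back is a matter of pointing the group back at the
# previous launch configuration.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-configuration-name my-app-lc-v41
```

Because each deployment is a separate launch configuration referencing a separate AMI, rollback is simply a switch back to the previous pair.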

Pros

  • Straightforward to configure the provisioning rules.
  • Easy to rollback and update new instances (AWS provides support for rolling updates).
  • Support for deploying a large number of instances.
  • Relatively fast bootstrap process.

Cons

  • Requires a relatively complex build process: a new AMI must be built for each update.

However, it is also possible to start from a Custom AMI and perform minor configuration during the EC2 instance bootstrap process.

Dynamically Bootstrap EC2 Instances

In this approach, you can use a base AMI provided by Amazon that contains only the operating system and general software. There are several ways to bootstrap EC2 instances dynamically for an application deployment:

  • Use a base AMI from Amazon and bootstrap the application when the instance starts, using the AWS Launch Configuration.
  • Create a Custom AMI that is capable of loading a specified version of the application upon bootstrap.
  • Use a base Amazon AMI with Docker, where the container image is pulled or built when the instance bootstraps.
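The first approach typically relies on a user-data script attached to the Launch Configuration. The sketch below assumes the build artifacts live in an S3 bucket; the bucket name, paths, and service name are placeholders.

```shell
#!/bin/bash
# Sketch of a user-data bootstrap script (bucket, paths and service
# name are placeholders). Runs once when the instance first starts.

# Pin the application version so every instance in the group runs
# the same build; bump this value per deployment.
APP_VERSION="1.4.2"

# Pull the versioned build artifact from S3 and unpack it.
aws s3 cp "s3://my-app-releases/my-app-${APP_VERSION}.tar.gz" /tmp/app.tar.gz
mkdir -p /opt/my-app
tar -xzf /tmp/app.tar.gz -C /opt/my-app

# Start the application as a service.
systemctl start my-app
```

Note that the instance needs an IAM instance profile granting read access to the bucket for the `aws s3 cp` step to succeed.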

Pros

  • No need to keep an AMI for each deployment.
  • Base AMI software patching can be done independently from the application.
  • Straightforward to configure the provisioning rules.
  • The build process and deployment are straightforward (no AMI build required).

Cons

  • For large applications, bootstrapping can take more time, causing problems under spike loads (new instances take too long to provision and absorb the excess load).
  • Supporting rollback is difficult unless the bootstrap code keeps track of the deployment version (e.g. store each deployment in S3 and have the bootstrap process retrieve the relevant version).
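One way to make the bootstrap version-aware, sketched below, is to keep a "current version" pointer outside the instances, for example in SSM Parameter Store. The parameter name, bucket, and versions here are illustrative assumptions, not a prescribed layout.

```shell
# Deploy: publish the new build and move the version pointer.
aws s3 cp my-app-1.4.3.tar.gz s3://my-app-releases/
aws ssm put-parameter --name /my-app/current-version \
  --value "1.4.3" --type String --overwrite

# Bootstrap (inside the instance user data): read the pointer and
# fetch exactly that build.
APP_VERSION=$(aws ssm get-parameter --name /my-app/current-version \
  --query 'Parameter.Value' --output text)
aws s3 cp "s3://my-app-releases/my-app-${APP_VERSION}.tar.gz" /tmp/app.tar.gz

# Rollback: reset the pointer; instances provisioned afterwards
# (or replaced in a rolling fashion) pick up the previous build.
aws ssm put-parameter --name /my-app/current-version \
  --value "1.4.2" --type String --overwrite
```

Since old artifacts stay in S3, rolling back never requires a rebuild, only a pointer change and an instance refresh.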

However, using Docker can solve some of these deployment-versioning and rollback problems, at the cost of containerizing the application and maintaining a Docker image build process.
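With Docker, the deployment version collapses into an image tag. A minimal user-data sketch, assuming a base AMI with Docker preinstalled (the registry, image name, tag, and ports are placeholders):

```shell
#!/bin/bash
# Sketch of Docker-based user data (registry, image and tag are
# placeholders). The base AMI only needs Docker preinstalled; the
# application version is just an image tag.
IMAGE="my-registry.example.com/my-app:1.4.2"

# Pull the pinned image and run it. Rolling back means redeploying
# with the previous tag instead of rebuilding anything.
docker pull "$IMAGE"
docker run -d --restart always -p 80:8080 "$IMAGE"
```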

Top comments (3)

taragurung

Great! I would like to give you one simple situation and see how it is often solved.

Suppose I created an AMI with a WordPress setup done. Now I can scale another instance out of it, but the problem is that if the data changes in the first instance, the same data might not appear in the newly created one. It would be hectic to create a new AMI for every data change, or is there a way to make it automatic for each change?

How is this usually taken care of? Is EFS the only solution? How do experts do it?

Kasun Madura

In my experience, we could use Packer with a CI/CD tool like Concourse to set up a proper build pipeline. A dynamic configuration or automation tool like Ansible (or Chef or Puppet) will also solve a lot of problems in the auto-scaling process.

Ashan Fernando • Edited

Thanks for the response, Kasun. I hope it will be useful for readers when selecting the right tools for CI/CD and configuration management.

However, there are lots of tools for automation, both for CI/CD (e.g. Jenkins, the AWS tool stack including CodePipeline, CodeBuild, and CodeDeploy, CircleCI, etc.) and for configuration management, such as Chef, Puppet (AWS also provides the managed service OpsWorks, which supports Chef and Puppet), Ansible, Terraform, and the list goes on, including the ones you mentioned.

Based on my experience, most of these tools serve the purpose, each with pros and cons over the others. Their acceptance also changes with time, so the choice is somewhat subjective. For example, I used to use Jenkins and OpsWorks with Chef, but I am now moving to CodePipeline, CodeBuild, CodeDeploy, and Terraform.

However, the core strategies for efficient deployment are mostly governed by the support available from AWS (there are exceptions based on the nature of the applications). In this article, my focus was on these core strategies, which can be automated or easily managed using those tools.