
Nicolas El Khoury for AWS Community Builders


Proposed Infrastructure Setup on AWS for a Microservices Architecture (3)

Chapter 3: Deployment Strategy for Microservices.

Chapter 2 provided an overview of the proposed infrastructure and explained the different components used, along with their advantages. However, the aforementioned infrastructure is only as robust as the environment hosting the microservices. In fact, an improper deployment of microservices may lead to numerous problems: bottlenecks, single points of failure, increased downtime, and many more.

This chapter promotes one way of deploying microservices, along with some best practices, in order to achieve security, scalability, and availability.

AWS Region

To further illustrate the proposed solution, the diagram above represents a Virtual Private Cloud (VPC) located in the Ireland region (eu-west-1) (the details of creating a VPC and its underlying components are out of the scope of this article). A VPC created in a region may span one or more Availability Zones (AZs). Each Availability Zone represents a distinct data center in the same region. While regions are isolated from one another, Availability Zones within a single region are connected to each other through low-latency links. For simplicity, assume that this region comprises two Availability Zones (eu-west-1a and eu-west-1b).
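The layout described above can be sketched as plain data. This is a minimal sketch, not real provisioning code: the CIDR blocks and subnet names are hypothetical, though the dict keys mirror the parameters that boto3's `ec2.create_vpc` and `ec2.create_subnet` accept.

```python
# Illustrative topology: one VPC in eu-west-1 with a public and a private
# subnet in each of the two Availability Zones. CIDR blocks are made up.
VPC = {"CidrBlock": "10.0.0.0/16", "Region": "eu-west-1"}

SUBNETS = [
    {"Name": "public-a",  "AvailabilityZone": "eu-west-1a", "CidrBlock": "10.0.0.0/24", "Public": True},
    {"Name": "private-a", "AvailabilityZone": "eu-west-1a", "CidrBlock": "10.0.1.0/24", "Public": False},
    {"Name": "public-b",  "AvailabilityZone": "eu-west-1b", "CidrBlock": "10.0.2.0/24", "Public": True},
    {"Name": "private-b", "AvailabilityZone": "eu-west-1b", "CidrBlock": "10.0.3.0/24", "Public": False},
]

# Sanity check: both AZs are covered, each with one public and one private subnet.
azs = {s["AvailabilityZone"] for s in SUBNETS}
assert azs == {"eu-west-1a", "eu-west-1b"}
```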

In each AZ, a public subnet and a private subnet are created. In Chapter 2, we clearly stated that all of the microservices and other backend components are to be created in private subnets, and never in public ones, even components or services that must be accessible from the internet (e.g., frontend applications, API gateways, etc.). Even though resources in private subnets cannot, by default, be reached from the internet, attaching them to an internet-facing load balancer is enough to expose them. Therefore, microservices that must be accessed by users outside the VPC must be attached to an internet-facing load balancer, and the others must be attached to an internal load balancer. Evidently, all microservices communicate with one another through the load balancers, and never through IP addresses. Such communication through load balancers not only ensures a balanced load across multiple replicas of one service, but also has multiple other advantages that will be discussed later in this article.
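The internet-facing versus internal decision can be expressed as a small rule. The service names below are invented for illustration; the two scheme strings, however, are the actual values of the `Scheme` parameter accepted by `elbv2.create_load_balancer` in boto3.

```python
# Minimal sketch: pick a load balancer scheme per microservice.
def lb_scheme(publicly_accessible: bool) -> str:
    """Internet-facing only for services reached by users outside the VPC."""
    return "internet-facing" if publicly_accessible else "internal"

# Hypothetical services and whether end users reach them directly.
services = {
    "frontend-app": True,   # served to end users
    "orders-api":   False,  # called only by other microservices
}

schemes = {name: lb_scheme(public) for name, public in services.items()}
```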

One must always take advantage of the existence of multiple AZs in a region. Thus, when deploying a microservice, it is always advisable to deploy multiple replicas of it and to spread these replicas across multiple AZs. For instance, when deploying one microservice in the VPC above, a good approach would be to deploy two replicas of this service, one in each private subnet. In case of any failure, be it at the microservice, instance, or AZ level, the application still has another running instance ready to serve requests until the failing component is restored.
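The spreading strategy amounts to round-robin placement across AZs. A minimal sketch, with hypothetical service and AZ names:

```python
from itertools import cycle

def place_replicas(service: str, count: int, azs: list[str]) -> list[tuple[str, str]]:
    """Assign each replica to an AZ in round-robin order, so that a single
    AZ failure never takes down every replica (as long as count >= len(azs))."""
    return [(f"{service}-{i}", az) for i, az in zip(range(count), cycle(azs))]

placement = place_replicas("microservice-a", 2, ["eu-west-1a", "eu-west-1b"])
# One replica lands in each private subnet's AZ.
```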

Consider the following three deployment scenarios, which illustrate the importance of these guidelines.

  1. Microservice A is deployed as 1 replica in private subnet (a). A failure at the microservice level is enough to cause unwanted downtime: should this microservice fail, no other replica exists to serve requests until it recovers. (Failed approach)

  2. Microservice A is deployed as 2 replicas, both in private subnet (b). While this deployment ensures more than one replica of the service, both replicas are located in the same subnet, and therefore in the same Availability Zone. In this case, the application is protected from a failure at the microservice level, since another replica is ready to serve requests. However, a failure at the AZ level is enough to bring the service down. (Failed approach)

  3. Microservice A is deployed as 2 replicas, one in private subnet (a) and another in private subnet (b). With such a deployment, the only way to suffer downtime is for the whole VPC to go down, which is very unlikely. Each replica of the service is located in a different data center, so a failure at the microservice level, and even at the data center level, is mitigated. (Successful approach)
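A back-of-the-envelope calculation makes the difference between the scenarios concrete. This sketch only models AZ-level failures, assumes each AZ fails independently with probability p (the value below is purely illustrative), and treats the service as down only when every AZ hosting a replica fails:

```python
def outage_probability(replica_azs: list[str], p: float) -> float:
    """Probability that every AZ hosting at least one replica is down."""
    return p ** len(set(replica_azs))

p = 0.01  # hypothetical per-AZ failure probability

scenario_1 = outage_probability(["eu-west-1a"], p)                # 1 replica, 1 AZ
scenario_2 = outage_probability(["eu-west-1b", "eu-west-1b"], p)  # 2 replicas, same AZ
scenario_3 = outage_probability(["eu-west-1a", "eu-west-1b"], p)  # 2 replicas, 2 AZs
```

Under this model, scenarios 1 and 2 are equally exposed to an AZ outage, while scenario 3 requires both AZs to fail simultaneously, which is why spreading replicas across AZs is the successful approach.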

The three scenarios above illustrate the importance of replicating microservices and spreading the replicas as widely as possible in order to provide reliability and fault tolerance. What follows explains the importance of attaching all the replicas of every microservice to a load balancer. Assume a service with two replicas. Attaching the replicas as a target group to a load balancer provides the following advantages:

  1. Load balancing across all replicas.

  2. Detecting and reporting failed replicas: The load balancer performs regular health checks on each replica. In case of a failure in one of the replicas, the load balancer stops forwarding requests to it. Alarms can be set using CloudWatch to report such incidents.

  3. Ability to easily scale replicas: AWS provides multiple mechanisms to scale a service in and out, and automatically adds new instances to, or removes terminated ones from, the target group.

  4. Service discovery: The load balancer eliminates the need for a dedicated service-discovery tool, since each distinct service is attached to the load balancer as a target group. The Application Load Balancer (ALB) in particular supports multiple routing rules, such as host-based and path-based routing.
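The service-discovery point can be sketched in miniature. The rule shapes below loosely mirror the `host-header` and `path-pattern` conditions accepted by the ALB's `create_rule` API, but the hostnames, paths, and target group names are all invented, and the matching logic is a simplified stand-in for what the ALB does:

```python
from fnmatch import fnmatch

# Hypothetical ALB-style routing rules: each target group is selected
# by host or by path, replacing a separate service-discovery tool.
RULES = [
    {"Condition": {"Field": "host-header",  "Values": ["api.example.com"]}, "TargetGroup": "orders-api"},
    {"Condition": {"Field": "path-pattern", "Values": ["/users/*"]},        "TargetGroup": "users-api"},
]

def route(host: str, path: str):
    """Return the target group this request would be forwarded to, if any."""
    for rule in RULES:
        field = rule["Condition"]["Field"]
        values = rule["Condition"]["Values"]
        if field == "host-header" and host in values:
            return rule["TargetGroup"]
        if field == "path-pattern" and any(fnmatch(path, v) for v in values):
            return rule["TargetGroup"]
    return None  # no rule matched; a real ALB falls back to a default action
```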

In brief, this article explained best practices for choosing a deployment mode for microservices. The proposed solution maximizes the security, availability, and reliability of the microservices. The next chapter will describe the different technologies on which microservices can be hosted.
