Nicolas El Khoury for AWS Community Builders


Proposed Infrastructure Setup on AWS for a Microservices Architecture (2)

Chapter 2: Overview of the Infrastructure and Components.

Chapter 1 of this series explained the advantages and disadvantages of a Microservices architecture, along with the design considerations required to implement an infrastructure robust enough to host such architectures.

This chapter provides an overview of the proposed infrastructure and explains the different components used, along with the advantages they provide.

Proposed AWS Infrastructure

(Diagram: the proposed AWS infrastructure and its components)

  • Virtual Private Cloud (VPC): a private network, within the public cloud, that is logically isolated (hidden) from other virtual networks. Each VPC may contain one or more subnets (logical divisions of the VPC). There exist two types of subnets: public subnets, in which resources are exposed to the internet, and private subnets, which are completely isolated from the internet.

  • Amazon Application Load Balancer (ALB): An Application Load Balancer serves as a point of contact for clients. The load balancer evaluates, based on a set of predefined rules, each request that it receives, and redirects it to the appropriate target group. Moreover, the load balancer balances the load among the targets registered with a target group. A load balancer can be internet-facing (can be accessed from the internet) or internal (cannot be accessed from the internet). AWS provides three types of load balancers: Application Load Balancer, Network Load Balancer, and Classic Load Balancer.

  • Amazon CloudWatch: AWS’ monitoring tool for all the resources and applications on AWS. It collects and displays different metrics of resources deployed on AWS (e.g., CPU utilization, memory consumption, disk read/write, throughput, 5XX, 4XX, 3XX, 2XX, etc.). CloudWatch alarms can be set on metrics in order to generate notifications (e.g., send an alarm email) or trigger actions automatically (e.g., autoscaling). Consider the following alarm, sketched in code after this list: when the CPU utilization of instance A averages higher than 65% for three minutes (metric threshold), send an email to a set of recipients (notification) and create a new replica of instance A (scaling action).

  • Amazon S3: An AWS storage service to store and retrieve objects.

  • Amazon CloudFront: A Content Delivery Network (CDN) service that enhances the performance of content delivery (e.g., data, video, images, etc.) to the end user through a network of edge locations. CloudFront can be attached to an Amazon S3 bucket, or to any server that hosts data; it caches the objects stored on these origins and serves them to users upon request.

  • Lambda Functions: A serverless compute service that allows users to upload their code without having to manage servers; AWS handles all the provisioning of the underlying machines. Lambda functions are triggered by configured events, such as an object being put on S3, a message arriving on an SQS queue, or a recurring schedule.
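
To make the CloudWatch alarm example above concrete, here is a minimal sketch using Python and boto3. The instance ID, SNS topic, and scaling-policy ARNs are hypothetical placeholders, not values from this series:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm: average CPUUtilization of instance A above 65% for three
# consecutive one-minute periods (the "three minutes" threshold above).
cloudwatch.put_metric_alarm(
    AlarmName="instance-a-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance ID
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=65.0,
    ComparisonOperator="GreaterThanThreshold",
    # Both actions fire when the alarm state is reached: an SNS topic that
    # emails the recipients, and a scaling policy that adds a replica.
    AlarmActions=[
        "arn:aws:sns:us-east-1:123456789012:ops-alerts",  # hypothetical SNS topic
        "arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:p-123:autoScalingGroupName/app-asg:policyName/scale-out",  # hypothetical scaling policy
    ],
)
```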

The diagram above depicts an infrastructure in which multiple resources are deployed. Aside from S3, CloudFront, and CloudWatch, all the resources are created and deployed inside the VPC. More importantly, all of these resources reside in private subnets, as can be seen later in this article. Resources spawned in private subnets only possess private IPs, and therefore cannot be accessed directly from outside the VPC. Such a setup maximizes security: a database launched in a public subnet, no matter how strong its password, is at high risk of being breached directly (e.g., through a simple brute-force attack). A database launched in a private subnet, however, is practically nonexistent to anyone outside the VPC; even if it were not secured with a password, it would only be accessible to users inside the private network.
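
As an illustration of this public/private split, the following is a minimal sketch, using Python and boto3, of a VPC with one public and one private subnet. The CIDR blocks and region are hypothetical choices for the example:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC and two subnets: a public one (routed to an internet
# gateway) and a private one (no internet route at all).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
public_subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]
private_subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24")["Subnet"]

# Only the public subnet gets a route to the internet gateway; the private
# subnet's routing stays local, so its resources have no public path.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

public_rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(
    RouteTableId=public_rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(
    RouteTableId=public_rt["RouteTableId"],
    SubnetId=public_subnet["SubnetId"],
)
```
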
The communication between the application components, such as microservices and databases, passes through a load balancer. In more detail, each microservice, database, or other component is attached as a target group to a load balancer. Components that must be reachable from the internet are attached to an internet-facing load balancer, whereas backend components are attached to an internal load balancer. This approach maximizes the availability, load balancing, and security of the system. To better explain the aforementioned, consider the following example:

Assume an application composed of a front-end microservice, an API gateway microservice, a back-end microservice, and a database. Typically, the front-end and API gateway services should be accessible from the internet, and should therefore be attached as two target groups to the internet-facing load balancer. On the other hand, the back-end service and the database must never be accessed from the outside world, and are thus attached to the internal load balancer. Consider a user accessing the application and requesting a list of all the available products; below is the flow of requests that will traverse the network:

  1. Request from the user to the internet-facing load balancer.
  2. The load balancer routes the request to the front-end application to load the page in the user’s browser.
  3. The front-end application returns a response to the load balancer with the page to be loaded.
  4. The load balancer returns the response back to the user.

Now that the page is loaded on the user’s device, the page issues another request to fetch the available products.

  1. Request from the user to the internet-facing load balancer.
  2. The load balancer routes the request to the API gateway (a listener-rule sketch follows this list).
  3. The API gateway routes the request, through the internal load balancer, to the backend service that is supposed to fetch the products from the database.
  4. The backend service queries, through the internal load balancer, the products from the database.
  5. The response returns to the user following the same route taken by the request.
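
One way the internet-facing load balancer can tell these two flows apart is with path-based listener rules. Below is a minimal sketch, again with Python and boto3, that forwards /api/* requests to the API gateway’s target group, while the listener’s default action forwards everything else to the front-end. All ARNs are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Hypothetical ARNs for the public listener and the API gateway target group.
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/public-alb/aaa/bbb"
API_GATEWAY_TG = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-gateway/ccc"

# Forward /api/* requests to the API gateway service. Everything else falls
# through to the listener's default action (the front-end target group).
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": API_GATEWAY_TG}],
)
```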

If the loaded page contains files stored in an S3 bucket that is served through AWS CloudFront, the following steps are performed (a sketch of such a distribution follows the list):

  1. Request from the user to the CloudFront service requesting a file.
  2. CloudFront checks whether the file is cached in one of its edge locations. If found, the file is served directly back to the user.
  3. If missing, CloudFront fetches the file from S3, returns it to the user, and caches it at the edge location.
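
As a rough sketch of wiring CloudFront in front of a bucket, the following uses Python and boto3. The bucket name is hypothetical, and a production setup would also restrict bucket access so objects can only be read through the distribution:

```python
import uuid
import boto3

cloudfront = boto3.client("cloudfront")

# Minimal distribution with the S3 bucket as its only origin.
cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(uuid.uuid4()),  # any unique string
    "Comment": "CDN in front of the assets bucket",
    "Enabled": True,
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "assets-s3-origin",
        "DomainName": "my-assets-bucket.s3.amazonaws.com",  # hypothetical bucket
        "S3OriginConfig": {"OriginAccessIdentity": ""},
    }]},
    "DefaultCacheBehavior": {
        "TargetOriginId": "assets-s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # Legacy cache settings: do not vary the cache on query strings or cookies.
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
})
```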

Attaching the services as target groups to the load balancers provides multiple advantages (which will be explored in detail in the following chapter), namely security, by only letting through requests that match certain criteria, and load balancing, by distributing requests across all the registered replicas of the same service.
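
For instance, registering several replicas of a service with its target group is a single call, after which the load balancer spreads requests across them. A minimal sketch, with hypothetical ARN and instance IDs:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Register two replicas of the same service; the load balancer then
# balances incoming requests across both targets.
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/frontend/1a2b3c",  # hypothetical
    Targets=[
        {"Id": "i-0aaaaaaaaaaaaaaaa", "Port": 3000},  # hypothetical instance IDs
        {"Id": "i-0bbbbbbbbbbbbbbbb", "Port": 3000},
    ],
)
```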

In summary, this article provided a brief overview of the proposed infrastructure, how it operates, and the advantages it provides. The next chapter will describe in detail how microservices should be deployed in a secure, available, and scalable fashion, in addition to setting autoscaling policies and alarms.
