Nicolas El Khoury for AWS Community Builders

Different Ways of Deploying a Microservices Application on AWS

Introduction

Traditionally, applications were designed and implemented using a Monolithic architectural style, in which the application is developed and deployed as a single component, divided into multiple modules. Monolithic applications are very easy to develop and deploy.

However, such an architectural pattern becomes a burden once the application grows too large:

  1. Difficult to manage and maintain, due to the large codebase.
  2. The entire application is built using one programming language, so the system may suffer from bottlenecks when performing tasks unsuited to that language.
  3. Difficult to scale the application.
  4. Difficult to use container-based technologies, due to the large size of the application.

With the emergence of Cloud Computing and the concept of on-demand provisioning of resources, a more suitable architectural pattern was required. Microservices rapidly gained popularity, and became a widely used architectural pattern, especially for applications deployed on the cloud. Microservices are an architectural pattern that divides an application into smaller, independent, loosely coupled services that may communicate with each other via multiple protocols (e.g., HTTP, sockets, events). Microservices provide the following advantages:

  1. Easy to maintain (a smaller codebase in each service).
  2. Highly scalable.
  3. Extremely suitable for container-based technologies, and a natural complement to cloud solutions.
  4. Fault tolerant: if one microservice fails, the rest of the system remains functional.

Microservices vs Monolithic

Truly, the Microservices architecture is a very powerful pattern that goes hand in hand with the services provided by the cloud. However, a well-designed system depends on two factors: a robust design of the software, and a robust design of the underlying infrastructure. There exist multiple articles, tutorials, and courses that explain and promote the design and implementation of Microservices; this article focuses on the infrastructure side, namely the different ways such an application can be deployed on AWS.

Microservices Example Project

The NK Microservices project is a sample project built using the Microservices approach. It is used throughout this article to illustrate the differences between the deployment modes.

This project is made of the following components:

  1. Gateway Microservice: A REST API microservice built using SailsJS, which serves as a gateway and request router.

  2. Backend Microservice: A REST API microservice built using SailsJS, which serves as the first of many microservices that can be incorporated and integrated behind the aforementioned gateway service.

  3. Redis Database: An open source, in-memory data store, used for caching purposes, and for storing other ephemeral pieces of information such as JWT tokens.

  4. Arango Database: A multi-model database used for storing persistent information.

The project requires all of the aforementioned components to be set up in order to function properly.
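To make the gateway's request-routing role concrete, below is a minimal TypeScript sketch of a gateway that forwards incoming HTTP requests to the backend microservice. It is not the project's actual SailsJS code; the port numbers and the BACKEND_URL variable are illustrative assumptions.

```typescript
// Minimal gateway sketch: forward each incoming request to the
// backend microservice. Requires Node 18+ for the global fetch API.
import http from "node:http";

// Assumed backend location (hypothetical port); in a single-server
// deployment this resolves to the same machine.
const BACKEND_URL = process.env.BACKEND_URL ?? "http://localhost:1338";

const gateway = http.createServer(async (req, res) => {
  try {
    // Request bodies and headers are omitted for brevity; a real
    // gateway would stream both through to the backend.
    const upstream = await fetch(`${BACKEND_URL}${req.url}`, {
      method: req.method,
    });
    res.writeHead(upstream.status, {
      "content-type": upstream.headers.get("content-type") ?? "text/plain",
    });
    res.end(await upstream.text());
  } catch {
    // The backend is down: fail fast instead of hanging the client.
    res.writeHead(502);
    res.end("Backend unreachable");
  }
});

gateway.listen(1337, () => console.log("Gateway listening on :1337"));
```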

Deployment Modes

Deploying applications on robust infrastructure is key to the success of the product. Evidently, each application serves a specific purpose, and is designed uniquely using distinct technologies. In this regard, the underlying infrastructure for each application may differ based on application, business, and regulatory needs.

Usually, when deploying software, several considerations must be taken into account, some of which include:

  • Security: The system must be protected at all times against all sorts of unwanted access.

  • Scalability: The ability to scale resources up and down based on demand.

  • Availability: The system must be able to withstand failures, and avoid single points of failure.

  • System Observability: Tools that increase the system's visibility (monitoring, logging, tracing, etc).

In general, these deployment modes can be categorized as follows:

  • Single Server Deployment.
  • Multi Server Deployment.
  • Deployment using Container Orchestration Tools.

The rest of this document explains the details of each deployment type, its advantages, and disadvantages.

Single Server Deployment

As the title states, in this mode a single server hosts all the different components. In this case, all the components of the NK Microservices project (Arango Database, Redis, Backend, and API Gateway) are deployed and configured on the same server. A similar deployment can be found here. Typically, inter-service communication between the components can be done using the local machine's IP. Despite this advantage, this mode of deployment is strongly discouraged, except for local and temporary testing. Below is a list of disadvantages, explaining why it is advised to never proceed with single server deployments, especially for production workloads:

  • Time consuming: Properly installing and configuring each component may prove to be time consuming, especially as the number of components grows. The NK Microservices project is a relatively small project of 4 components, but imagine larger ones with 20+ components. In that case, such a deployment is definitely inefficient.

  • Non-scalable: Each component can be installed and configured to run as a standalone process. Evidently, on a single server, clustering databases and spinning up multiple replicas of the same process is not only worthless (the server remains a single point of failure), but also degrades the server's performance by consuming more resources.

  • Huge downtime: Any configuration or maintenance work, or the slightest error, may bring the whole server (and application) down until the server and all of its components are restored.

  • Single point of failure: No matter what disaster recovery mechanisms, security policies, or auto-repair mechanisms are in place, if the single server goes down, the whole application goes down with it.

  • May mess up the host machine: The different application components run on the same server, share the same resources, and write their output to the same disk. As the number of components on the server grows, along with the demand on each one, one component may consume the resources of the others, preventing them from operating safely.

In summary, single server deployments are only advised for personal use, and short term testing of small software applications.
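Still, to make the trade-offs concrete, here is a minimal sketch of what such a deployment might look like, assuming AWS CDK v2 in TypeScript. The instance sizing, image names, and install commands are illustrative assumptions, not the project's actual setup.

```typescript
// Hypothetical single-server deployment: one EC2 instance hosting
// Redis, ArangoDB, and both Node.js services (AWS CDK v2 sketch).
import { App, Stack } from "aws-cdk-lib";
import { aws_ec2 as ec2 } from "aws-cdk-lib";

const app = new App();
const stack = new Stack(app, "SingleServerStack");

// A minimal network for the demo.
const vpc = new ec2.Vpc(stack, "Vpc", { maxAzs: 1 });

// All four components share this one machine's resources -- the
// single point of failure discussed above. Image names are placeholders.
const userData = ec2.UserData.forLinux();
userData.addCommands(
  "yum install -y docker && systemctl start docker",
  "docker run -d --name redis -p 6379:6379 redis",
  "docker run -d --name arango -p 8529:8529 -e ARANGO_NO_AUTH=1 arangodb",
  "docker run -d --name backend -p 1338:1338 nk-backend",
  "docker run -d --name gateway -p 1337:1337 nk-gateway"
);

new ec2.Instance(stack, "SingleServer", {
  vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MEDIUM),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  userData,
});
```

Notice that every component's fate is tied to the single instance definition at the bottom.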

Multi Server Deployment

A better approach is to divide the project's components across multiple servers. Monolithic applications are usually developed using a three-tier architectural style (frontend, backend, and database), and a proper, intuitive deployment approach is to deploy each tier on a separate server. In the case of microservices, a similar approach can be taken, by dedicating a small server to each component of the application. Such a deployment mode, while not the best, is definitely a more convenient approach:

  • Separation of concerns: Processes are no longer sharing resources and risking each other's proper operation.

  • Isolation: Each component may operate on its own, and use dedicated resources. A proper design of the application may allow partial system operation in case of partial downtimes. For instance, if the Redis database crashes, the rest of the system should be fully operational.

  • Scalability: Database clustering and service replication are now possible, by replicating the servers of each component independently of the others.

Evidently, multi server deployments are definitely a better alternative to their single server counterparts. However, this approach tends to become cumbersome at scale, especially when it comes to managing the lifecycle of each component, and of each replica of that component. Handling a system of 4 components and a couple of replicas per service may be easy; properly handling a system of 30 components with 5 replicas per component is another matter.
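As a sketch of this mode, the same assumed CDK v2 setup can provision one small instance per component in a loop. The sizing and start commands remain illustrative placeholders.

```typescript
// Hypothetical multi-server deployment: one small EC2 instance per
// component of the NK Microservices project (AWS CDK v2 sketch).
import { App, Stack } from "aws-cdk-lib";
import { aws_ec2 as ec2 } from "aws-cdk-lib";

const app = new App();
const stack = new Stack(app, "MultiServerStack");
const vpc = new ec2.Vpc(stack, "Vpc", { maxAzs: 2 });

// Illustrative start commands, one per component (placeholder images).
const components: Record<string, string> = {
  Redis: "docker run -d -p 6379:6379 redis",
  Arango: "docker run -d -p 8529:8529 -e ARANGO_NO_AUTH=1 arangodb",
  Backend: "docker run -d -p 1338:1338 nk-backend",
  Gateway: "docker run -d -p 1337:1337 nk-gateway",
};

for (const [name, command] of Object.entries(components)) {
  const userData = ec2.UserData.forLinux();
  userData.addCommands(
    "yum install -y docker && systemctl start docker",
    command
  );

  // Each component is isolated on its own instance, so a crash or a
  // resource spike in one cannot starve the others.
  new ec2.Instance(stack, `${name}Server`, {
    vpc,
    instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
    machineImage: ec2.MachineImage.latestAmazonLinux2(),
    userData,
  });
}
```

The loop makes provisioning easy, but every replica added to this map is still a full server to patch, monitor, and eventually replace by hand, which is where the approach stops scaling.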

Deployment using Container Orchestration Tools

[Diagram: deploying the NK Microservices project using a container orchestration tool]

Container orchestration tools provide a framework for managing microservices at scale. Such tools manage the lifecycle of distributed application components from a centralized command server, and can be used for:

  • Provisioning, configuration, resource allocation, and deployment of services.
  • Container availability.
  • Scaling resources in and out.
  • Load balancing, traffic distribution, and routing.
  • Container health monitoring.
  • Managing inter-process communication.

There exist several mature tools that are widely used in the market, namely Kubernetes, Docker Swarm, and Apache Mesos. Describing these tools is out of the scope of this article.

[Diagram: the NK Microservices project deployed on Amazon EKS]

The diagrams above show how Amazon Elastic Kubernetes Service (EKS) permits provisioning several servers, and deploying the NK Microservices project on production-ready infrastructure composed of replicated servers and services. Such a deployment has several advantages (a provisioning sketch follows the list):

  • Easy scalability mechanisms for servers and containers.
  • Improved governance and security controls.
  • Better visibility on the system.
  • Container health monitoring.
  • Optimal resource allocation.
  • Management of the container lifecycle.
  • Cost optimization.
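For illustration, here is a minimal sketch of provisioning such a cluster and replicating one of the services on it, again assuming AWS CDK v2 in TypeScript. The node count, Kubernetes version, replica count, and image name are assumptions.

```typescript
// Hypothetical EKS deployment: a managed cluster plus a replicated
// Kubernetes Deployment for the gateway service (AWS CDK v2 sketch).
import { App, Stack } from "aws-cdk-lib";
import { aws_eks as eks } from "aws-cdk-lib";

const app = new App();
const stack = new Stack(app, "EksStack");

// Two worker nodes spread across availability zones. Newer CDK
// releases may also expect a kubectlLayer prop; omitted for brevity.
const cluster = new eks.Cluster(stack, "NkCluster", {
  version: eks.KubernetesVersion.V1_27,
  defaultCapacity: 2,
});

// EKS schedules these three gateway replicas across the nodes, and
// restarts or replaces any container that becomes unhealthy.
cluster.addManifest("gateway", {
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: "nk-gateway" },
  spec: {
    replicas: 3,
    selector: { matchLabels: { app: "nk-gateway" } },
    template: {
      metadata: { labels: { app: "nk-gateway" } },
      spec: {
        containers: [
          {
            name: "gateway",
            image: "nk-gateway", // placeholder image
            ports: [{ containerPort: 1337 }],
          },
        ],
      },
    },
  },
});
```

Scaling the gateway then becomes a one-line change to replicas, rather than provisioning and configuring another server.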

Conclusion

In brief, this article summarized the different modes of deployment available for microservices on AWS. Single server deployments are usually not advised, except for personal use and local, temporary testing. Multi server deployments represent a better alternative, especially at small scale. However, such a deployment may become cumbersome as the system grows, and should be replaced by a more convenient mode, such as container orchestration tools (at the expense of added complexity).
