
Mohammad Quanit for AWS Community Builders


All You Need To Know About AWS Compute Services

Before diving into AWS compute services, we should understand what compute means in general. Compute resources can be thought of as the processing power an application or system needs to carry out computational tasks as a series of instructions.
These resources cover a range of different services and features.
In simple terms, compute in cloud computing refers to processing power, memory, and networking. Whether it is a physical server in an on-premises data center, a virtual server provided by a cloud provider, containers running in virtual machines, or code running in a serverless model, all of these are considered compute resources.

Amazon Web Services (AWS) provides a range of compute services for managing workloads, from a single instance to fleets of hundreds of servers running for months or years.

Here are the compute services that AWS provides for different kinds of use cases, each of which we discuss in detail below.

  1. Amazon Elastic Compute Cloud (EC2)
  2. Amazon Elastic Container Registry (ECR)
  3. Amazon Elastic Container Service (ECS)
  4. Amazon Elastic Kubernetes Service (EKS)
  5. AWS Elastic Beanstalk
  6. AWS Lambda
  7. Amazon Lightsail
  8. AWS Batch

AWS EC2

EC2 - Elastic Compute Cloud is one of the most popular and widely used compute services that AWS provides. EC2 allows you to deploy virtual servers within your AWS environment. You can think of an EC2 instance as a virtual machine running in AWS physical data centers, independent of your local environment.

The EC2 service can be broken down into the following components.

Amazon Machine Images (AMI)

AMIs are images or templates of preconfigured EC2 instances that allow you to quickly launch EC2 servers based on a known configuration.

Instance Types

Once you have selected an AMI, you need to choose which EC2 instance type to use. AWS provides a huge range of options, divided into instance type families that offer distinct performance benefits. You can read more about instance types in the AWS documentation.

Instance Purchasing options

AWS also offers a variety of instance purchasing options and payment plans, such as On-Demand, Reserved, and Spot Instances. They are designed to help you reduce costs by selecting the option most appropriate for your deployment. You can read more about purchasing options in the AWS documentation.

User Data

During the launch of an EC2 instance, there is an option that allows you to enter commands that will run during the first boot cycle of the instance. This is a great way to automatically perform actions you want to execute at instance startup.

Storage

As part of launching an EC2 instance, you are asked to configure storage. Since storage is a crucial part of any server, you specify a volume size in GiB for persisting the instance's data.

Security

Security is a fundamental part of any AWS deployment. During the launch of an EC2 instance, you are asked to create or attach a security group to your instance. A security group is essentially an instance-level firewall for managing inbound and outbound traffic to your EC2 instance.
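
To make the pieces above more concrete, here is a minimal sketch of launching an instance with the AWS SDK for Python (boto3). The AMI ID, key pair name, and security group ID are placeholders you would replace with your own values; the user data script simply installs a web server on first boot.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # AMI: the preconfigured template (placeholder ID)
    InstanceType="t3.micro",                    # Instance type: family/size of the virtual server
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # placeholder key pair for SSH access
    SecurityGroupIds=["sg-0123456789abcdef0"],  # Security group: instance-level firewall (placeholder)
    UserData=(                                  # User data: commands run during the first boot cycle
        "#!/bin/bash\n"
        "yum update -y\n"
        "yum install -y httpd\n"
        "systemctl start httpd\n"
    ),
    BlockDeviceMappings=[{                      # Storage: size of the root volume in GiB
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"},
    }],
)

print(response["Instances"][0]["InstanceId"])
```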

AWS ECS

ECS - Elastic Container Service allows you to run container-based applications across a cluster of EC2 instances without requiring you to operate a complex and administratively heavy cluster management system. You can deploy, manage, and scale containerized applications using ECS.
You don't have to install any software for managing and monitoring these clusters; as a managed service, AWS handles that for you. ECS provides two ways to launch an ECS cluster.

  1. Fargate Launch
  2. EC2 Launch

Fargate Launch
With Fargate, you specify the CPU and memory required, define networking and IAM policies, and package your application into containers; a boto3 sketch of this launch type follows below.

EC2 Launch
With the EC2 launch type, you are responsible for patching and scaling your instances, and you can specify which instance types to use and how many containers should run in the cluster.
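
Here is a rough boto3 sketch of the Fargate launch type: register a task definition and run it on an existing cluster. The cluster name, subnet, security group, and execution role ARN are placeholders, and the public nginx image is used purely for illustration.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Task definition: the CPU/memory the task needs and which container image to run.
ecs.register_task_definition(
    family="demo-web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder role
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",
        "essential": True,
        "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
    }],
)

# Run the task on Fargate -- no EC2 instances to patch or scale yourself.
ecs.run_task(
    cluster="demo-cluster",                     # placeholder cluster name
    taskDefinition="demo-web",
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet
        "securityGroups": ["sg-0123456789abcdef0"],   # placeholder security group
        "assignPublicIp": "ENABLED",
    }},
)
```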

AWS ECR

ECR - Elastic Container Registry links closely with the previously discussed service, ECS. It provides a secure location to store and manage the Docker images that can be deployed across your applications. ECR is a fully managed AWS service, meaning you don't have to create or manage any infrastructure to run the registry. You can think of it as a Docker Hub for AWS.
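
As a small illustration, here is a boto3 sketch of creating a repository; the repository name is an arbitrary example. Pushing images afterwards is normally done with the Docker CLI, using credentials obtained via get_authorization_token.

```python
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Create a private repository (the name is a hypothetical example).
repo = ecr.create_repository(repositoryName="my-app")
print(repo["repository"]["repositoryUri"])
# The URI looks like <account-id>.dkr.ecr.us-east-1.amazonaws.com/my-app;
# you would then `docker tag` and `docker push` your image to it.
```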

AWS EKS

EKS - Elastic Kubernetes Service allows you to run and manage your infrastructure in a Kubernetes environment. Kubernetes is an open-source container orchestration tool designed to automate the deployment, scaling, and operation of containerized applications across worker nodes. It is designed to scale from tens to thousands, or even millions, of containers. Two main components, the Kubernetes control plane and the worker nodes, manage the overall flow in EKS.

Kubernetes Control Plane
A number of different components make up the control plane, including several APIs. Its job is to manage the cluster, make scheduling decisions, and handle communication with your nodes. With EKS, AWS runs the control plane for you.

Worker Nodes
Kubernetes clusters are composed of nodes. A node is a worker machine in Kubernetes; in EKS it runs as an on-demand EC2 instance and includes the software needed to run containers managed by the Kubernetes control plane.
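
For a sense of how the control plane is provisioned, here is a hedged boto3 sketch of creating an EKS cluster. The IAM role and subnet IDs are placeholders, and in practice you would also add managed node groups (the worker nodes) or Fargate profiles before running workloads.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create the managed control plane (placeholder role ARN and subnet IDs).
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
)

# The control plane takes several minutes to move from CREATING to ACTIVE.
print(eks.describe_cluster(name="demo-cluster")["cluster"]["status"])
```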

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a fully managed AWS service that allows you to upload the code of your web application and automatically deploys and provisions the resources required to make your application functional. Although it is a managed service, it still gives you options to take control of the underlying resources, such as EC2 instances, Auto Scaling groups, load balancers, software, and databases. For every application you need to create an environment, which manages those resources in the form of a CloudFormation stack; yes, it uses CloudFormation to create and provision your environment's resources. Elastic Beanstalk is a brilliant service if you just want to upload your code and show it off as a prototype.
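
Here is a minimal boto3 sketch of that flow: register a zipped code bundle (assumed to already be in S3) as an application version and create an environment for it. The bucket, key, and solution stack name are placeholders; list_available_solution_stacks returns the exact stack names available in your region.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.create_application(ApplicationName="demo-app")

# Point Beanstalk at a zipped source bundle already uploaded to S3 (placeholders).
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "demo-app-v1.zip"},
)

# Creating the environment provisions EC2, auto scaling, load balancing, etc.
# behind the scenes via a CloudFormation stack, as described above.
eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2023 v6.1.0 running Node.js 18",  # example name only
)
```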

AWS Lambda

AWS Lambda is a serverless compute service designed to let you run your code (as functions) without having to manage or provision EC2 servers. Serverless means you do not have to manage compute resources yourself; AWS does the heavy lifting for your application. It obviously uses servers under the hood, so it is serverless only from the user's perspective. If you don't have to spend time operating, managing, patching, and securing an EC2 instance, you have more time to focus on your application code and business logic, while optimizing costs at the same time. With AWS Lambda, you only pay for compute power while your Lambda functions are actually running.
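
A minimal Python handler shows the programming model: Lambda invokes your function with an event payload and a context object, and you pay only for the time it runs.

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger's payload (API Gateway request, S3 notification, etc.).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You would package this file and configure lambda_handler as the function's handler; triggers such as API Gateway, S3, or EventBridge then invoke it on demand.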

AWS Lightsail

Amazon Lightsail is another compute service that resembles EC2. Amazon Lightsail is essentially a virtual private server (VPS) backed by AWS infrastructure, much like EC2 but with fewer configuration options. It is designed for small-scale businesses or single users. With its simplicity and small-scale focus, it's commonly used to host simple websites, small applications, and blogs. You can run multiple Lightsail instances together and allow them to communicate, and applications can be deployed quickly and cost-effectively in just a few clicks.
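
As an example, here is a hedged boto3 sketch of launching a Lightsail instance. The blueprint and bundle IDs are examples only; get_blueprints() and get_bundles() list the images and fixed-price plans actually available.

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

lightsail.create_instances(
    instanceNames=["my-blog"],      # hypothetical instance name
    availabilityZone="us-east-1a",
    blueprintId="wordpress",        # example blueprint: preinstalled app/OS image
    bundleId="nano_3_0",            # example bundle: fixed CPU/RAM/storage plan
)
```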

AWS Batch

AWS Batch is used to manage and run batch computing workloads within AWS. Batch computing is primarily used in specialist use cases that require a vast amount of compute power across a cluster of compute resources to complete batch processing, executing a series of jobs or tasks. The service is built around the following components.

Jobs
A Job is a unit of work that is run by AWS Batch.

Job Definitions
These define the specific parameters for the Jobs themselves.

Job Queues
Jobs are scheduled and placed into a Job Queue until they run.

Job Scheduling
The Job Scheduler takes care of when a Job should be run, and from which Compute Environment.

Compute Environments
These are the environments containing the compute resources to carry out the Job.
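
To tie these together, here is a brief boto3 sketch of submitting work to AWS Batch, assuming a job queue and job definition (backed by a compute environment) already exist. The names are placeholders.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

job = batch.submit_job(
    jobName="nightly-report",       # the unit of work (Job)
    jobQueue="default-queue",       # placeholder Job Queue the scheduler pulls from
    jobDefinition="report-job:1",   # placeholder Job Definition (name:revision)
    containerOverrides={"command": ["python", "report.py", "--date", "2024-01-01"]},
)
print(job["jobId"])
```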

If you liked this article, then don't forget to follow me on:

twitter.com/mquanit
dev.to/mquanit
linkedin.com/in/mohammad-quanit
