
AWS Compute

AWS has four foundational service categories:

  1. Compute
  2. Storage
  3. Database
  4. Network

We'll cover compute services. 

What is compute in AWS?

AWS compute refers to the ability to run applications and services on Amazon Web Services (AWS). Think of it as the brains and processing power behind your applications: the CPU and memory they need to run.

Within AWS there are more than a few compute services you can use. AWS offers a Cloud Compute Index that walks through scenarios showing where each compute service fits.

EC2 - Elastic Compute Cloud

EC2 stands for Amazon Elastic Compute Cloud. It is a web service that provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using EC2 eliminates the need to invest in hardware up front, so you can develop and deploy applications faster.

EC2 provides a variety of instance types, each with different CPU, memory, and storage configurations. You can choose the instance type that best meets the needs of your application. EC2 also provides a variety of operating systems, so you can choose the one that is right for you.

Once you have chosen an instance type and operating system, you can launch an EC2 instance. When you launch an instance, it can be assigned a public IP address that you use to connect to it. You can also attach a security group that controls who can access your instance.
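The launch step above can be sketched with boto3, the AWS SDK for Python. The AMI ID, key pair name, and security group ID here are placeholder values for illustration; the helper only assembles the parameters, and the real API call (which needs AWS credentials) is shown in a comment.

```python
# Sketch: launching an EC2 instance with boto3. The AMI ID, key pair name,
# and security group ID are placeholders -- substitute values from your
# own account before making the real call.
def build_run_instances_params(ami_id, instance_type, key_name, security_group_id):
    """Assemble the keyword arguments for ec2.run_instances()."""
    return {
        "ImageId": ami_id,                       # which AMI to boot from
        "InstanceType": instance_type,           # e.g. "t3.micro"
        "KeyName": key_name,                     # key pair used for SSH access
        "SecurityGroupIds": [security_group_id], # virtual firewall for the instance
        "MinCount": 1,
        "MaxCount": 1,
    }

params = build_run_instances_params(
    "ami-0123456789abcdef0", "t3.micro", "my-key-pair", "sg-0123456789abcdef0"
)
# With credentials configured, the actual launch would be:
#   import boto3
#   response = boto3.client("ec2").run_instances(**params)
```

Keeping the parameters in a plain dict like this also makes launch configurations easy to review and reuse.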

EC2 is a powerful tool that can be used to run a variety of applications. It is a great option for businesses of all sizes, from small startups to large enterprises.

Here are some of the benefits of using EC2:
Scalability: EC2 is highly scalable, so you can easily add or remove resources as needed.
Reliability: EC2 is highly reliable, with a 99.99% uptime SLA.
Security: EC2 is highly secure, with a variety of features to protect your data.
Cost-effectiveness: EC2 is cost-effective, with a variety of pricing options to fit your budget.

If you're looking for a scalable, reliable, and secure way to run your applications, then EC2 is a great option.

The EC2 service can be broken down into the following components:

  • Amazon Machine Images
  • Instance Types
  • Instance Purchasing Options
  • Tenancy
  • User Data
  • Storage Options
  • Security

Amazon Machine Images

An AMI, or Amazon Machine Image, is a template that you can use to launch an EC2 instance. AMIs include the operating system, software, and configuration settings that you want to use on your instance. Essentially, an AMI is a pre-configured EC2 instance template. This saves you from having to install an operating system or other common applications on each new EC2 instance.

Let's say you need to create a Linux server on AWS. You can start by selecting an AWS AMI, which is a template that includes the operating system, software, and configuration settings that you want to use on your server. Once the server is up and running, you can install your own custom applications and make specific configuration changes.

If you need to create another server with the same configuration, you can either go through the same process of selecting an AMI and installing your applications, or you can create a custom AMI from the first server. To create a custom AMI, you simply need to take a snapshot of the first server and then create an AMI from the snapshot.

Once you have created a custom AMI, you can use it to launch new servers with the same configuration. This can save you a lot of time and effort, especially if you need to create a large number of servers.
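The "create a custom AMI from the first server" step can be sketched the same way. The instance ID, AMI name, and description below are made-up examples; the helper builds the parameters for `create_image`, and the credentialed call is shown in a comment.

```python
# Sketch: creating a custom AMI from a configured instance with boto3.
# The instance ID and AMI name are illustrative placeholders.
def build_create_image_params(instance_id, name, description):
    """Assemble the keyword arguments for ec2.create_image()."""
    return {
        "InstanceId": instance_id,  # the server you already configured
        "Name": name,               # name for the new AMI
        "Description": description,
        "NoReboot": False,          # reboot for a consistent filesystem snapshot
    }

params = build_create_image_params(
    "i-0123456789abcdef0",
    "web-server-golden-image",
    "Base web server with our applications installed",
)
# With credentials configured:
#   import boto3
#   image = boto3.client("ec2").create_image(**params)
#   # image["ImageId"] can then be used to launch identical servers
```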

Creating custom AMIs is also a good way to keep your servers consistent. If you make changes to one server, you can simply create a new AMI and then use that AMI to update all of your other servers. This can help to ensure that all of your servers are running the same software and configuration settings.

Overall, creating custom AMIs is a great way to save time and effort when managing your servers on AWS. It is also a good way to keep your servers consistent.

Instance Types

EC2 instance types are the different types of virtual machines (VMs) that you can launch on Amazon Elastic Compute Cloud (EC2). Instance types are defined by their CPU, memory, and storage capacity.

There are many different instance types available, each with its own strengths and weaknesses. When choosing an instance type, you need to consider the needs of your application.
Here are some factors to consider when choosing an instance type:

CPU: The CPU is the central processing unit of a computer. It is responsible for performing the calculations that are required to run your application. The number of CPU cores and the clock speed of the CPU are important factors to consider when choosing an instance type.

Memory: Memory is used to store data that is currently being used by your application. The amount of memory that you need will depend on the size of your application and the amount of data that it needs to store.

Storage: Storage holds your application's data at rest, outside of memory. The type and amount of storage you need will depend on the kind of data you store and how much of it there is.

Once you have considered the needs of your application, you can choose an instance type that meets your requirements.

Here are some examples of instance types:
General purpose instance types: These instance types are a good choice for a variety of workloads, including web applications, databases, and development environments.
Compute-optimized instance types: These instance types are a good choice for workloads that require a lot of CPU power, such as high-performance computing (HPC) and machine learning (ML).
Memory-optimized instance types: These instance types are a good choice for workloads that require a lot of memory, such as in-memory databases and big data analytics.
Storage-optimized instance types: These instance types are a good choice for workloads that require a lot of storage, such as media streaming and content delivery networks (CDNs).

EC2 instance types provide you with a wide range of options for running your applications. By choosing the right instance type, you can ensure that your application has the resources that it needs to perform at its best.
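The mapping from workload to instance family can be made concrete with a small, purely illustrative helper. The family prefixes (`m`, `c`, `r`, `i`) follow common AWS naming conventions for the categories above, but always check the current instance type listings before choosing.

```python
# Illustrative only: map a workload's dominant resource need to a matching
# EC2 instance family, following the categories described above.
FAMILY_BY_NEED = {
    "general": "m",  # balanced CPU/memory: web apps, dev environments
    "cpu": "c",      # compute-optimized: HPC, ML workloads
    "memory": "r",   # memory-optimized: in-memory databases, analytics
    "storage": "i",  # storage-optimized: high local I/O workloads
}

def suggest_instance_family(need: str) -> str:
    """Return the instance family prefix for a given resource need."""
    return FAMILY_BY_NEED[need]

print(suggest_instance_family("cpu"))  # -> c
```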

User Data

EC2 user data is a set of commands or scripts that are executed when an EC2 instance is launched. User data can be used to install software, configure the instance, or perform other tasks.

User data is passed to the instance in a Base64-encoded string. The maximum size of user data is 16KB.
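The encoding and size limit can be checked locally. The script below is a typical example (updating packages and installing a web server on Amazon Linux); note that boto3 Base64-encodes user data for you, while the low-level EC2 API expects it already encoded.

```python
import base64

# Sketch: preparing a user-data script for an EC2 launch. The EC2 API
# expects user data Base64-encoded (boto3 handles the encoding for you),
# and the payload must stay within the 16 KB limit.
USER_DATA = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

encoded = base64.b64encode(USER_DATA.encode("utf-8")).decode("ascii")
assert len(encoded) <= 16 * 1024, "user data exceeds the 16 KB limit"
# `encoded` (or the raw script, with boto3) is then passed as the
# UserData parameter when launching the instance.
```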

User data is a powerful way to automate the configuration of EC2 instances, saving the time and effort of setting each instance up by hand.

Here are some examples of how user data can be used:

  • To install software, you can use user data to run a script that installs the software.
  • To configure the instance, you can use user data to set environment variables or create users.
  • To perform other tasks, you can use user data to run any command or script.

Storage Options

EC2 provides a variety of storage options, each with its own strengths and weaknesses. When choosing a storage option, you need to consider the needs of your application.

Here are some factors to consider when choosing a storage option:
Cost: Storage is a recurring cost, so you need to choose an option that fits your budget.
Performance: The performance of your storage will affect the performance of your application.
Durability: Your data needs to be durable, so you need to choose a storage option that is reliable.
Scalability: Your storage needs to be scalable, so you need to choose an option that can grow as your application grows.

Once you have considered the needs of your application, you can choose a storage option that meets your requirements.

Here are some examples of EC2 storage options:
Amazon Elastic Block Store (EBS): EBS is a block storage service that provides persistent storage for EC2 instances. EBS volumes are attached to instances and preserve data when the instance is stopped; they can also be configured to survive termination.
Amazon Elastic File System (EFS): EFS is a file storage service that provides a shared file system that can be mounted by multiple EC2 instances. EFS is a good option for storing data that needs to be shared between instances, such as web application code or media files.
Amazon Simple Storage Service (S3): S3 is an object storage service that provides a simple, highly durable way to store and retrieve data over an API rather than as an attached disk. S3 is a good option for data such as media files, backups, and static assets.
Amazon ElastiCache: ElastiCache is an in-memory data store service that provides high-performance, scalable caching for demanding applications. ElastiCache is a good option for data that needs to be accessed quickly, such as database caches or session stores (as an in-memory cache, it is not meant for durable storage).

EC2 storage options provide you with a wide range of options for storing your data. By choosing the right storage option, you can ensure that your data is stored in a way that meets the needs of your application.

Security

When creating an EC2 instance, you will be asked to select a security group. A security group is a virtual firewall that controls inbound and outbound traffic to your instance. For each rule you specify the protocol, the port range, and the source (for inbound traffic) or destination (for outbound traffic). Your instances are then associated with this security group.
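A typical inbound rule, allowing SSH only from one admin network, can be sketched as the parameters for `authorize_security_group_ingress`. The group ID and CIDR are placeholder values; the credentialed call is shown in a comment.

```python
# Sketch: an inbound security-group rule allowing SSH (TCP port 22) only
# from a single admin CIDR. Group ID and CIDR are placeholders.
def build_ssh_ingress(group_id, admin_cidr):
    """Assemble kwargs for ec2.authorize_security_group_ingress()."""
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 22,  # rule covers port 22 only
            "ToPort": 22,
            "IpRanges": [{"CidrIp": admin_cidr, "Description": "admin SSH"}],
        }],
    }

rule = build_ssh_ingress("sg-0123456789abcdef0", "203.0.113.0/24")
# With credentials configured:
#   import boto3
#   boto3.client("ec2").authorize_security_group_ingress(**rule)
```

Scoping SSH to a known CIDR instead of 0.0.0.0/0 is one of the simplest ways to reduce an instance's attack surface.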

A key pair consists of a public key and a private key. The public key is stored on AWS, and the private key is stored on your computer. When you create an EC2 instance, you will be prompted to create a key pair or select an existing one. The private key is used to connect to your instance.

When you create a key pair, you will be given the opportunity to download the private key. It is important to keep this file safe. If the private key is lost or compromised, anyone could connect to your instance.

Once you have created an EC2 instance and connected to it, you can set up additional access controls. For example, you can create local user accounts or use Microsoft Active Directory.

It is important to keep your EC2 instances secure. You should regularly install the latest security updates and patches.

You should also monitor your instances for suspicious activity.

Here are some additional tips for securing your EC2 instances:

  • Use strong passwords and security keys.
  • Keep your software up to date.
  • Use security groups to control access to your instances.
  • Use network access control lists (ACLs) to control access to your subnets.
  • Enable encryption for your data.
  • Monitor your instances for suspicious activity.

By following these security best practices, you can help to keep your EC2 instances secure.

ECS - Elastic Container Service

AWS ECS (Amazon Elastic Container Service) is a fully managed container orchestration service that makes it easy to deploy, manage, and scale containerized applications. ECS provides a variety of features that make it a powerful tool for container orchestration, including:
Scalability: ECS can scale your applications up or down automatically based on demand.
High availability: ECS can automatically distribute your applications across multiple availability zones for high availability.
Security: ECS provides a variety of security features to help you protect your applications.
Cost-effectiveness: ECS is a cost-effective way to deploy and manage containerized applications.

ECS is a good choice for a variety of applications, including:

  • Web applications: ECS can be used to deploy and scale web applications.
  • Microservices: ECS can be used to deploy and scale microservices applications.
  • Batch processing: ECS can be used to run batch processing jobs.
  • Data processing: ECS can be used to process large amounts of data.
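A minimal ECS task definition for one of these workloads can be sketched as the parameters for `register_task_definition`. The family name, container image, and sizes are illustrative; this example targets Fargate (covered next), and the credentialed call is shown in a comment.

```python
# Sketch: a minimal Fargate task definition for a containerized web app,
# expressed as kwargs for ecs.register_task_definition(). Family name,
# image, and sizes are illustrative placeholders.
def build_task_definition(family, image, cpu="256", memory="512"):
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not EC2 hosts
        "networkMode": "awsvpc",                 # network mode required by Fargate
        "cpu": cpu,                              # CPU units ("256" = 0.25 vCPU)
        "memory": memory,                        # memory in MiB
        "containerDefinitions": [{
            "name": "web",
            "image": image,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }],
    }

taskdef = build_task_definition("demo-web", "nginx:latest")
# With credentials configured:
#   import boto3
#   boto3.client("ecs").register_task_definition(**taskdef)
```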

AWS Fargate

AWS Fargate is a serverless compute engine for containers that lets you focus on building applications without managing servers. With Fargate, you don't have to provision, configure, or scale groups of virtual machines on your own to run containers. You also don't need to choose server types, decide when to scale your node groups, or optimize cluster packing.

AWS Lambda

AWS Lambda is a serverless computing service that runs your code in response to events, such as HTTP requests, changes to data in Amazon S3 buckets, or messages from Amazon Kinesis streams. Lambda takes care of all the administration of the underlying compute resources, so you can focus on writing your code and not worry about managing servers.

Components of AWS Lambda:
Lambda Functions: At the heart of AWS Lambda are Lambda functions, which are the units of code that are executed in response to events or triggers. These functions can be written in various programming languages, including Python, Node.js, Java, C#, Ruby, and Go, providing developers with flexibility and choice. Lambda functions can be created and managed through the AWS Management Console, AWS CLI, or using SDKs.
Event Sources: Event sources define the triggers that invoke Lambda functions. AWS Lambda integrates seamlessly with numerous AWS services, including Amazon S3, DynamoDB, API Gateway, CloudWatch Events, and more. These event sources generate events that can be used to trigger the execution of Lambda functions. For example, a new file upload to an S3 bucket or a database update in DynamoDB can trigger a Lambda function.
Runtime Environment: The runtime environment is the execution environment for Lambda functions. When a Lambda function is triggered, AWS Lambda provisions a runtime environment for that function, including an appropriate execution environment and dependencies based on the selected programming language. The runtime environment is managed by AWS, allowing developers to focus solely on their code logic.
Triggers: Triggers are the events or actions that initiate the execution of Lambda functions. AWS Lambda integrates seamlessly with a variety of triggers, allowing developers to build event-driven applications.
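The components above come together in the handler function itself. Below is a minimal handler for an S3 "object created" trigger; the event shape follows the S3 notification format, and the bucket key is made-up sample data so the function can be exercised locally without AWS.

```python
import json

# Sketch: a minimal Lambda function for an S3 "object created" trigger.
# The sample event below uses the S3 notification structure with a
# made-up object key, so the handler can be invoked locally.
def handler(event, context):
    """Log each uploaded object key and return a summary."""
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # print() output is captured in the function's CloudWatch log stream
    print(json.dumps({"processed": keys}))
    return {"statusCode": 200, "processed": len(keys)}

# Local invocation with a sample S3 event (the context object is unused here):
sample_event = {"Records": [{"s3": {"object": {"key": "uploads/report.csv"}}}]}
result = handler(sample_event, None)
```

In a real deployment, S3 would deliver this event automatically whenever a matching object is uploaded to the configured bucket.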

Let's explore some common triggers:

Event Sources: AWS Lambda can be triggered by events generated from various AWS services such as Amazon S3, DynamoDB, AWS Step Functions, Amazon Kinesis, and more. For example, an uploaded file to an S3 bucket or a database update can serve as triggers for Lambda functions.
API Gateway: Lambda functions can be invoked through API Gateway, allowing developers to build RESTful APIs without managing servers. API Gateway acts as a bridge between incoming HTTP requests and Lambda functions, enabling the creation of powerful and scalable APIs.
CloudWatch Events: CloudWatch Events can trigger Lambda functions based on events occurring within the AWS ecosystem. For instance, you can schedule a Lambda function to execute at specific intervals using CloudWatch Events' cron expressions.
Custom Triggers: AWS Lambda also supports custom triggers through the use of AWS SDKs and other services. This flexibility enables developers to create their own event sources and trigger Lambda functions programmatically.

Downstream Resources: Downstream resources refer to the services or resources that Lambda functions interact with during execution. AWS Lambda integrates seamlessly with a wide range of AWS services, allowing developers to leverage their capabilities within Lambda functions.

Here are a few examples of downstream resources:
Amazon S3: Lambda functions can read from or write to Amazon S3 buckets, enabling seamless integration with object storage. This is particularly useful for processing and manipulating files as part of a serverless workflow.
DynamoDB: AWS Lambda can interact with DynamoDB, Amazon's managed NoSQL database service. Lambda functions can retrieve, update, or insert data into DynamoDB tables, facilitating real-time data processing and storage.
Amazon RDS: Lambda functions can connect to Amazon RDS instances, allowing for database operations on relational database systems such as MySQL, PostgreSQL, and Oracle.
AWS Step Functions: AWS Lambda integrates with AWS Step Functions, a serverless workflow service. Step Functions allow developers to create and coordinate complex workflows by orchestrating multiple Lambda functions and other AWS services.
Third-Party Services: Lambda functions can communicate with external APIs and services using HTTP requests or SDKs. This allows integration with third-party services, enabling a wide range of possibilities for data processing, notifications, and more.

Log Streams: Log streams in AWS Lambda represent a sequence of log events generated by Lambda functions. Each Lambda function has its own log stream, which captures the output and diagnostic information during the execution of the function. These log streams play a crucial role in monitoring the behavior of Lambda functions, identifying issues, and optimizing application performance.

In conclusion, AWS offers a wide range of compute services that can meet the needs of virtually any workload, from simple web applications to complex high-performance computing workloads. The various compute services offered by AWS are designed to provide scalability, flexibility, and cost-effectiveness, making it easy for businesses and developers to deploy and manage their applications and workloads. Additionally, AWS compute services can be easily integrated with other AWS services, allowing users to take advantage of the full power of the AWS ecosystem.

Whether you need to run a simple website or a complex scientific simulation, AWS has the compute services you need to get the job done. With features like auto-scaling, load balancing, and serverless computing, AWS makes it easy to deploy and manage your applications and workloads, without having to worry about the underlying infrastructure. As such, AWS compute services remain a top choice for businesses and developers looking to optimize their computing resources, reduce costs, and improve scalability and flexibility.

I used this source for this article:

Fundamentals of AWS - Cloud Academy

This learning path will introduce you to some core cloud principles of AWS Fundamentals: Distributed storage, concurrent computing, redundancy, and security.
