DEV Community

Gabriela Menocal for AWS Community Builders


Appropriate instance type depending on your workflow in AWS

Building on cloud providers is increasingly popular (almost the norm). Different workflows and applications can run and be accessible from anywhere, without the need for physical infrastructure. This is true whether you are testing, doing migrations, or creating new workflows.

This post shares some tips, ideas, and suggestions on how to choose an appropriate virtual machine in AWS, that is, an EC2 instance type.

A few questions should be answered before landing on an instance type (these questions are a baseline; tailor them, or add more, based on the workflow you are working on):

  1. What is the main purpose of this workflow?
  2. Do I need to run this instance temporarily or permanently?
  3. What applications will be running in the instance?
  4. Is the traffic steady? Will it increase over time?
  5. How resilient do I need the solution to be?
  6. Will it be a centralized or distributed workflow?
  7. Do I have a budget limit?
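As a sketch, the answers to these questions can be captured in a small structured record so they can later be checked against instance specs. The field names and sample values below are purely illustrative, not an AWS API:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch: one field per question from the checklist above.
@dataclass
class WorkloadProfile:
    purpose: str                  # Q1: main purpose of the workflow
    permanent: bool               # Q2: temporary vs. permanent
    applications: List[str]       # Q3: applications running on the instance
    steady_traffic: bool          # Q4: steady, or expected to grow?
    resilience: str               # Q5: e.g. "single-AZ", "multi-AZ"
    distributed: bool             # Q6: centralized vs. distributed
    monthly_budget_usd: Optional[float]  # Q7: budget limit, if any

profile = WorkloadProfile(
    purpose="web backend",
    permanent=True,
    applications=["nginx", "api-server"],
    steady_traffic=False,
    resilience="multi-AZ",
    distributed=True,
    monthly_budget_usd=500.0,
)
```

Writing the answers down like this makes it easy to revisit the decision later when the workflow changes.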

In my experience, these questions are usually enough to choose an appropriate instance type (sometimes you only need a subset of them); on occasion, more questions should be added as needed.

AWS offers an extensive catalog of instance types, classified by use case, for example Compute Optimized or Memory Optimized (see AWS instance types). This classification makes it easy to decide which one to use.

Some other sources of information can be found here:

  1. AWS Instances Explorer

  2. AWS Instances Documentation

[Image: instance type categories]

These links are a great resource for finding and sorting instance types by category, technical specification, and additional capabilities, among other criteria.

When installing any software on a machine, you need to check whether the machine satisfies the minimum system requirements.

Here are examples of hardware specifications for two different applications:

Table I: Application 1
[Image: minimum hardware requirements for Application 1]

Table II: Application 2
[Image: minimum hardware requirements for Application 2]

From the above images, you can see the minimum hardware requirements for the given applications, so you can “map” those requirements to instance types:

AWS presents a matrix with different configuration parameters, per family and per size. For example, the following image shows characteristics such as vCPU and memory (GiB), among others. The vCPUs are virtual CPUs (please refer to AWS vCPU). Long story short, you can assume the following: 1 vCPU = 1 core.

[Image: instance type parameters (vCPU, memory, and bandwidth per size)]

So, from Table II (Application 2), the minimum instance type needed is m5.2xlarge or larger, because that instance provides the 8 vCPUs = 8 cores required by the application.
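This “mapping” can be sketched as a small lookup. The specs below are a hand-copied sample from the m5 family (verify them against the current AWS tables), and the 16 GiB memory figure for Application 2 is an illustrative assumption:

```python
# Sample vCPU/memory specs for a few m5 sizes (hand-copied; verify against
# the current AWS instance type tables before relying on them).
M5_SPECS = {
    "m5.large":   {"vcpus": 2,  "memory_gib": 8},
    "m5.xlarge":  {"vcpus": 4,  "memory_gib": 16},
    "m5.2xlarge": {"vcpus": 8,  "memory_gib": 32},
    "m5.4xlarge": {"vcpus": 16, "memory_gib": 64},
}

def smallest_fit(required_vcpus, required_mem_gib, specs=M5_SPECS):
    """Return the smallest instance type meeting both requirements, or None."""
    candidates = [
        (spec["vcpus"], name)
        for name, spec in specs.items()
        if spec["vcpus"] >= required_vcpus and spec["memory_gib"] >= required_mem_gib
    ]
    return min(candidates)[1] if candidates else None

# Application 2 needs 8 cores (8 vCPUs under the 1 vCPU = 1 core assumption):
print(smallest_fit(8, 16))  # m5.2xlarge
```

The same idea extends to more families and more columns (network bandwidth, EBS bandwidth, and so on) as you add requirements.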

I have learned that when a specification is listed with the words “Up to...”, in all likelihood you are looking at a burstable instance, which means the specified bandwidth is a peak available only sporadically, not guaranteed at all times. Hence, if you need constant, guaranteed performance, these instances would not be your best choice.
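For illustration, a burstable instance's sustainable baseline can be derived from its CPU credit earn rate: one CPU credit is one vCPU running at 100% for one minute. Using the published t3.micro figures (2 vCPUs, 12 credits earned per hour; double-check these against the current AWS documentation):

```python
def baseline_utilization_pct(credits_per_hour, vcpus):
    # One CPU credit = one vCPU at 100% for one minute, so running all
    # vCPUs flat out for an hour would cost 60 * vcpus credits.
    return 100.0 * credits_per_hour / (60 * vcpus)

# t3.micro: 2 vCPUs, earns 12 CPU credits per hour
print(baseline_utilization_pct(12, 2))  # 10.0 (% per vCPU)
```

Anything above that baseline spends accumulated credits; once they run out, performance drops back to the baseline, which is exactly why burstable types are risky for workloads needing sustained throughput.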

Regarding physical processor speed, such as the >= 2.4 GHz requirement from Table II (Application 2): this requires some knowledge of the relationship between Mbps and MHz, which is out of scope for this post because comparing the two is a slightly more complex subject. However, you can take the following formula as a reference, which states the correlation between them:

number of cycles per second * bits transferred per cycle = number of bits per second
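The formula above can be sketched in a couple of lines; the 64 bits transferred per cycle below is purely an illustrative assumption, not a spec of any particular CPU:

```python
def bits_per_second(cycles_per_second, bits_per_cycle):
    # number of cycles per second * bits transferred per cycle
    return cycles_per_second * bits_per_cycle

# A 2.4 GHz processor moving a hypothetical 64 bits per cycle:
rate = bits_per_second(2.4e9, 64)
print(rate)  # 153600000000.0 bits per second
```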

This gives you the amount of data that can be processed based on the speed of the processor (measured in MHz). When moving to the cloud, we need to take two more things into consideration: IOPS and throughput.

  • IOPS is the number of read/write operations per second that can be processed.
  • Throughput is the number of bits that can be read or written per second.
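The two are related through the I/O block size: throughput is roughly IOPS multiplied by the size of each operation. A minimal sketch, with illustrative numbers:

```python
def throughput_mib_per_s(iops, block_size_bytes):
    # Throughput ~= operations per second * bytes per operation,
    # converted to MiB/s (1 MiB = 1024**2 bytes).
    return iops * block_size_bytes / (1024 ** 2)

# e.g. 3000 IOPS at a 16 KiB block size:
print(throughput_mib_per_s(3000, 16 * 1024))  # 46.875 MiB/s
```

In practice, real EBS volumes also cap throughput independently of IOPS, so treat this only as a rough sanity check against the published limits.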

This means you need to look at the EBS Bandwidth and Network Bandwidth columns, to make sure that you not only meet the required number of cores but also the read and write speeds your application requires.

Note also that AWS measures EBS performance in IOPS, and, as per the AWS documentation, “Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances.” For more detailed information, please refer to the Amazon EBS documentation.

You can see that AWS describes and classifies its volumes into two categories, SSD and HDD, and lists their capabilities. For the complete tables and up-to-date features, please check AWS EBS Features.

[Image: EBS volume features (IOPS and throughput highlighted)]

[Image: EBS volume features, continued]

I highlighted the IOPS and Throughput columns as a reference. Also, notice that depending on the instance type and family, the features and characteristics might differ slightly; however, you should still be able to identify and locate what you are looking for while choosing an instance type.

[Image: instance type parameters (EBS and network bandwidth)]

[Image: instance type parameters, continued]

The last point I would like to mention is Memory, given in GiB, which corresponds to the physical memory (RAM). Note that 1 GiB = 2^30 bytes, slightly more than 1 GB (10^9 bytes), so from Application 1's table, the 4 GB RAM requirement is covered by the 4 GiB of memory described in the AWS documentation.
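To be precise about the units (a detail worth checking whenever requirements are stated in GB but AWS lists GiB):

```python
def gb_to_gib(gb):
    # 1 GB = 10**9 bytes; 1 GiB = 2**30 bytes
    return gb * 10**9 / 2**30

# A 4 GB RAM requirement is ~3.73 GiB, so a 4 GiB instance covers it:
print(round(gb_to_gib(4), 2))  # 3.73
```

The difference is small at these sizes but grows with larger memory figures, so it is worth doing the conversion rather than assuming GB and GiB are interchangeable.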

Key Points:

  • Understand the system requirements of your application
  • Avoid burstable instances when a given throughput must be guaranteed at all times.
  • Understand the right IOPS and Throughput required for your deployment.
  • The good news: if you later decide to change your instance type, AWS offers this flexibility as well; please refer to Change instance types.
