EC2 is essentially a virtual server in the cloud. It can be available within minutes after setting it up. Compare that to how long it would take to provision and prepare a physical server in your own datacenter. Even ordering it and awaiting initial configuration and shipment can take weeks. General knowledge about EC2 is one of the key categories in the AWS Certified Cloud Practitioner exam.
EC2 is region-specific, so we should launch instances in a region that makes sense for latency and regulatory reasons. When we set up an EC2 instance, we get to choose from a large selection of pre-canned images across several different Linux offshoots and Microsoft Windows OSs. There are several instance types for these operating systems that we can select from. Here’s a great chart to help find the most appropriate instances to use.
Note: There’s a common mnemonic we can use to help us remember the different instance types, but that likely won’t appear on the AWS Certified Cloud Practitioner exam: FIGHT DR MCPXZ (Fight Dr. McPixie), although with a recent change, the newer mnemonic could be FIGHT DR MACPXZ, due to the A1 class that was added in 2018. Keep in mind that some exams may refer to the older mnemonic.
In general, these are the existing type categories:
- F – For FPGA (Field Programmable Gate Arrays) (F1 instances)
- I – For IOPS (Storage Optimized; high-I/O instances backed by fast NVMe SSD instance storage)
- G – For Graphics (Accelerated Computing)
- H – High Disk Throughput (H1 instances)
- T – Cheap general purpose, like T2 Micro
- D – Density (D2 instances)
- R – RAM (High Memory instances)
- M – Main choice for general-purpose apps (M class instances)
- A – ARM-based workloads (A1 instances)
- C – Compute (C class instances)
- P – Pics (general-purpose GPU compute) (P2 and P3 instances) (for graphics acceleration, an alternative is now Amazon EC2 Elastic GPUs)
- X – Extreme Memory (X1 and X1e instances)
- Z – Extreme Memory and CPU (z1d instances)
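For self-quizzing on the mnemonic, the list above can be encoded as a tiny lookup script. This is purely a study aid, not an AWS tool; the function name is made up:

```shell
#!/bin/sh
# Toy lookup for the FIGHT DR MACPXZ mnemonic — just the list above
# encoded as a case statement. `family_for` is a hypothetical helper name.
family_for() {
  case "$1" in
    F) echo "FPGA" ;;
    I) echo "IOPS (storage optimized)" ;;
    G) echo "Graphics" ;;
    H) echo "High disk throughput" ;;
    T) echo "Cheap general purpose" ;;
    D) echo "Density" ;;
    R) echo "RAM" ;;
    M) echo "Main general purpose" ;;
    A) echo "ARM" ;;
    C) echo "Compute" ;;
    P) echo "Pics (GPU)" ;;
    X) echo "Extreme memory" ;;
    Z) echo "Extreme memory and CPU" ;;
    *) echo "unknown" ;;
  esac
}

family_for R   # prints: RAM
```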
Aside from these AWS-supplied instance images, we can also create instances from a saved image (AMI) that we created from previously configured instances, or from AMIs purchased from the AWS Marketplace. This is often used for Auto Scaling launch configurations and Elastic Load Balancing target groups. This will be covered in another article.
We should always design for failure. So, at a minimum, we should run EC2 instances in more than one availability zone in the region, so that a single-AZ outage doesn’t take our application down.
When we create an instance, we also need to create a Security Group to poke holes in the firewall for ports from specific IP address(es) or from anywhere: 0.0.0.0/0. Think of this as a virtual firewall at the instance level. By default, the SSH port (22) is opened up. But other common ports we may want to open up are:
- HTTP (80)
- HTTPS (443)
- RDP (3389)
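Opening these ports on a security group can also be done from the AWS CLI with `aws ec2 authorize-security-group-ingress`. As a safe sketch, the script below only builds and prints the commands instead of executing them, and the security group ID is a placeholder:

```shell
#!/bin/sh
# Sketch: construct the AWS CLI calls that would open HTTP and HTTPS
# to the world (0.0.0.0/0) on a security group. sg-0123456789abcdef0
# is a placeholder group ID; we echo the commands rather than run them.
GROUP_ID="sg-0123456789abcdef0"
CMDS=""
for PORT in 80 443; do
  CMD="aws ec2 authorize-security-group-ingress --group-id $GROUP_ID --protocol tcp --port $PORT --cidr 0.0.0.0/0"
  echo "$CMD"
  CMDS="$CMDS $CMD"
done
```

To actually apply the rules, we’d run the printed commands with valid AWS credentials configured.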
If we want an additional layer of access control on top of security groups, such as explicitly denying traffic from specific IP addresses, we’d apply network ACLs (NACLs) to the subnets in our EC2 instances’ VPC. Unlike security groups, NACLs operate at the subnet level and support explicit deny rules.
When setting up an EC2 instance, we also have to configure the storage we want attached to our instance. We do that by specifying the Elastic Block Store (EBS) volume type(s) to attach. These are virtual disks in the cloud, created in the same availability zone (AZ) as the EC2 instance. Each volume is automatically replicated within its AZ:
SSD (Solid-State Drive)
- GP2 is a general purpose SSD, often used as the main root volume.
- IO1 is a provisioned IOPS SSD high-performance drive, which is best for high-performance databases.
HDD (Magnetic Drive)
- HDD drives cannot be boot volumes.
- ST1 is a throughput-optimized HDD, which is a low-cost volume for frequently accessed, throughput-intensive workloads, such as big data and log processing.
- SC1 is a “cold” HDD, which is the lowest cost option for less frequently accessed workloads, such as file servers.
- Magnetic is a previous generation EBS type, and is being phased out.
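The decision logic above can be sketched as a small helper script. This is an illustration only, not an AWS API; the function name and workload keywords are made up for the example:

```shell
#!/bin/sh
# Illustrative helper (not an AWS tool): map a workload keyword to the
# EBS volume type discussed above. The keywords are invented for this sketch.
pick_ebs_type() {
  case "$1" in
    root|boot)           echo gp2 ;;  # general purpose SSD, common root volume
    high-iops-db)        echo io1 ;;  # provisioned IOPS SSD
    big-data|log-stream) echo st1 ;;  # throughput-optimized HDD (not bootable)
    archive|cold-files)  echo sc1 ;;  # cold HDD, lowest cost (not bootable)
    *)                   echo gp2 ;;  # sensible default
  esac
}

pick_ebs_type root          # prints: gp2
pick_ebs_type high-iops-db  # prints: io1
```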
Once we get our EC2 instance(s) configured and started, we’ll often need direct access to the machines. The most common method is via SSH (port 22). Upon launch of an EC2 instance, we’re prompted to select or create a “key pair” (public/private key) that we’ll need to SSH into Linux instances and to obtain a password to RDP into Windows instances. This creates a private key (.PEM file) that can be used directly from a Linux-based OS (including MacOS) to SSH into the instance. To use this key file from Windows, we’d need to use a utility like PuTTY to convert the key file into a .PPK file and SSH into the instance.
From a Linux-based OS, after saving the .PEM file, we need to make it readable only by its owner by running
chmod 400 keyname.pem.
From our local machine, we can then connect via SSH by running
ssh -i keyname.pem ec2-user@x.x.x.x
where x.x.x.x is the IP address we can grab from the AWS console’s IPv4 Public IP field on the EC2 instance Description panel, and ec2-user is the default user name on Amazon Linux AMIs (other AMIs use different default user names).
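A minimal sketch of that key-file workflow, using a dummy key file so nothing here touches a real instance (x.x.x.x stands in for the instance’s public IP, and the connect command is printed rather than executed):

```shell
#!/bin/sh
# Sketch of the SSH key-file workflow with a dummy key file.
KEY=./keyname.pem
echo "dummy key material" > "$KEY"   # stand-in for the downloaded .pem

chmod 400 "$KEY"                     # owner read-only; ssh refuses looser perms

# Print (rather than run) the connect command. ec2-user is the default
# login on Amazon Linux AMIs; other AMIs use different user names.
echo "ssh -i $KEY ec2-user@x.x.x.x"
```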
To run AWS CLI commands, we use a different credential entirely: a pair of access keys, not the EC2 key pair. Although this is a topic for another article, please note that these access keys are stored locally (in plain text) in the
~/.aws folder. If we want to run CLI commands from the EC2 instance itself, it is far more secure to apply IAM Roles to the instance instead. If anyone ever got access to the EC2 instance, they could keep acting as those credentials indefinitely if the access keys were sitting in the file system.
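To see why that’s risky, here’s what the plain-text credentials file looks like. The script writes a fake example to a temp directory rather than touching ~/.aws; the key values are AWS’s documentation placeholders, not real credentials:

```shell
#!/bin/sh
# Illustration: the ~/.aws/credentials file is plain text. We write a
# fake copy to a temp dir; the key values are AWS's well-known doc examples.
DEMO_DIR=$(mktemp -d)
cat > "$DEMO_DIR/credentials" <<'EOF'
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

# Anyone who can read this file can act as this identity:
cat "$DEMO_DIR/credentials"
```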
I cover EC2 pricing in much more detail in another article. In general, there are four main EC2 pricing models:
- On-Demand (low-cost and flexible)
- Reserved (steady-state, predictable usage)
- Dedicated (for regulatory requirements)
- Spot (flexible start and end times)
This may be the last article I can write before my first AWS certificate exam, so wish me luck 🙂