Amanda Fawcett for Educative

Posted on • Originally published at educative.io

Using Amazon AWS: choosing the best cloud services

AWS (Amazon Web Services) is one of the most popular cloud computing platforms. It offers over 175 services built around core areas like compute, storage, and networking. Companies of all types and sizes use AWS to cut costs, speed up innovation, and jumpstart development. But with a platform that offers so many features and functionalities, many teams are unsure where to start with AWS. Which tools are worth using? Which will just slow you down?

Luckily for you, I’ve turned to former Amazon engineers who have more than 15 years of experience with AWS. Today we will introduce you to only the good parts of AWS. Let’s cut through the clutter!

Today, we’ll discuss how to make reliable technical choices for AWS. This blog post walks you through:

  • What is cloud computing?
  • What is AWS?
  • AWS vs. competitors
  • How to make technology decisions
  • The essential parts of AWS
  • The “bad” parts of AWS
  • Starting your own AWS project from scratch

What is cloud computing?

AWS is one of the most popular cloud computing platforms on the market today. So, what is cloud computing exactly?

Cloud computing is a type of computing that relies on the internet (the cloud) to deliver computing services: storage, databases, servers, networking, and more.

This allows you to run your workloads remotely using the provider’s data centers. The key benefit of cloud computing is agility: a team can manage its own network and storage resources using prebuilt, speedy services. Typically, a customer pays for cloud services as needed.

Benefits of Cloud Computing

There are many benefits to using cloud computing. In fact, it is becoming the norm as we move to a cloud-based world. Let’s take a look at some of the benefits of cloud computing.

Cloud computing alleviates the drawbacks of traditional databases. In the past, each company had its own database, server, and database administrators, which required a lot of infrastructure and left data more vulnerable to breaches. With cloud computing, companies of all sizes can store data on managed infrastructure, allowing multiple copies to be kept safely.

Easier to update software. With a cloud system, it is much easier to manage and update software quickly and in real time.

Cloud computing democratizes access to technology. In the past, isolated databases made it hard for small companies to get started. Now, companies don’t need specialized computers or on-premise data centers. Cloud computing makes it possible to access the latest software without the hassle.

Businesses can focus on their needs. Cloud computing allows companies to personalize their database and server needs. Businesses can hand-select the applications and services they need and save money on the ones they don’t.

Fast speed of development. Development is notoriously slow, but with cloud computing, companies can get started quickly on updates, revamps, and building.



What is AWS?

AWS is the cloud computing service provided by Amazon. It offers inexpensive, reliable, and scalable web services for companies of all sizes. AWS includes four core services, which are a combination of IaaS and PaaS.

Compute: where you create and deploy your virtual machines. A virtual machine is a computer that is hosted in the cloud. You can set up your VM with its own operating system, storage, RAM, and software.

Storage: AWS offers several kinds of storage services depending on your needs: S3, FSx, Elastic File System (EFS), and more.

Database: AWS offers several database services, including RDS, Amazon DynamoDB, Neptune, ElastiCache, and Aurora.

Network: AWS provides many services for handling networks: CloudFront, VPC, Direct Connect, Load Balancing, and Route 53.

AWS also offers services for Identity, Compliance, Mobile, Routing, IT infrastructure services, Internet of Things (IoT) services, Machine Learning, and Security. There are more than 175 services and developer tools offered by AWS. A basic introduction to AWS is useful for understanding these tools.

What is AWS used for?

AWS can be used for just about anything, from enterprise to start-ups to the public sector. Some common uses are application hosting, web development, backup and storage, enterprise IT, and content delivery. Companies and organizations including Expedia, Shell, the FDA, Airbnb, Lyft, and more use AWS.

For our global market, AWS is commonly used to speed up time-to-market and create a standardized environment. Many companies nowadays are spread across multiple countries, and AWS enables digital marketing, scaling, and swift deployment rollouts that span the world.

AWS vs. Competitors

AWS isn’t the only cloud computing service out there. Microsoft Azure and Google Cloud (GCP) are the other two leading vendors. IBM also has a less popular cloud computing service. Let’s take a look at the top three to compare.

Availability Zones and hosting. AWS has 66 Availability Zones with 10+ more on the way. GCP is available in 20 regions, and Azure offers 54 regions worldwide across 140 countries. The winner here is AWS.

Services. AWS offers 170+ services. Azure offers 100+ services, and Google Cloud has around 60. While each vendor covers the same basic ground (file storage, virtual machines, DNS, etc.), AWS generally has the most diverse selection. The winner here is AWS, though it’s important to note that Azure has better integration with Microsoft Office tools.

Pricing model. Pricing for any cloud service depends on size and scope. For an instance with 2 virtual CPUs and 8 GB of RAM, AWS charges $69/month, Azure charges $70/month, and GCP charges $52/month. For the largest instances, AWS charges $3.97/hour for 3.84 TB of RAM and 128 vCPUs, Azure charges $6.79/hour for 3.89 TB and 128 vCPUs, and GCP charges $5.32/hour for 3.75 TB and 160 vCPUs. The winner here is AWS for its low cost on large instances, though it’s important to note that Google Cloud is cheaper for smaller instances and offers pay-per-second billing.

Experience and infrastructure. AWS is the oldest cloud computing service on the market, which gives it a bigger user base and community; it leads with about 30% of the market share. Azure and Google Cloud are newer and still catching up, though both show strong growth: Azure holds about 16% of the market and GCP about 10%. Here, AWS is the winner with over 15 years of expertise.

Advantages. The biggest advantage of the AWS cloud is its well-established cloud infrastructure and market dominance. The biggest advantage of Azure is its speed, excelling in deployment speed. The biggest advantage of GCP is security protection. The winner here largely depends on your needs and investments.

Cons. The downside to AWS is its pricing system; even though costs are lowered regularly, the system can be tricky to navigate. The downside to Azure is a lack of technical support and documentation, making it hard to get help. Google Cloud’s biggest downside is its scope; it doesn’t have as many global data centers or services as the other vendors. The winner here also largely depends on your needs and preferences.

Conclusion. AWS excels in global reach, reliable services, and flexibility. It is best for larger companies or teams that aren’t familiar with cloud-based technologies. Azure is the best for first-time cloud migration, a Windows-based organization, and startups. Google is the most eco-conscious and cost-efficient option, best for creators already familiar with cloud-based technologies.

All in all, AWS shines as the more reliable option for cloud computing services, its size bringing greater dependability, support, and overall scope of services.


How to make technology decisions

Making technology decisions can be overwhelming. From languages to databases to frameworks, there are dozens of big choices from seemingly unlimited options. So, what strategy should we use to make those assessments?

When starting a project, it’s easy to fall into the trap of the optimization fallacy: the belief that finding the “best” option will lead to the best results. This pursuit of optimization can actually undermine your project.

  • Firstly, the products and tools considered to be the “best” are often the most expensive.
  • Secondly, the search for the “best” solution is delusional, either because it doesn’t really exist or because you don’t have enough knowledge to make a proper assessment.

A better strategy is the default heuristic, as defined by Daniel Vassallo: stick with your defaults. All you need to do is find the solution that is good enough to get the job done. The default option is one that has proven to be reliable and generates the most confidence. In other words:

  • a tool that you understand well
  • a tool that is unlikely to fail you
  • a tool that has stood the test of time

Let go of the expectation that you need the “best”. Instead, seek out dependable tools. Deviating from your defaults should be reserved for unique cases. This is especially true for AWS: misusing these services gets dangerous and expensive, and using a tool you don’t understand means paying more and falling short of your plans.

Let's apply this logic to AWS and get to know the essential services based on the defaults and experiences of AWS developers.

Keep the learning going.

Learn the best parts of AWS without scrubbing through videos or scattered articles. Educative's text-based courses are easy to skim and feature live coding environments, making learning quick and efficient.

The Good Parts of AWS: Cutting Through the Clutter


The Essential Parts of AWS

AWS clearly has a lot to offer, and seasoned AWS developers know which services are good and which are worth ignoring. Many teams starting out are unsure where to invest time and money, so let’s turn to Daniel Vassallo, a former Amazon engineer with more than 15 years of experience with AWS, to help cut through the clutter. Let’s take a deeper dive into the good parts of AWS!

Database: Amazon DynamoDB


AWS offers many options for databases, but the best option is Amazon DynamoDB. Think of DynamoDB as a highly-durable data structure in the cloud that can replace a relational database. It shares some similarities with Redis, but DynamoDB is far more consistent and centered around a single data structure.

Since it doesn’t use a strict schema, it can manage structured/semi-structured data, even JSON documents. DynamoDB enables you to access data instantly and is excellent for web-scale applications, such as media sharing, social networks, and gaming.

The main characteristics of DynamoDB are:

  • Data stored on SSDs
  • Automatically replicated across three data centers
  • Eventually consistent reads by default
  • Optional strongly consistent reads

DynamoDB uses a per-request pricing structure, but it’s not the cheapest option. The biggest drawback to consider is query processing: unlike a relational database, which runs queries close to the data, DynamoDB makes you do most query processing yourself within the application.
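To make that concrete, here is a minimal sketch of writing and reading an item with the AWS SDK for JavaScript (v2). Everything here is illustrative: the users table, its id key, and the region are hypothetical, and the table must already exist.

const AWS = require('aws-sdk');

// DocumentClient works with plain JavaScript objects, so
// semi-structured JSON documents map directly to items.
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

async function demo() {
  // Write a JSON-like item (hypothetical "users" table with an "id" key).
  await db.put({
    TableName: 'users',
    Item: { id: '42', name: 'Ada', tags: ['admin', 'beta'] },
  }).promise();

  // Read it back. Reads are eventually consistent unless you opt in
  // to a strongly consistent read per request.
  const { Item } = await db.get({
    TableName: 'users',
    Key: { id: '42' },
    ConsistentRead: true,
  }).promise();

  console.log(Item);
}

demo().catch(console.error);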

Another option is Aurora, but it is still young and comes with poor documentation and community support. If you’re looking to get started with a reliable database, go for one that has already passed the test time.

Pro tip: When using DynamoDB, turn on point-in-time backups.
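If you prefer to script that pro tip rather than click through the console, the SDK exposes it too. A minimal sketch, again assuming a hypothetical users table:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

// Enable point-in-time recovery (continuous backups) on an existing table.
dynamodb.updateContinuousBackups({
  TableName: 'users', // hypothetical table name
  PointInTimeRecoverySpecification: { PointInTimeRecoveryEnabled: true },
}).promise()
  .then(() => console.log('Point-in-time recovery enabled'))
  .catch(console.error);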

Storage: Amazon S3


Use S3 for data storage. Think of Amazon S3 as a highly-durable hash table in the cloud. You can expect downloads of about 90 MB/s per object, roughly 50 ms first-byte latency, and a cost of just 2.3 cents per GB per month. S3 offers effectively infinite bandwidth, and you can store as many objects as you want without performance issues.

The key benefits of S3 are:

  • Inexpensive. S3 costs $25.55/TB/month with a very reliable default storage class.
  • Easy to set up and use. Known for being a simple storage service.
  • Infinite bandwidth.
  • Infinite storage space.

The biggest drawback to consider with S3 is that buffering data in your application before uploading it can reduce durability: data held in memory is lost if the process dies. The solution is to first write incoming data to a durable queue, such as Amazon Kinesis.
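For a feel of how simple the API is, here is a hedged upload-and-download sketch with the AWS SDK for JavaScript (v2); my-example-bucket is a placeholder for a bucket you own.

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

async function demo() {
  // Upload an object; the default STANDARD storage class is the durable one.
  await s3.putObject({
    Bucket: 'my-example-bucket', // placeholder: a bucket you own
    Key: 'hello.txt',
    Body: 'Hello World\n',
  }).promise();

  // Download it back as a Buffer.
  const obj = await s3.getObject({
    Bucket: 'my-example-bucket',
    Key: 'hello.txt',
  }).promise();

  console.log(obj.Body.toString());
}

demo().catch(console.error);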

Pro tip: Don’t use the reduced redundancy option.

Route 53


Route 53 is a DNS service, which allows you to translate domain names into IP addresses. Route 53 is simple and reliable with only a few minor inconveniences (such as a lack of DNSSEC support). The key benefits of Route 53 are:

  • Integrates well with load balancers: Route 53 connects user requests to your infrastructure, including ELB load balancers, S3 buckets, and more.
  • Health checks: Route 53 can be configured to run health checks that monitor your application and endpoints.
  • Simple visual editor: Traffic Flow has a simple visual editor so anyone can manage how users are routed.
  • Flexible: Route 53 can be configured with multiple traffic policies and routes traffic based on multiple criteria.
  • Highly available: easy to get, use, pay for, and scale
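As a small illustration of managing routing in code, the sketch below upserts an A record with the AWS SDK for JavaScript (v2). The hosted zone ID, domain, and IP address are all placeholders.

const AWS = require('aws-sdk');
const route53 = new AWS.Route53();

// Create or update (UPSERT) an A record in a hosted zone.
route53.changeResourceRecordSets({
  HostedZoneId: 'Z1234567890ABC', // placeholder: your hosted zone ID
  ChangeBatch: {
    Changes: [{
      Action: 'UPSERT',
      ResourceRecordSet: {
        Name: 'app.example.com',       // placeholder domain
        Type: 'A',
        TTL: 300,
        ResourceRecords: [{ Value: '203.0.113.10' }], // placeholder IP
      },
    }],
  },
}).promise()
  .then(() => console.log('Record upserted'))
  .catch(console.error);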

Media Service: Kinesis


Think of Kinesis as a highly-durable linked list in the cloud. Kinesis has many advantages over other AWS media services, including:

  • Multiple consumers. A Kinesis stream can have multiple consumers that don’t affect each other.
  • Stability. Kinesis records are added to the list in a stable order and aren’t deleted when read; every consumer receives the records in the same order.
  • Cost-effective. A Kinesis stream can handle 1 KB messages at a rate of 500 messages per second for just $0.96/day.

The biggest drawback of Kinesis is that it can be tricky to use. A Kinesis stream is made up of shards (slices of capacity), and your team has to monitor, add, and manage those shards. This can be an operational burden.
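Here is a rough sketch of the producer and consumer sides with the AWS SDK for JavaScript (v2). The clickstream stream is hypothetical and must already exist; a real consumer would poll getRecords in a loop rather than read once.

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis({ region: 'us-east-1' });

// Producer: append a record to the stream (the durable "linked list").
async function produce() {
  await kinesis.putRecord({
    StreamName: 'clickstream', // hypothetical stream name
    PartitionKey: 'user-42',   // same key -> same shard, read back in order
    Data: JSON.stringify({ event: 'page_view', path: '/' }),
  }).promise();
}

// Consumer: each consumer reads independently with its own shard iterator.
async function consume() {
  const { StreamDescription } = await kinesis
    .describeStream({ StreamName: 'clickstream' }).promise();
  const { ShardIterator } = await kinesis.getShardIterator({
    StreamName: 'clickstream',
    ShardId: StreamDescription.Shards[0].ShardId,
    ShardIteratorType: 'TRIM_HORIZON', // start from the oldest available record
  }).promise();
  const { Records } = await kinesis.getRecords({ ShardIterator }).promise();
  Records.forEach((r) => console.log(r.Data.toString()));
}

produce().then(consume).catch(console.error);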

Compute: Lambda and EC2


Think of AWS Lambda as the code runner in the cloud. It is a serverless computing service, or FaaS (Function as a Service). It supports a wide array of potential triggers: HTTP requests, customer emails, client device synchronization, and more. AWS Lambda allows you to focus on your core product and business logic.

Lambda is great for small code snippets that rarely change, so think of it as part of your infrastructure or plugin system for other services.

Pro tip: The trick to Lambda is to treat it as a simple code runner rather than as a general purpose application host. Any other uses make it challenging and daunting to implement.
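In that spirit, a Lambda function is nothing more than a small exported handler. A minimal Node.js sketch, assuming an API Gateway proxy-style HTTP trigger (the event shape depends on the trigger you configure):

// handler.js: a minimal "code runner" style Lambda function.
exports.handler = async (event) => {
  // With an API Gateway proxy integration, query parameters arrive here.
  const name = (event.queryStringParameters || {}).name || 'World';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'text/plain' },
    body: `Hello ${name}\n`,
  };
};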

AWS also offers Amazon EC2 for compute and auto scaling. EC2 gives you a complete computer in the cloud in seconds. One advantage of EC2 over Lambda is that you don’t have to adapt your application to your host; you can run your software on EC2 without making changes.

EC2's pricing model is also excellent: you only pay for the number of seconds your instance is running, and there are many additional savings plans.

Amazon EC2 offers more than 250 instance types that are optimized for different needs. The network security provided by EC2 is the most daunting aspect of this service; there are many options to choose from, but the default option is good enough for most purposes.
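Launching an instance from code is similarly brief. A hedged sketch with the AWS SDK for JavaScript (v2); the AMI ID below is a placeholder and must be replaced with a real one from your region:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Launch a single small instance; billing is per second while it runs.
ec2.runInstances({
  ImageId: 'ami-0123456789abcdef0', // placeholder: a real AMI ID for your region
  InstanceType: 't3.micro',
  MinCount: 1,
  MaxCount: 1,
}).promise()
  .then((data) => console.log('Launched', data.Instances[0].InstanceId))
  .catch(console.error);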


The “Bad” Parts of AWS

We can’t discuss the good parts of AWS without looking at the "bad" stuff too. I don’t mean that these tools aren’t valuable or powerful, but for many cases, they are simply too complex. Remember: the pursuit of the “best” tool may actually be a hindrance. Some of these services fall into that category. Let’s take a look.

CloudWatch

CloudWatch is a monitoring and observability service. It provides actionable insights for application monitoring and system-wide performance changes. While CloudWatch is a powerful tool, it is not great for distributed systems, especially ones spread across multiple geographic regions. Making CloudWatch usable in these situations becomes overly complex and requires lots of effort for little reward. CloudWatch also doesn’t work well with things that change over time, such as auto scaling settings. If it’s too much of a problem, just don’t use it.

Kubernetes and Docker

Kubernetes and Docker are powerful tools, and they are far from bad services. However, they are complex and come with a notable learning curve. The main value they bring is the ability to scale, but the process of learning and integrating these services can lead to frustration and a lack of agility/flexibility. Having this extra layer is probably not worth it.

Amazon CodeCommit

CodeCommit is a managed source control service for hosting and scaling Git-based repositories. Though the pricing is reasonable and storage is unlimited, many developers will tell you not to use it. CodeCommit adds unnecessary complexity, such as a convoluted authentication process, randomized credentials, and lengthy processes to resolve common issues. Instead, stick to what you know: GitHub, which is reliable, well known, and well priced. You just don’t need CodeCommit.



Start your own AWS project from scratch

Now that you know what AWS has to offer, let’s make a basic web application. Normally, you build an application and its infrastructure step by step, tailoring each aspect to your needs. Today, I’ll show you how to start a simple web application by writing a Hello World application.

Note: The code from this exercise comes from The Good Parts of AWS by Daniel Vassallo.

To create our application, we need git and npm installed. We create a bare-bones application and a git repository to store it.

mkdir aws-bootstrap && cd aws-bootstrap
git init
npm init -y

terminal

Our application will wait for HTTP requests on port 8080 and respond with “Hello World”. The whole application fits in one file, server.js.

const { hostname } = require('os');
const http = require('http');

const message = 'Hello World\n';
const port = 8080;

// Respond to every request with a plain-text greeting.
const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end(message);
});

// hostname is a function from the os module; it is only used for
// logging, not as the address to bind to.
server.listen(port, () => {
  console.log(`Server running at http://${hostname()}:${port}/`);
});

We can run the application with the node command and test it with curl from another terminal.

node server.js
Server running at http://localhost:8080/

curl localhost:8080
Hello World

Now, we can add a process manager so that if our application crashes, it restarts automatically. We do this by adding pm2 as a dependency and wiring it up in our package.json file.

{
  "name": "aws-bootstrap",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "start": "node ./node_modules/pm2/bin/pm2 start ./server.js --name hello_aws --log ../logs/app.log",
    "stop": "node ./node_modules/pm2/bin/pm2 stop hello_aws",
    "build": "echo 'Building...'"
  },
  "dependencies": {
    "pm2": "^4.2.0"
  }
}

package.json

In the terminal, we use npm to install the dependency we just added to package.json.

npm install

Next, we create the logs directory manually, one level above the application directory, so it won’t be deleted when the application directory is replaced.

mkdir ../logs

Now, we can start the application with the process manager.

npm start

[PM2] Applying action restartProcessId on app [hello_aws](ids: [ 0 ])
[PM2] [hello_aws](0) 
[PM2] Process successfully started
┌────┬───────────┬──────┬───┬────────┬─────┬────────┐
│ id │ name      │ mode │ ↺ │ status │ cpu │ memory │
├────┼───────────┼──────┼───┼────────┼─────┼────────┤
│ 0  │ hello_aws │ fork │ 0 │ online │ 0%  │ 8.7mb  │
└────┴───────────┴──────┴───┴────────┴─────┴────────┘

terminal

Let’s see how that all works together. With server.js and package.json from above in place, npm start launches the application under pm2, and curl localhost:8080 once again returns:

Output: Hello World

We commit all our changes to git.

git add server.js package.json package-lock.json

git commit -m "Create basic hello world web application"

Now you have started the basic web application that you’ll deploy to AWS! It’s time to move on to more advanced concepts and build the rest of your infrastructure. Check out the resources below to see how to create automatic deployments, load balancing, network security, scaling, and much more.

Wrapping up and Resources

Now you are familiar with cloud computing, the best of AWS services, and the basics of making a web application. You’re ready to get started on your own!

Daniel Vassallo’s course The Good Parts of AWS: Cutting Through the Clutter walks you through everything you need to get started with AWS in the most efficient way. He introduces you to the features of AWS that form the backbone of the internet. By the end of the course, you’ll create a fully functioning web application with personalized AWS services.

Other essential resources

Top comments (1)

Andrew Brown 🇨🇦

The Bad Parts section was not very descriptive. It’s saying CloudWatch is bad without really saying why it’s bad.

EventBridge is CloudWatch Events, and it’s great for distributed systems. I don’t believe it limits you to a specific geographic region or creates more friction per region. So what are they even talking about here?