DEV Community

Jason Skowronski for Heroku


The Easiest Way to Run Microservices: Comparing AWS and Heroku

It might be a truism, but the companies that perform best are the ones that execute fastest and deliver the best customer experience. For developers, that means streamlining their daily workflow so they can ship features faster. In the world of modern microservices, developers can ship faster because they break up complex monoliths into smaller, more manageable services. However, operating a microservice architecture is no easy task either. When you operate a growing list of services, you need a way to quickly get them running and push updates continuously.

One of the most popular platforms to run any kind of online service is AWS. While it’s a standard choice for many companies, how does it compare against other options? While others have written high-level comparisons of AWS and Heroku, let’s consider a real-world example of how much effort it takes to get a microservice running on both. We’ll walk through the exact steps required to install the same app on each platform, so we can see which is easiest.

The Complexities of Deploying Highly-available Microservices

Going from an idea to a URL involves many steps, especially if you want your service to automatically scale and be fault-tolerant. Setting up servers and databases is just the first layer in the stack you will have to configure. In a deployment with fault-tolerance, ensuring your server configurations are identical and databases are in sync across regions can be challenging. You need to configure a VPC with specific subnets or set up routing groups due to possible compliance requirements. User authentication and key management is another component that will need constant maintenance. Then there is DNS management, autoscaling, failover configuration, OS configuration, logging, and monitoring that all need to be set up in a typical EC2 environment. All the components outlined in the pipeline below will need to be configured and constantly maintained. You can see how this can quickly get out of hand with a large environment.

Development and deployment tasks

The Test Case

We’ll compare the steps involved in setting up services on AWS EC2 and Heroku, so you can see which will save you the most time. If you haven’t heard of Heroku before, it’s a platform for running web apps or web services. You can think of it as a layer built on top of AWS that is designed to make deploying, operating, and iterating on web apps easier. They are also compatible with each other, so you can have a hybrid environment with some services running on Heroku and others on EC2.

To make a fair comparison, we’ve created a simple application and will deploy it on both platforms. It uses a simple text form to submit orders for processing and displays a list of the submitted orders.

RESTful Order Interface

We’ll use a LAMP architecture since it’s common to many apps with a web frontend and a database. We’ll also add a backend service to show the deployment process for microservices. The servers will use code from the same GitHub repository. We’ll use features from both platforms to implement a solution that is fault-tolerant and scalable. The services and database should be resilient to failures and downtime. This is implemented differently on each platform, so you’ll see how they compare.

Steps in Setting Up Services on AWS

If you’ve previously deployed applications on AWS with high availability, then this setup will probably be familiar to you. We’ll provide an overview of the steps for comparison. The environment in AWS will require several components and will start with creating two AWS EC2 instances in the default VPC, inside separate availability zones. Traffic will be distributed to the first available instance in each availability zone using Route 53 with an Elastic Load Balancer. A Multi-AZ MySQL master database will support the backend of the application with an active failover standby instance in a separate availability zone. We have included a topology of the simple microservice application below.

Architecture diagram on AWS

First, we will deploy the PHP application that will allow us to take in some orders, and then we will provision an independent order status checking microservice. We will start with the default VPC and subnets. Out of the box, Amazon Linux EC2 instances will support PHP applications with the least amount of configuration.

To be concise, we’ll provide an overview of the necessary steps. Each step involves many sub-steps, and we’ve provided links to AWS documentation if you need more detail on how to perform them. If you’re an expert at AWS, you’ve probably performed similar steps many times.

Step 1: Create the first two micro AWS Linux instances, one in each availability zone. These servers will host the primary UI for the order processing service.

Step 2: Create a new RDS MySQL database. Specify that this will be a public Multi-AZ deployment.

Step 3: Now that we have the Linux and MySQL instances running, let's connect our EC2 instances and begin the installation of dependent software like PHP and Apache.

Step 4: Enable the mod_rewrite module within Apache. This will allow the application to have clean URLs when calling the REST URL later.
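As an illustrative sketch (the actual rules ship in the demo repo’s .htaccess file), a rewrite rule for clean URLs might map a path like /order/15478958 onto the PHP endpoint:

```apache
RewriteEngine On
# Map a clean URL like /order/15478958 to the PHP endpoint
RewriteRule ^order/([0-9]+)$ api.php?order_id=$1 [L,QSA]
```

The [QSA] flag preserves any extra query-string parameters, and [L] stops further rule processing once the rewrite matches.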

Set Up the API

Step 5: Now we can upload the site content to the web directory of each server and begin development. Upload the site files from our demo app GitHub repository.

The api.php file will provide the interface that will allow us to query the backend database from the UI. The db.php file will make a direct connection to the RDS database. Be sure to use the endpoint URL that was saved when you created the RDS database earlier. To validate, first we can insert some dummy data into the database with the following MySQL command:

INSERT INTO `transactions` (`id`, `order_id`, `amount`, `response_code`, `response_desc`) VALUES
(1, 15478952, 100.00, 0, 'PAID'),
(2, 15478955, 10.00, 0, 'PAID'),
(3, 15478958, 50.00, 1, 'FAILED'),
(4, 15478959, 60.00, 0, 'PAID');

This command will create four order entries in the database with the order status, PAID or FAILED. Now that there is data in the database, we will need to test the frontend UI to confirm that the queries to the database are successful.
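To see the kind of lookup api.php will run against this data, here is a minimal local sketch using sqlite3 as a stand-in for the RDS MySQL instance (the table shape is inferred from the INSERT statement above; the real app queries MySQL via PHP):

```python
import sqlite3

# In-memory stand-in for the RDS MySQL database, with the same
# table shape inferred from the INSERT statement above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions (
    id INTEGER PRIMARY KEY,
    order_id INTEGER,
    amount REAL,
    response_code INTEGER,
    response_desc TEXT)""")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?, ?, ?)",
    [(1, 15478952, 100.00, 0, 'PAID'),
     (2, 15478955, 10.00, 0, 'PAID'),
     (3, 15478958, 50.00, 1, 'FAILED'),
     (4, 15478959, 60.00, 0, 'PAID')])

# The kind of lookup api.php performs for a given order_id
row = conn.execute(
    "SELECT order_id, amount, response_desc FROM transactions "
    "WHERE order_id = ?", (15478958,)).fetchone()
print(row)  # (15478958, 50.0, 'FAILED')
```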

Step 6: Before we can check, we need to create a load balancer and add each server to the target group under the specific availability zone.

Step 7: Navigate to Route 53 and create an A record for your domain. In this case, order.apexcloudnetworks.com will be used. Then, select the Elastic Load Balancer for the value, and failover for the Routing Policy.

When the A record is complete, you can navigate to the main frontend URL to query the API for an order. To query the API, enter the URL request in the following format:

http://order.domain.com/api.php?order_id=15478958

You can then see the response returned by the API with Order ID, Amount, and status:

JSON order response
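A hedged client-side sketch of consuming that response follows; the JSON field names here are assumptions based on the fields described above (order ID, amount, and status), not confirmed from api.php:

```python
import json

# Hypothetical response body for order_id=15478958; the exact
# field names depend on api.php and are assumed here.
body = '{"order_id": 15478958, "amount": 50.00, "response_desc": "FAILED"}'

order = json.loads(body)
print(order["order_id"], order["response_desc"])  # 15478958 FAILED
```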

At this point, you have a completed application. The load balancer is routing requests across multiple servers within two separate availability zones. Should a server or availability zone fail, it will be backed up by identical servers and a database in an alternate zone.

Building the Microservice

Now we’ll add an independent microservice to check order status. A microservice will give our development teams the ability to streamline application development and eliminate dependencies.

Step 8: To begin building the microservice, start by provisioning identical EC2 Linux instances in the same default VPC, one in each availability zone.

Step 9: On the new EC2 instances, be sure to install the same dependent packages.

Step 10: Upload the same .htaccess, index.php, api.php, and db.php files to the new servers.

Step 11: Add the two new servers to the Load Balancer target group. After the servers have been added, we can set up a new URL for the microservice API.

Step 12: From Route 53, point the new check domain at the load balancer that was created in step 6. Be sure to set the routing policy to failover.

Step 13: Navigate to the check domain, validate the API is querying the database successfully, and records are returned.

More order JSON

With the API check a success, we can finish building out a microservice user interface to make checking orders a little easier. The index.php file uploaded in step 10 will create an HTML form that queries for order status. The .htaccess file will allow Apache mod_rewrite, which you enabled earlier, to present clean URLs when searching from the microservice UI.

Step 14: Navigate to the check URL, and you will see the updated HTML form. Enter one of the sample search IDs to query for order status:

Check order

Now you have an independent microservice that can also be called from any other service that is added to the environment. Plus, you have also improved availability and redundancy. Even if you shut down the primary server or the Master database, the second server will remain online and will continue to serve order status requests from customers.

We know this process involves multiple steps. Aside from using an automated configuration management solution like CloudFormation, which also takes extra work to configure, there is no easy way around these steps on AWS. Over time, it becomes routine chore work.

Steps in Setting Up Services on Heroku

Now let’s compare how many steps it takes to create the same system using Heroku.

Step 1: We will use the same PHP application above to deploy on Heroku. Create a new Heroku account and begin by setting up the CLI by following the PHP getting started instructions. While still in the existing application code directory, create the Heroku application with the following command:

heroku create devorders

Check the URL created by Heroku. You will see the default application created for you. Wow! That’s only one command, and we have a basic server running! Pretty cool, right?

Empty Heroku app web page

Setting Up the Database

Step 2: To provision the database, you will need to use the ClearDB MySQL add-on for Heroku. Provision a shared MySQL instance with the following command:

heroku addons:create cleardb:ignite

ClearDB, one of Heroku’s add-on partners, will automatically create a multi-instance database cluster in separate availability zones to guarantee data availability. Now get the database connection details with the following command:

heroku config | grep CLEARDB_DATABASE_URL

Using the connection details obtained from the command above, import the transactions SQL file included with the application. Log into the database and confirm the table has been created.

MySQL CLI database setup

Committing the Code

Step 3: Update the api.php file with the application URL and replace the contents of the db.php with the following code:

<?php
$url = parse_url(getenv("CLEARDB_DATABASE_URL"));

$server = $url["host"];
$username = $url["user"];
$password = $url["pass"];
$db = substr($url["path"], 1);

$conn = new mysqli($server, $username, $password, $db);
?>

You will not need to hard-code any usernames or passwords in this configuration file. Heroku stores this data for you in the CLEARDB_DATABASE_URL environment variable.
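The parsing the PHP snippet performs can be sketched in Python as well; the connection URL below is an example value only (the real one comes from the CLEARDB_DATABASE_URL config var set by the add-on):

```python
from urllib.parse import urlparse

# Example value only -- in practice this comes from
# os.environ["CLEARDB_DATABASE_URL"] on the dyno.
url = urlparse("mysql://bdf31x:secret@us-cdbr-east.cleardb.com/heroku_db1?reconnect=true")

server = url.hostname            # database host
username = url.username
password = url.password
db = url.path.lstrip("/")        # path minus the leading slash
print(server, username, db)
```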

Step 4: At this point, you are ready to commit the code with the following command:

git push heroku master

PHP app building on Heroku

Navigate to the devorders.herokuapp.com URL that was created to validate that the application has been published successfully. You can see we are successfully connected to the database and retrieving data from the API call.

Okay, so now our frontend is running. You’ll notice we didn’t configure any DNS or VPC settings, and we didn’t manually upload any code to the server. Heroku automates those steps for us.

Order JSON

Creating the Microservice

Step 5: To create the microservice, simply create a new application with the following command:

heroku create orderchecking

Step 6: Next, clone the existing application repository and cd into the new directory.

git clone

Step 7: Once in the cloned application directory, set the repository to the orderchecking repo, and deploy the code with the following commands:

heroku git:remote -a orderchecking
git add .
git commit -am "Adding a new Micro Service"
git push heroku master

Build PHP app on Heroku

Navigate to the microservice URL orderchecking.herokuapp.com/index.php to check the status of orders from the user interface.

Check order again

You can see we can successfully connect to the orders database, and call the API to check customer order status. Heroku has provisioned an isolated service that can independently call the MySQL database cluster. Developers can also work on this application and make changes without impacting the production master branch.

Now for the real magic! You can simply scale the application with a single command:

heroku ps:scale web=2

The “2” indicates the number of dynos (server instances) that you want to run your application on; there’s no need to provision new servers or configure auto-scaling groups or load balancers. All application traffic is automatically distributed evenly across the running instances.

Conclusion

Infographic comparing AWS and Heroku

You can see that with Heroku, we have eliminated about half of the steps versus the manual AWS deployment. Also, the steps needed to provision a Heroku environment are often one-liner commands, whereas AWS often requires multiple commands or configurations per step. On Heroku, VPC configuration and network management are done for you, as is the process of deploying code to the server. We didn’t show it, but with a single command, you can also deploy to multiple regions around the world or install hundreds of add-ons for services like monitoring, caching, and more.

On the other hand, AWS does offer a wide array of services that Heroku does not, such as data warehousing, S3 storage, and AI. Additionally, you might have teams using AWS already, and you need a way to talk to those services securely. Since Heroku runs on AWS, you can securely connect to services in AWS through VPC peering. This gives your team the flexibility to use the best of both platforms.

There are many more pros and cons we could consider when comparing running microservices on AWS vs Heroku. Really, it comes down to what the priorities are for you and your team. What pros and cons are most important to you? Why?

To learn more, check out Heroku’s Dreamforce presentation to view a great step-by-step video that outlines deploying multi-region microservices that are highly available and secure. As you watch, consider how much time would be needed to implement the same on AWS yourself.

Top comments (7)

Darcy Rayner

This is a great article. Although, wouldn't AWS Lambda be a fairer comparison? You don't have to worry about configuring autoscaling, setting up Apache, or setting up DNS (API Gateway does this for you; you can set up your own DNS if you need a custom URL), and you skip all the VPC steps (even with RDS, which supports an HTTP API now). Using something like the Serverless Framework is simple and pretty concise. It's also typically cheaper if you have an unpredictable load, because you pay per use and don't have to over-provision dynos. As for databases, with Aurora Serverless for RDS or DynamoDB On-Demand for NoSQL, you don't have to worry about autoscaling. I know they don't have a PHP environment for Lambda (although there are workarounds), which is the main thing that breaks the comparison. Does Heroku have a serverless offering?

Jason Skowronski

Serverless functions are a great way to run microservices, depending on your team's needs. Serverless isn't exactly a direct comparison since Heroku offers dynos, which support long-running processes and run a full app or web server. It's more closely related to what you'd get on an EC2 instance or AWS Fargate. I figured EC2 is more widely used so I picked that for my comparison.

Lambda is geared more towards short-running processes and has an execution limit of a few minutes, whereas a dyno runs continuously. This has some advantages for operations requiring more compute or high-latency operations such as coordinating multiple service calls. You'll also run into fewer issues with warm-up time, and it's potentially lower cost for ongoing usage. It may also be quicker to port an existing service built on Django or Express to a dyno than to rewrite it as a serverless function. Each team would have to consider which is best for their specific needs.

Yes Aurora Serverless and Dynamo are also great backend database options, again depending on your use case. You can call any database from Heroku so you can choose which you want to use. Traditional RDS is probably more widely used at the moment so that's why I used it for my comparison.

Raul Castro

Great article! I have my app directly attached to GitHub running on Heroku, so I just push it and focus on the private repo at GitHub...

Rafael "Bleidi" Souza

Nice article! Although it's not fair to compare an Infrastructure-as-a-Service (IaaS) with a Platform-as-a-Service (PaaS). I also think Elastic Beanstalk could have made it fair.

Shenril

Thank you for the explanation!
I also feel this isn't really a fair comparison. AWS offers Quick Starts, Elastic Beanstalk, or Lambda, which would cut the steps you're describing in half.

John Munson

Comparing to AWS Lambda would not be comparing apples to apples. The code would likely need to be modified to fit with the handler format that Lambda expects. There are cold starts to consider. The list goes on.

However, comparing to Beanstalk probably would’ve made sense.

Samwel Emmanuel

Great article! Though I would caution readers on the onset that this is an apples to oranges comparison. Heroku is a PaaS, AWS is an IaaS.