Article by Jay Allen
Learning AWS is complicated enough. But learning AWS is made more challenging when you're also still grappling with some of the major concepts of DevOps software deployments. In this article, I discuss two key concepts: stacks and stages. I also address how you can manage stacks and stages in AWS, along with other factors you need to consider when managing them in practice.
Stacks
In simplest terms, a stack is a unit of application deployment. Using stacks, developers can organize all of the resources required by an application or an application component as a single unit. This enables devs to deploy, tear down, and re-deploy their applications at will.
Stacks can be stood up manually. However, it's better on cloud platforms to program the creation of your stack - e.g., using a scripting language such as Python. The ability to script stack deployments is known as Infrastructure as Code and is a hallmark of cloud computing platforms. Scripting your application deployments and bundling them into stacks reaps multiple benefits:
- Once your stack code is fully debugged, you can deploy your application repeatably and reliably. Scripting stack deployments eliminates the errors that inevitably occur in manual deployments.
- You can tear down stacks that aren't being used with a single script or command. This saves your team and company money.
- You can parameterize stacks to deploy different resources or use different configuration values. This lets you deploy multiple versions of your application. (Remember this - it'll be important soon!)
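As a rough sketch of the idea in Python with boto3 (the AWS SDK), here a single stage-prefixed S3 bucket stands in for a whole application stack; the deploy() and teardown() functions and the bucket naming convention are purely hypothetical:

```python
import boto3

s3 = boto3.client("s3")

def deploy(stage: str) -> None:
    """Stand up the (tiny, hypothetical) stack for one stage."""
    # In a real stack this would create every resource the app needs;
    # here one stage-prefixed S3 bucket stands in for the whole stack.
    s3.create_bucket(Bucket=f"{stage}-myapp-assets")

def teardown(stage: str) -> None:
    """Tear the same stack back down when it's no longer needed."""
    s3.delete_bucket(Bucket=f"{stage}-myapp-assets")

if __name__ == "__main__":
    deploy("dev")      # repeatable, scripted deployment
    teardown("dev")    # and an equally scripted teardown
```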
Stages
A stage, by contrast, is a deployment of your application for a particular purpose. With stages, you can deploy your application multiple times to vet its functionality with an increasingly larger number of users. Typical stages can include:
- Dev for developer coding and experimentation (only available to your dev team)
- Test for running unit tests (available to dev, test, and internal stakeholders)
- Stage for user acceptance testing (available to external alpha/beta testers)
- Prod for your publicly facing application (available to all customers)
Stages are part of CI/CD pipelines, which I've discussed in detail before. By constructing your application as a pipeline, you can "flow" app changes from one stage to the next as you test them in each environment. This lets you vet changes multiple times in limited, controlled environments before releasing them to your users.
Stacks and Stages: Better Together
Stacks and stages are a powerful one-two combination. With a properly parameterized stack, you can create whatever stages your application needs. Because you create each stage using the same source code, each stage's stack will contain the same resources and perform the same way as every other stage.
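Continuing the hypothetical deploy() sketch from earlier, standing up every stage from the same code could be as simple as:

```python
# Hypothetical: one copy of the stack per stage, all from the same source code.
for stage in ["dev", "test", "staging", "prod"]:
    deploy(stage)
```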
Stacks on AWS
AWS fully embraces Infrastructure as Code. Nearly anything you can accomplish manually with the AWS Management Console can also be created programmatically.
On AWS, you have several options for creating stacks.
AWS CloudFormation
AWS CloudFormation is the official "AWS way" of creating stacks. Using CloudFormation, you can write templates using either JSON or YAML that specify which AWS resources your stack contains.
CloudFormation isn't an imperative programming language like Python. Instead, it uses a declarative format for creating resources. This simplifies creating your infrastructure, as you don't need to be an expert in a particular programming language to stand up resources. Many CloudFormation templates can be constructed by making small tweaks to publicly available templates. (AWS itself hosts many such sample templates and snippets.)
A key feature of CloudFormation is its support for parameters. Rather than hard-code values, you can declare them as parameters and supply them at run time when you create the stack in AWS. For example, AWS's sample template for deploying Amazon EC2 instances defines parameters named KeyPair, InstanceType, and SSHLocation. By parameterizing these values, the same template can be used multiple times to create EC2 instances of different sizes, in different networks, and with different security credentials.
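As a rough sketch of supplying those values at run time, here's how creating such a stack might look with boto3; the stack name, template URL, and parameter values are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation")

# Supply parameter values at run time; the same template can be reused
# for differently sized instances, networks, and key pairs.
cfn.create_stack(
    StackName="my-ec2-stack-dev",
    TemplateURL="https://example.com/ec2-sample-template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyPair", "ParameterValue": "dev-keypair"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"},
        {"ParameterKey": "SSHLocation", "ParameterValue": "203.0.113.0/24"},
    ],
)
```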
The great thing about CloudFormation templates is that they make stacks both easy to turn on and easy to turn off. Deleting an instance of a CloudFormation template automatically cleans up the entire stack and deactivates all of its resources.
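For example, removing the hypothetical stack from the sketch above takes a single call (plus an optional waiter):

```python
import boto3

cfn = boto3.client("cloudformation")

# One call removes the stack and cleans up every resource it created.
cfn.delete_stack(StackName="my-ec2-stack-dev")
cfn.get_waiter("stack_delete_complete").wait(StackName="my-ec2-stack-dev")
```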
Your Favorite Programming Language
Not everyone wants to learn a new declarative language to create stacks. And some stacks might require the fine-grained control that an imperative programming language offers.
Fortunately, AWS also produces software development kits (SDKs) for a variety of languages. Developers can use Python, Go, Node.js, .NET, and many other languages to automate the creation and deletion of their stacks.
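As a sketch of the SDK approach in Python (boto3), the AMI ID and key pair name below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a stack resource directly with the SDK instead of a template.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    KeyName="dev-keypair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# With the SDK, you're also responsible for deleting what you create.
ec2.terminate_instances(InstanceIds=[instance_id])
```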
Which is Better?
CloudFormation's major advantage is simplicity. In particular, CloudFormation makes deleting stacks a breeze. By contrast, with a programming language, you need to program the deletion of every resource yourself.
However, using a programming language for stack management offers much greater control than CloudFormation. For example, let's say that a resource fails to create. This can happen sometimes, not because you did anything wrong, but due to an underlying error in AWS, or a lack of available resources in your target region.
Using CloudFormation, a failed resource will cause the stack to stop and roll back everything you've created. Using a programming language, however, you could detect the failure and handle it more gracefully. For example, you might retry the operation several times with exponential backoff.
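Here's a minimal sketch of that pattern; create_resource stands in for whatever SDK call failed:

```python
import time

def create_with_backoff(create_resource, max_attempts=5):
    """Retry a flaky resource-creation call with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return create_resource()
        except Exception as error:  # in practice, catch your SDK's specific exceptions
            if attempt == max_attempts:
                raise
            delay = 2 ** attempt  # wait 2s, 4s, 8s, ... between attempts
            print(f"Attempt {attempt} failed ({error}); retrying in {delay}s")
            time.sleep(delay)
```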
Your choice between CloudFormation and programming language may also be affected by feature parity. In the past, some AWS teams have released features with SDK support but no initial CloudFormation support.
Many of these issues with CloudFormation can be addressed using a hybrid CloudFormation/code approach. Using CloudFormation custom resources, you can run code in AWS Lambda that orchestrates the creation of both AWS and non-AWS resources. You can also perform other programming-related tasks that might be required for your stack, such as database migration.
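A minimal sketch of such a custom resource handler, assuming the function code is defined inline in the template so AWS's cfnresponse helper module is available, and with run_database_migration() as a hypothetical stand-in for your own logic:

```python
import cfnresponse  # AWS-provided helper, available when the Lambda code is inline in the template

def run_database_migration():
    """Hypothetical placeholder for your own logic (e.g., a schema migration)."""
    pass

def handler(event, context):
    """Minimal CloudFormation custom resource handler."""
    try:
        if event["RequestType"] == "Create":
            run_database_migration()
        # Updates and deletes would get their own handling here.
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        # Always respond, or CloudFormation waits until the stack operation times out.
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```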
In the end, both approaches work fine. My personal recommendation would be to use AWS CloudFormation in conjunction with custom resources when needed. CloudFormation is well-supported and can easily be leveraged by other AWS features (as we will see shortly).
Stages on AWS
The easiest way to manage stages on AWS is by using AWS CodePipeline.
CodePipeline performs two major services. First, it orchestrates multiple AWS services to automate every critical part of your application deployment process. Using CodePipeline, you can ingest code from your code repository (such as GitHub), compile it using AWS CodeBuild, and deploy your application's resources using (you guessed it) AWS CloudFormation.
Second (and most important for today's discussion), CodePipeline supports defining separate stages for your application. When you create a pipeline, you define stages that handle importing your source code from source control and building the code. From there, you can add additional deployment stages for dev, test, stage, prod, etc.
In the screenshot below, you can see a minimal deployment pipeline. The third step after the CodeBuild project is a dev stage, intended for developer vetting of new changes.
We could easily add a new stage to our pipeline by clicking Edit and then clicking the Add Stage button.
After you add a stage, you can add one or more action groups. Action groups support a large number of AWS services, including AWS CloudFormation. For our test stage, for example, we could add two action groups:
- A manual approval. This would stop changes from the dev branch from flowing to test automatically until someone approved the change in the AWS Management Console (e.g., after performing a code review).
- An AWS CloudFormation template to deploy our infrastructure stack for the test stage.
When using a CloudFormation template with CodePipeline, you can specify a configuration file that passes in the parameters the template needs to build that stage properly. This might be as simple as prefixing created resources with the name "test" instead of "dev", or as complicated as specifying a data set to load into your database for testing.
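The template configuration file itself is plain JSON. A small sketch of generating one for a hypothetical test stage (the Stage and ResourcePrefix parameter names are placeholders your own template would define):

```python
import json

# Hypothetical template configuration file for the "test" stage.
# CodePipeline hands the "Parameters" values to CloudFormation at deploy time.
config = {
    "Parameters": {
        "Stage": "test",
        "ResourcePrefix": "test",
    }
}

with open("test-stack-configuration.json", "w") as f:
    json.dump(config, f, indent=2)
```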
Managing Stacks and Stages in Practice
In theory, stacks and stages are pretty simple concepts. In practice, however, it takes a lot of work and fine-tuning to get your CI/CD pipeline to the point where you can deploy your application reliably across multiple stages. Your team also needs to make some up-front decisions about how it's going to manage its source code and work product.
Below are just a few factors to consider when devising your approach to stacks and stages on AWS.
Source Code Branching
A key up-front decision with stacks and stages is how your team will flow changes from development into production. A big part of this decision is how you manage branches in source control.
There are multiple possible branching patterns. On his Web site, programming patterns guru Martin Fowler has documented the key strategies in excruciating detail. On their Web site, Microsoft offers a simpler, more prescriptive approach:
- Define feature branches that represent a single feature per branch.
- Use pull requests in source control to merge feature branches into your main branch for deployment.
- Keep your main branch clean and up to date.
This is, of course, only one way to do things. The important thing is that your branching strategy is clean, simple, and easy to manage. Complex branching strategies that require multiple merges and resolution of merge conflicts become a nightmare for development teams and slow down deployment velocity.
Unit of Deployment
Another fundamental consideration is the unit of deployment - i.e., how much of your application do you deploy at a time?
Many legacy applications deploy an application's entire stack with every deployment. This so-called monolithic architecture is easy to implement. However, it lacks flexibility and tends to result in hard-to-maintain systems.
The popular alternative to monoliths is microservices. In a microservices architecture, you break your application into a set of loosely-coupled services that your application calls. You can get incredible deployment flexibility with microservices, as you can bundle each service as its own stack. However, managing versions and service discovery in a complex Web of microservices can be daunting.
You can also take an in-between approach. Some teams divide their apps up into so-called "macroservices" or "miniservices" - logical groupings of services and apps that can each be deployed as a single unit. Such deployments avoid the downsides of monolithic deployment while also steering clear of the complexity of microservices.
Data Management
Next, there's how you'll manage data. At a minimum, your team needs to consider how to handle:
- Loading data into a dev/test/staging system for testing purposes.
- Managing schema changes to your data store (e.g., adding new tables/fields to relational database tables with a new release).
Some development frameworks, such as Django, include an Object-Relational Mapper (ORM) - or an Object-Document Mapper (ODM) for document databases - that can automate database migrations. In these cases, your application simply needs a way to trigger a migration using the relevant scripts. The AWS Database Blog has some detailed tips for incorporating database migrations into a pipeline.
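As a sketch of what triggering a Django migration from a deployment step might look like (the settings module name is a placeholder):

```python
import os
import django
from django.core.management import call_command

# Point Django at your project's settings module (placeholder name).
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
django.setup()

# Apply any pending schema migrations before the new code goes live.
call_command("migrate", interactive=False)
```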
Managing Secrets
While automation is great, it introduces a devilish problem: managing secrets. Your application can access most AWS services using an AWS Identity and Access Management (IAM) role. However, it will likely also need to connect to other resources - databases, source control systems, dependent services - that require some sort of authentication information, such as access and secret keys.
It can't be said clearly enough: storing secrets in source code is a huge no-no. And storing them in plain text somewhere (like an Amazon S3 bucket) isn't any better.
Fortunately, AWS created AWS Secrets Manager for just this purpose. Using Secrets Manager, you can authorize your application via IAM to read sensitive key/value pairs over a secure connection. You can even use CloudFormation to store secrets for resources such as databases in Secrets Manager as part of building a stack.
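Reading a secret at run time is a short boto3 call; the secret name and the keys inside it are placeholders:

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch the secret over TLS; IAM controls who may call GetSecretValue.
response = secrets.get_secret_value(SecretId="prod/myapp/database")  # placeholder name
credentials = json.loads(response["SecretString"])

# Key names depend on how the secret was stored; username/password is a common convention.
db_user = credentials["username"]
db_password = credentials["password"]
```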
Conclusion
Stacks and stages are cornerstone concepts of DevOps deployments. Once you can deploy your application as a single unit or collection of units, you can spin up any environment you need at any time. The payoff? Faster deployments and more reliable applications - and, as a consequence, happy customers!