This project leverages Terragrunt, Terraform, and GitHub Actions to deploy a basic web app (dockerized JS frontend and dockerized Python API) to AWS ECS.
Check out the full source code on GitHub: https://github.com/visini/terragrunt-github-actions-aws-ecs
Initial Setup
Terragrunt is a thin wrapper for Terraform that provides extra tools for working with multiple Terraform modules, remote state, and locking. It also provides a powerful and flexible way to hierarchically provide configuration to Terraform, without duplicating code across environments, AWS regions, and AWS accounts – keeping your Terraform config DRY.
The following hierarchy is proposed (aligned with the directory structure):

- `terragrunt.hcl` with configuration for remote_state and AWS provider
- `common.terragrunt.hcl` defining common, project-specific variables
- `account.terragrunt.hcl` for each account
- `region.terragrunt.hcl` for each region within an account
- `environment.terragrunt.hcl` for each environment within a region
This allows for flexible configuration: simply add folders and adjust the configuration files (an illustrative layout follows below), for instance configuring...

- Accounts: `main` and `secondary`
- Regions: `eu-west-1` and `us-east-1` in `main` vs. `us-east-1` in `secondary`
- Environments: `prod` in `main` regions vs. `stage` and `dev` in `secondary` regions
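To make the layering concrete, here is an illustrative directory layout; the folder names are hypothetical, and the companion repository's actual structure may differ slightly:

tf
├── terragrunt.hcl                # remote_state + AWS provider generation
├── common.terragrunt.hcl         # project-wide variables
└── main                          # one folder per AWS account
    ├── account.terragrunt.hcl
    └── eu-west-1                 # one folder per region
        ├── region.terragrunt.hcl
        └── prod                  # one folder per environment
            ├── environment.terragrunt.hcl
            └── terragrunt.hcl    # leaf config, includes the root

Each leaf `terragrunt.hcl` then only needs to include the root configuration, which in turn discovers the layered files via `find_in_parent_folders()`:

include {
  path = find_in_parent_folders()
}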
Workflow via GitHub Flow
This project leverages GitHub Flow to gradually merge changes from experimental branches (deployed to experimental environments) into more mature branches and environments.
The companion repository contains all functionality needed to deploy code to AWS ECS simply by adopting GitHub Flow principles. All integration and deployment steps are managed by GitHub Actions workflows, including unit testing, building and pushing Docker images, and releasing new images to the correct ECS cluster via Terraform and Terragrunt. Creating a branch, pushing, opening a pull request, and merging it after verifying the checks are the only steps needed to deploy new features with this approach.
Assuming running staging and production environments, here's how to deploy the changes for a recent feature "foo" to the staging and production environments:
Step 1 - Deployment to staging environment `stage` via branch `dev`

- Create a new branch `feature/foo` and check it out: `git checkout -b feature/foo`
- Push to remote and set up to track remote branch `feature/foo` from `origin`: `git push --set-upstream origin feature/foo` (the full command sequence is summarized after this list)
- Open a pull request from branch `feature/foo` to branch `dev` to plan deployment to the `stage` environment
- Wait for checks to complete: Workflow `terragrunt` will post the terraform `plan` as a comment to the pull request
- Additional checks may include unit tests: see `pytest-api.yml`
- After verifying the terraform `plan`, merge the pull request into branch `dev`
- Workflow `terragrunt` will run again and `apply` the deployment for the `stage` environment
- Code (and infrastructure) changes from branch `feature/foo` are now released to the `stage` environment
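For reference, a minimal sketch of the Git commands for this step, assuming a clean working tree and the branch names used above:

# Create and check out the feature branch
git checkout -b feature/foo
# Commit your changes (message is illustrative)
git add . && git commit -m "Add feature foo"
# Push and track the remote branch; then open a PR to dev on GitHub
git push --set-upstream origin feature/foo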
Step 2 - Deployment to production environment `prod` via branch `main`

- After verifying the deployment to the `stage` environment (e.g., via e2e-testing), open a pull request from branch `dev` to branch `main` to plan deployment to the `prod` environment
- Wait for checks to complete: Workflow `terragrunt` will post the terraform `plan` as a comment to the pull request
- After verifying the terraform `plan`, merge the pull request into branch `main`
- Workflow `terragrunt` will run again and `apply` the deployment for the `prod` environment
- Code (and infrastructure) changes to branch `dev` originating from branch `feature/foo` are now released to the `prod` environment
Configure Infrastructure and Deployment Targets
The hierarchical configuration via Terragrunt is enabled by a main configuration file into which all other, more granular configuration files are imported. In `terragrunt.hcl`, both the remote state and the AWS provider are defined according to values in the more specific configuration files.
Both remote state and provider are dynamically defined for each deployment target (e.g., `prod` vs. `stage` environment) and AWS account ID (e.g., `main` vs. `secondary` account). This means that the `prod` and `stage` environments (which may even reside in two separate AWS accounts) adopt separate remote state backend configurations, depending on which environment subfolder `terragrunt` commands are executed from. Review this file and the nested Terragrunt configuration files in the companion repository for the detailed implementation; a rough sketch of the elided blocks follows after the excerpt below.
locals {
common = read_terragrunt_config(find_in_parent_folders("common.terragrunt.hcl"))
account = read_terragrunt_config(find_in_parent_folders("account.terragrunt.hcl"))
region = read_terragrunt_config(find_in_parent_folders("region.terragrunt.hcl"))
environment = read_terragrunt_config(find_in_parent_folders("environment.terragrunt.hcl"))
}
remote_state {
backend = "s3"
# remote_state dynamically configured based on:
# local.region.locals.aws_region
# local.account.locals.aws_account_id
# local.common.locals.app_name
# ...
}
generate "provider" {
# AWS provider dynamically configured based on:
# local.region.locals.aws_region
# local.account.locals.aws_account_id
# ...
}
# The following variables apply to all configurations in this subfolder
# and are automatically merged into the child `terragrunt.hcl` config
# via the include block.
inputs = merge(
local.common.locals,
local.account.locals,
local.region.locals,
local.environment.locals
)
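As a rough illustration of what the elided blocks might contain (a minimal sketch, not the companion repository's exact code; the bucket and lock-table naming scheme is hypothetical):

remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    # Hypothetical naming scheme for the state bucket
    bucket         = "${local.common.locals.app_name}-terraform-state-${local.account.locals.aws_account_id}"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = local.region.locals.aws_region
    encrypt        = true
    dynamodb_table = "${local.common.locals.app_name}-terraform-locks"
  }
}
generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region              = "${local.region.locals.aws_region}"
  allowed_account_ids = ["${local.account.locals.aws_account_id}"]
}
EOF
}

This way, running `terragrunt` from, say, a `stage` subfolder automatically selects that environment's state key and provider settings.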
Common variables that apply to the whole project, such as the app name, base domain name, and hosted zone name, are configured in a separate file:
# The following define common variables for the project.
# They are automatically pulled in in the root terragrunt.hcl
# configuration to feed forward to the child modules.
locals {
app_name = "example-app"
app_domain_name = "app.example.com"
route53_hosted_zone_name = "example.com"
use_existing_route53_hosted_zone = true
github_sha = "will_be_automatically_set_by_github_actions_or_manual_script"
}
Any number of AWS accounts may be configured, each related to any number of regions and environments to be deployed via that AWS account:
# Set account-specific variables. They are automatically
# pulled in to configure the remote state bucket in the root
# terragrunt.hcl configuration.
locals {
account_name = "main"
aws_account_id = "123456789000"
aws_profile = "default"
}
Similarly, the region configuration is provided at the nested level:
# Set region-specific variables. They are automatically
# pulled in to the root terragrunt.hcl configuration to
# feed forward to the child modules.
locals {
aws_region = "eu-west-1"
}
Environment configuration regarding both infrastructure and containers is provided at the most deeply nested level. To illustrate a common use case: the `stage` environment may override variables previously defined higher up in the hierarchy, in `tf/common.terragrunt.hcl`, for instance adding a prefix `stage.` to `app_domain_name`. A hypothetical example of the elided `service_configuration` map follows after the excerpt below.
# Set environment-specific variables. They are automatically
# pulled in to the root terragrunt.hcl configuration to
# feed forward to the child modules.
locals {
common = read_terragrunt_config(find_in_parent_folders("common.terragrunt.hcl"))
account = read_terragrunt_config(find_in_parent_folders("account.terragrunt.hcl"))
region = read_terragrunt_config(find_in_parent_folders("region.terragrunt.hcl"))
# Configure environment
environment = "stage"
app_domain_name = "stage.${local.common.locals.app_domain_name}"
app_name = local.common.locals.app_name
aws_account_id = local.account.locals.aws_account_id
aws_region = local.region.locals.aws_region
parameter_group = "${local.app_name}/${local.environment}"
service_configuration = {
# ...
}
}
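The `service_configuration` map is elided above; purely as an illustration, such a map could hold per-service settings along these lines (names and values are hypothetical, not the companion repository's actual schema):

# Hypothetical example, illustrative only
service_configuration = {
  api = {
    cpu            = 256
    memory         = 512
    desired_count  = 2
    container_port = 8000
  }
  frontend = {
    cpu            = 256
    memory         = 512
    desired_count  = 2
    container_port = 3000
  }
}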
Configure Container Environment and Secrets
Environment variables for the respective deployment target (e.g., for stage
environment) are provided alongside terragrunt configuration in JSON files, following the naming .service.environment.json
, and by specifying both keys and values. These files are committed to source control, since they do not contain any sensitive data. Adding a description key-value pair will inform development and ensure consistency of variable assignment.
{
"DEBUG": {
"value": "true",
"description": "API debug environment"
}
// ...
}
Similarly, JSON files following the naming `.service.secrets.json` provide the keys (but not the values) for all container secrets, which are injected from AWS Systems Manager Parameter Store into the service's ECS tasks (containers) as environment variables. Adding a description key-value pair informs development and ensures consistency of variable assignment.
No secrets are present in code or source control: secrets such as database passwords or secret keys are generated as Terraform resources and stored in Systems Manager Parameter Store. By the design of Terraform state, they are additionally stored in the remote state backend. While the S3 backend supports encryption at rest, remote state is to be considered sensitive data. A sketch of how these key files can drive Terraform follows after the example below.
{
"SECRET_KEY": {
"description": "Secret key required for API JWT authentication"
}
// ...
}
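To sketch how such a file can drive Terraform (a hypothetical illustration, assuming an `.api.secrets.json` file and the `parameter_group` variable defined earlier; the companion repository's actual module code may differ):

locals {
  # Map of secret name -> { description = ... }; keys only, no values
  api_secrets = jsondecode(file("${path.module}/.api.secrets.json"))
}
# Generate a random value for each secret key
resource "random_password" "api" {
  for_each = local.api_secrets
  length   = 32
}
# Store each generated secret in SSM Parameter Store
resource "aws_ssm_parameter" "api" {
  for_each = local.api_secrets
  name     = "/${var.parameter_group}/${each.key}"
  type     = "SecureString"
  value    = random_password.api[each.key].result
}
# The ECS container definition then references the parameters by ARN, e.g.:
# secrets = [for k, p in aws_ssm_parameter.api : { name = k, valueFrom = p.arn }]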
Integration via GitHub Actions – Pytest
Continuous integration workflows for unit, integration, and e2e tests can easily be added via GitHub Actions. The companion repository includes an example for testing the API service via Pytest; a minimal sketch follows after the excerpt below.
name: Pytest API
on:
pull_request:
jobs:
# ----------------------------------------------------------------
# Checkout code, install dependencies, run pytest for API
# ----------------------------------------------------------------
pytest: #...
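A minimal sketch of what such a workflow might look like, assuming the API lives in an `api/` folder with a `requirements.txt` (the companion repository's actual workflow may differ):

name: Pytest API
on:
  pull_request:
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      # Check out code and set up Python
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Install dependencies and run the test suite
      - run: pip install -r api/requirements.txt
      - run: pytest api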
Deployment via GitHub Actions – Terragrunt
To support branch-aware continuous deployment of code to the respective environment, a GitHub Actions workflow is provided.
The workflow is based on Terraform's guide for automating Terraform with GitHub Actions and adds support for nested configuration via Terragrunt. With this workflow, pull requests will trigger terraform `plan`, and merging these pull requests will trigger terraform `apply` to deploy to the correct environment: branch `dev` will deploy to environment `stage` and branch `main` will deploy to environment `prod`, all following the specified hierarchical configuration defined in the `*.terragrunt.hcl` files. Review this file in the companion repository for the detailed implementation; a hypothetical sketch of the branch-to-environment mapping follows after the excerpt below.
name: Terragrunt
on:
push:
branches:
- main
- dev
pull_request:
jobs:
# ----------------------------------------------------------------
# Checkout code, setup target_push and tf_env
# Configure AWS credentials via secrets in repo settings
# ----------------------------------------------------------------
setup: #...
# ----------------------------------------------------------------
# Push to ECR if target_push
# Tag image with github.sha of commit
# ----------------------------------------------------------------
ecr: #...
# ----------------------------------------------------------------
# Terraform for deployment targets
# Based on tf_env (pull request -> plan; push to branch -> apply)
# ----------------------------------------------------------------
terraform: #...
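One way to implement the branch-to-environment mapping in the setup job is sketched below; this is a hypothetical illustration (the `target` step id is invented, and the companion repository may implement the mapping differently):

jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      tf_env: ${{ steps.target.outputs.tf_env }}
    steps:
      # Map the pushed branch to its deployment target
      # (pull requests would use the base branch instead, elided here)
      - id: target
        run: |
          if [ "${GITHUB_REF_NAME}" = "main" ]; then
            echo "tf_env=prod" >> "$GITHUB_OUTPUT"
          else
            echo "tf_env=stage" >> "$GITHUB_OUTPUT"
          fi

The terraform job can then run `terragrunt plan` or `terragrunt apply` from the environment subfolder matching `tf_env`.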
Conclusion
This article describes a flexible CI/CD workflow for AWS ECS-based projects. Using GitHub Flow, changes to the infrastructure and/or codebase are deployed to the intended deployment targets.
Thanks for reading. I'm curious to hear your thoughts on this topic – don't hesitate to reach out to me on Twitter or start a discussion in the companion repository on GitHub!