Fran

Terraform modules and GitHub Actions to deploy secure cloud infrastructure

Countless organizations struggle with standardizing the provisioning of cloud resources, eventually resulting in cloud infrastructure chaos. Resources are created either programmatically or manually from the cloud provider console, without a prior automated security configuration review. As a result, multiple vulnerabilities are introduced into cloud environments, ultimately generating extra effort for Cloud Security teams, who then have to work on remediation.

This post aims to standardize the deployment of resources to AWS (although the approach can be extrapolated to any other cloud provider) in a secure and automated fashion by reusing pre-defined Terraform modules and GitHub Actions. Essentially, this is the popular notion of "shifting left": moving security earlier in the development process to prevent insecure resources from being deployed in the first place.

Following this approach, all Cloud Security teams need to worry about is defining the appropriate security best practices for the different cloud resource types, and writing the Terraform modules with those security features pre-enforced. The GitHub Actions CI/CD will take care of the rest. If a resource is not compliant with the defined rules, it will not be deployed, period.

Refer to HashiCorp's documentation to learn more about creating reusable Terraform modules.

Enough theory, let's go into action!


Creating our reusable Terraform module

We will need a GitHub repository to store our Terraform modules. Let's call it aws-terraform-modules. For this post's purpose, let's pretend that we are creating a module for deploying AWS Systems Manager Parameter Store parameters. Within our repository, we will create a directory called ssm-parameters. Inside that directory, at a minimum, we will need two files: variables.tf and main.tf.

Here is an example of the variables.tf file:

variable "name" {
    type = string
    description = "Display name of the SSM Parameter Store parameter name."

    validation {
        condition = (length(var.name) >= 1 && length(var.name) <= 2048 && can(regex("^/(test|uat|prod)/", var.name)))
        error_message = "SSM parameter names must be between 1 (min) and 2048 (max) characters long and follow the naming convention /(test|uat|prod)/."
    }
}

variable "description" {
    type = string
    description = "Description of the SSM Parameter Store parameter as viewed in the AWS console."
}

variable "value" {
    type = string
    description = "Value of the SSM Parameter Store parameter. If parameter type is SecureString, the value should be retrieved from a GitHub secret."
}

variable "key_id" {
    type = string
    description = "KMS key id or ARN used to encrypt the SSM Parameter Store parameter if type is SecureString. Defaults to the AWS managed key alias/aws/ssm."
    default = "alias/aws/ssm"
}

variable "tier" {
    type = string
    description = "Tier of the SSM Parameter Store parameter. Valid types are Standard, Advanced and Intelligent-Tiering."
    default = "Standard"

    validation {
        condition = (contains(["Standard", "Advanced", "Intelligent-Tiering"], var.tier))
        error_message = "Unsupported SSM parameter tier. Valid tiers: [Standard, Advanced, Intelligent-Tiering]."
    }
}

variable "tags" {
    type = map(string)
    description = "Key-value map of resource tags to be associated with the SSM Parameter Store parameter."

    validation {
      condition = (
                    contains(keys(var.tags), "owner") &&
                    contains(keys(var.tags), "env")
      )
      error_message = "Incomplete or invalid set of tags specified for the SSM Parameter Store parameter. Tags for every resource are required to have the following keys:\n\t- owner\n\t- env."
    }
}

The validation block can be used to specify custom conditions based on our security standards, producing an error message whenever a condition evaluates to false. Additionally, you can provide default values for optional variables, as seen in the snippet above.
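To see the validation in action, here is a hypothetical caller snippet that would fail at plan time, because the parameter name lacks the required environment prefix (the error shown would be the error_message defined above):

```hcl
# Hypothetical non-compliant caller: `terraform plan` rejects this module call
# because "my-parameter" does not match the ^/(test|uat|prod)/ naming regex.
module "bad-ssm-parameter" {
    source      = "github.com/<org-name>/aws-terraform-modules//ssm-parameters"
    name        = "my-parameter" # fails validation: missing /test, /uat or /prod prefix
    description = "Non-compliant example parameter."
    value       = "example-value"
    tags        = {
        owner = "org-owner",
        env   = "test"
    }
}
```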

As you would expect, a main.tf file example is as follows:

resource "aws_ssm_parameter" "ssm_parameter" {
    name        = var.name
    value       = var.value
    description = var.description
    type        = "SecureString"
    key_id      = var.key_id
    tier        = var.tier
    tags        = var.tags
}

All we are doing there is declaring the Terraform resource as specified in HashiCorp's documentation; in this case, the aws_ssm_parameter resource.

That's all we need to configure our reusable Terraform module. In our example, we are basically enforcing a naming convention and the use of specific tags, as well as only allowing the creation of encrypted parameters (using the "SecureString" type).
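Optionally, the module can also expose outputs so that callers can reference attributes of the created parameter, such as its ARN. A minimal sketch of a hypothetical outputs.tf (not one of the two required files above):

```hcl
# outputs.tf (optional): expose attributes of the created SSM parameter
output "arn" {
    description = "ARN of the SSM Parameter Store parameter."
    value       = aws_ssm_parameter.ssm_parameter.arn
}

output "name" {
    description = "Name of the SSM Parameter Store parameter."
    value       = aws_ssm_parameter.ssm_parameter.name
}
```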

Let's now jump into how to use it to deploy new AWS SSM parameters from GitHub Actions!


Using our Terraform module

Calling our reusable Terraform module from another GitHub repository where we maintain the actual AWS resources code is as simple as the following:

module "test-ssm-parameter" {
    source      = "github.com/<org-name>/aws-terraform-modules//ssm-parameters"
    name        = "/test/test-parameter"
    description = "CloudSec test SSM Parameter Store parameter."
    value       = "test-ssm-parameter-value"
    tags        = {
        owner    = "org-owner",
        env      = "test"
    }
}

Note that, in the sample snippet, we are exclusively specifying values for the required variables. All other module variables not explicitly assigned will take their default values (key_id and tier; the parameter type is hardcoded to SecureString by the module). We could, of course, override those optional variables, as long as the new values meet the pre-defined validation criteria.
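As an illustrative sketch, overriding the optional tier while keeping the other defaults could look like this (the value still has to pass the module's validation block):

```hcl
module "advanced-ssm-parameter" {
    source      = "github.com/<org-name>/aws-terraform-modules//ssm-parameters"
    name        = "/prod/advanced-parameter"
    description = "Example parameter overriding the default tier."
    value       = "example-value"
    tier        = "Advanced" # passes the [Standard, Advanced, Intelligent-Tiering] check
    tags        = {
        owner = "org-owner",
        env   = "prod"
    }
}
```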

Easy, right? Let's take a look at how we can automatically test that our SSM parameter configuration is valid!


Configuring our GitHub Workflows

The last piece of the puzzle is configuring our Terraform CI/CD pipeline. Assuming our organization follows a mainline development model, the CI/CD workflow would look similar to the following:

  1. A terraform validate and terraform plan are triggered every time a developer opens a new Pull Request (PR) to the main branch, or when an open PR is updated. This step verifies that our Terraform configuration is correct and compliant with the validation rules defined for our module's variables, and outputs the terraform plan. If the checks fail, we will see a detailed error explaining what is wrong, so we can update our PR to fix it, which triggers a new workflow run. And, of course, we won't be able to merge our PR, and thus deploy our AWS resources, until our configuration is compliant.

  2. If the plan looks as expected and all the validation checks pass, we can merge our PR. As soon as the PR is merged, a terraform apply is kicked off, making the corresponding changes in our AWS cloud environment. This ensures that everything on our main branch is in sync with what is deployed to production.

So, how do we translate those two steps into GitHub Workflows?

You will need a GitHub Personal Access Token (PAT) with rights to pull the code from the aws-terraform-modules repository, and AWS access keys with the appropriate permissions to deploy the resources to AWS. We will store these credentials in GitHub Secrets.
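One detail worth calling out: because the workflows pass the backend settings via -backend-config flags, the repository's Terraform code needs a matching partial backend declaration for terraform init to fill in, along the lines of:

```hcl
# Partial backend configuration: the bucket, key, region, credentials and
# DynamoDB lock table are all injected at `terraform init` time by the
# -backend-config flags in the workflows below.
terraform {
    backend "s3" {}
}
```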

Knowing that GitHub Workflows are defined in the .github/workflows directory within your repository, here are the templates of the two Workflows that we will be creating:

terraform-plan.yml:

name: Terraform Plan
on:
  pull_request:
    branches:
      - $default-branch

env:
  TF_VERSION: 1.1
  TF_VAR_aws-access-key: ${{ secrets.AWS_ACCESS_KEY_ID }}
  TF_VAR_aws-secret-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  TF_VAR_terraform-state-bucket: 's3-terraform-state-files'
  TF_VAR_terraform-state-bucket-namespace: /
  TF_VAR_terraform-state-bucket-key: 'aws-env/terraform.tfstate'
  TF_VAR_terraform-state-bucket-aws-region: 'us-east-1'
  TF_VAR_terraform-dynamodb-state-locking-table-name: 'dynamodb-terraform-state-files'

jobs:
  terraform:
    name: Plan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: GitHub Auth
        run:  
          git config --global url."https://oauth2:${GITHUB_TOKEN}@github.com/<org-name>".insteadOf "https://github.com/<org-name>" 
        env:  
          GITHUB_TOKEN: ${{ secrets.GITHUB_PAT }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1.2.1
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Get Update
        run: terraform get -update

      - name: Terraform Init
        run: |
          terraform init \
          -backend-config="dynamodb_table=${{ env.TF_VAR_terraform-dynamodb-state-locking-table-name }}" \
          -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY_ID }}" \
          -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
          -backend-config="bucket=${{ env.TF_VAR_terraform-state-bucket }}" \
          -backend-config="key=${{ env.TF_VAR_terraform-state-bucket-key }}" \
          -backend-config="region=${{ env.TF_VAR_terraform-state-bucket-aws-region }}"

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        run: terraform plan

terraform-apply.yml:

name: Terraform Apply
on:
  push:
    branches:
      - $default-branch

env:
  TF_VERSION: 1.1
  TF_VAR_aws-access-key: ${{ secrets.AWS_ACCESS_KEY_ID }}
  TF_VAR_aws-secret-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  TF_VAR_terraform-state-bucket: 's3-terraform-state-files'
  TF_VAR_terraform-state-bucket-namespace: /
  TF_VAR_terraform-state-bucket-key: 'aws-env/terraform.tfstate'
  TF_VAR_terraform-state-bucket-aws-region: 'us-east-1'
  TF_VAR_terraform-dynamodb-state-locking-table-name: 'dynamodb-terraform-state-files'

jobs:
  terraform:
    name: Apply
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: GitHub Auth
        run:  
          git config --global url."https://oauth2:${GITHUB_TOKEN}@github.com/<org-name>".insteadOf "https://github.com/<org-name>" 
        env:  
          GITHUB_TOKEN: ${{ secrets.GITHUB_PAT }}

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1.2.1
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Get Update
        run: terraform get -update

      - name: Terraform Init
        run: |
          terraform init \
          -backend-config="dynamodb_table=${{ env.TF_VAR_terraform-dynamodb-state-locking-table-name }}" \
          -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY_ID }}" \
          -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
          -backend-config="bucket=${{ env.TF_VAR_terraform-state-bucket }}" \
          -backend-config="key=${{ env.TF_VAR_terraform-state-bucket-key }}" \
          -backend-config="region=${{ env.TF_VAR_terraform-state-bucket-aws-region }}"

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        run: terraform plan

      - name: Terraform Apply
        run: terraform apply -auto-approve

Once we have the two GitHub Workflows configured in our repository, all the previously explained magic will happen automatically, allowing only secure and compliant cloud infrastructure to be deployed to production.

Say goodbye to fighting with developers to get them to adhere to cloud security configuration guidelines, and to remediating vulnerabilities after they have been introduced into your cloud environment. In addition, you won't need to grant excessive privileges to your AWS developers anymore, as everything will be deployed from the CI/CD pipeline ;)
