Deploy AWS Resources in Different AWS Account and Multi-Region with Terraform Multi-Provider and Alias

Introduction

As a company with a multi-account AWS setup, we recently faced the challenge of applying our terraform scripts across two AWS accounts, with some resources being created in one account and others in another. Fortunately, we discovered that Terraform provides a simple solution for this problem through the use of provider aliases.

By creating aliases, we were able to have multiple AWS providers within the same terraform module. This functionality can be used in a variety of situations, such as creating resources in different regions of the same AWS account or in different regions of different AWS accounts.

In this article, we will explain how to use provider aliases to create resources in single and multiple AWS accounts. By doing so, we hope to help others who may be facing similar challenges in their own multi-account AWS setups.

What is a Terraform Provider Alias?

In some cases, it may be necessary to define multiple configurations for the same provider and choose which one to use on a per-resource or per-module basis. This is often required when working with cloud platforms that have multiple regions, but can also be used in other situations, such as targeting multiple Docker or Consul hosts.

To create multiple configurations for a provider, simply include multiple provider blocks with the same provider name. For each additional configuration, use the "alias" meta-argument to provide a unique name segment. By doing so, you can easily select the appropriate configuration for each resource or module, making it easier to manage complex infrastructure setups. For example:

# The default provider configuration; resources that begin with `aws_` will use
# it as the default, and it can be referenced as `aws`.
provider "aws" {
  region = "ap-southeast-1"
}

# Additional provider configuration for the US East region; resources can
# reference this as `aws.useast1`.
provider "aws" {
  alias  = "useast1"
  region = "us-east-1"
}
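An aliased configuration can also be handed down to a child module through the providers meta-argument. Here is a minimal sketch, assuming a hypothetical ./modules/vpc module that expects a default aws provider:

module "vpc_useast" {
  source = "./modules/vpc" # hypothetical module path

  # Inside the module, the default `aws` provider resolves to `aws.useast1`.
  providers = {
    aws = aws.useast1
  }
}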

Terraform Provider Alias Use-Cases

  1. Multiple AWS accounts
  2. Multiple regions with the same AWS account

Prerequisites

For this article, I have two AWS accounts, and I have configured a named AWS profile for each of them as shown below:

do4m-main
MAIN account profile configuration (screenshot)

do4m-dev
DEV account profile configuration (screenshot)
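If these profiles do not exist on your machine yet, they can be created with the AWS CLI. A quick sketch, where the profile names match my setup and the actual access keys and default regions are your own:

aws configure --profile do4m-main   # MAIN account credentials, default region ap-southeast-1
aws configure --profile do4m-dev    # DEV account credentials, default region us-east-1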

We will use these named profiles, as we are going to start with the 1st use-case above.

Configuring Multiple AWS providers

If you need to create resources in multiple AWS accounts using Terraform, you may run into an issue where you can't write two or more providers with the same name, such as two AWS providers. However, Terraform provides a solution to this problem by allowing you to use the "alias" argument.

With the alias argument, you can set up multiple AWS providers and use them for creating resources in both AWS accounts. This approach enables you to differentiate between the providers and avoid conflicts that could arise from having two providers with the same name. By using this technique, you can effectively manage your infrastructure and ensure that your resources are deployed to the correct AWS accounts.

I've set my provider.tf file like below;

  1. For the MAIN account, I used the do4m-main profile and set the alias to awsmain.
  2. For the DEV account, I used the do4m-dev profile and set the alias to awsdev.

Provider file
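Here is a minimal sketch of what this provider.tf could look like, assuming the two named profiles from the prerequisites and the regions used later in this article (Singapore for MAIN, US East for DEV):

# MAIN account, Singapore region
provider "aws" {
  alias   = "awsmain"
  profile = "do4m-main"
  region  = "ap-southeast-1"
}

# DEV account, US East (N. Virginia) region
provider "aws" {
  alias   = "awsdev"
  profile = "do4m-dev"
  region  = "us-east-1"
}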

Defining two providers for two different AWS accounts is a great start, but how will Terraform know which resource to create in which AWS account? To achieve this, we need to know how to refer to these different providers when defining our resources in our Terraform templates.

To differentiate between providers, we need to use the "alias" argument that we defined earlier. By specifying the alias name in our resource block, we can instruct Terraform to create that resource using the provider with the corresponding alias.

For example, if we have two "aws" provider configurations with the aliases "awsmain" and "awsdev" respectively, we can create an EC2 instance in each of the MAIN and DEV accounts by using the following resource blocks:

resource "aws_instance" "ec2_main" {
  provider        = aws.awsmain
  ami             = var.ami_apsoutheast
  instance_type   = "t2.micro"
  key_name        = "linux-sea-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-main",
    Project = "multiprovider",
    Region  = "ap-southeast-1"
  }
}

resource "aws_instance" "ec2_dev" {
  provider        = aws.awsdev
  ami             = var.ami_useast
  instance_type   = "t2.micro"
  key_name        = "linux-useast-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-dev",
    Project = "multiprovider",
    Region  = "us-east-1"
  }
}

I used a Linux AMI for this example. Since we reference the AMI IDs through variables, we create another file called variables.tf and fill in the AMI IDs for the Singapore and US East regions respectively:

variable "ami_apsoutheast" {
  type    = string
  default = "ami-0af2f764c580cc1f9"
}

variable "ami_useast" {
  type    = string
  default = "ami-00c39f71452c08778"
}
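As a side note, provider aliases work on data sources too, so instead of hard-coding AMI IDs you could look them up per account and region. A hedged sketch using the aws_ami data source against the awsdev alias (the name filter for Amazon Linux 2 is an assumption):

# Looks up the most recent Amazon Linux 2 AMI in the DEV account's region.
data "aws_ami" "al2_dev" {
  provider    = aws.awsdev
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Reference it as data.aws_ami.al2_dev.id in the aws_instance resource.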

Let's get started: Terraform Apply

Now that we have set up all the providers we want, we can run the Terraform commands below:

terraform init

terraform init output

terraform plan

➜  terraform-multi-providers terraform plan   

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.ec2_dev will be created
  + resource "aws_instance" "ec2_dev" {
      + ami                                  = "ami-0af2f764c580cc1f9"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "linux-sea-key"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = [
          + "ansible-sg",
        ]
      + source_dest_check                    = true
      + subnet_id                            = (known after apply)
      + tags                                 = {
          + "Name"    = "account-dev"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tags_all                             = {
          + "Name"    = "account-dev"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id                 = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
          + instance_metadata_tags      = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_card_index    = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record    = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type                        = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

  # aws_instance.ec2_main will be created
  + resource "aws_instance" "ec2_main" {
      + ami                                  = "ami-0af2f764c580cc1f9"
      + arn                                  = (known after apply)
      + associate_public_ip_address          = (known after apply)
      + availability_zone                    = (known after apply)
      + cpu_core_count                       = (known after apply)
      + cpu_threads_per_core                 = (known after apply)
      + disable_api_stop                     = (known after apply)
      + disable_api_termination              = (known after apply)
      + ebs_optimized                        = (known after apply)
      + get_password_data                    = false
      + host_id                              = (known after apply)
      + host_resource_group_arn              = (known after apply)
      + iam_instance_profile                 = (known after apply)
      + id                                   = (known after apply)
      + instance_initiated_shutdown_behavior = (known after apply)
      + instance_state                       = (known after apply)
      + instance_type                        = "t2.micro"
      + ipv6_address_count                   = (known after apply)
      + ipv6_addresses                       = (known after apply)
      + key_name                             = "linux-sea-key"
      + monitoring                           = (known after apply)
      + outpost_arn                          = (known after apply)
      + password_data                        = (known after apply)
      + placement_group                      = (known after apply)
      + placement_partition_number           = (known after apply)
      + primary_network_interface_id         = (known after apply)
      + private_dns                          = (known after apply)
      + private_ip                           = (known after apply)
      + public_dns                           = (known after apply)
      + public_ip                            = (known after apply)
      + secondary_private_ips                = (known after apply)
      + security_groups                      = [
          + "ansible-sg",
        ]
      + source_dest_check                    = true
      + subnet_id                            = (known after apply)
      + tags                                 = {
          + "Name"    = "account-main"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tags_all                             = {
          + "Name"    = "account-main"
          + "Project" = "multiprovider"
          + "Region"  = "ap-southeast-1"
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)

      + capacity_reservation_specification {
          + capacity_reservation_preference = (known after apply)

          + capacity_reservation_target {
              + capacity_reservation_id                 = (known after apply)
              + capacity_reservation_resource_group_arn = (known after apply)
            }
        }

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + maintenance_options {
          + auto_recovery = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
          + instance_metadata_tags      = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_card_index    = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + private_dns_name_options {
          + enable_resource_name_dns_a_record    = (known after apply)
          + enable_resource_name_dns_aaaa_record = (known after apply)
          + hostname_type                        = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.
terraform apply


As the outcome of terraform apply, we have an EC2 instance deployed in each of the two AWS accounts we set up earlier:

AWS-MAIN (Singapore):
MAIN account EC2 instance (screenshot)

AWS-DEV (US East, N. Virginia):
DEV account EC2 instance (screenshot)

Deploy EC2 Instances in Multiple Regions with the Same AWS Account

Now we proceed to our 2nd use-case: we want to use a Terraform provider alias to deploy two EC2 instances into different AWS regions within my MAIN AWS account. For this use-case I have the following scenario:

  1. I want to deploy the 1st EC2 to Singapore region (ap-southeast-1)
  2. I want to deploy the 2nd EC2 to Sydney region (ap-southeast-2)

Create a new folder called multi-region in the current code workspace:

mkdir multi-region
cd multi-region

Next, we create our provider.tf with alias like below:

provider "aws" {
  alias  = "se1"
  region = "ap-southeast-1"
}

provider "aws" {
  alias  = "se2"
  region = "ap-southeast-2"
}

Now we have two Terraform provider configurations:

  1. The first alias is se1, with the Singapore region (ap-southeast-1)
  2. The second alias is se2, with the Sydney region (ap-southeast-2)
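Both aliased blocks use the same underlying AWS provider plugin, so a single required_providers entry covers them. A hedged sketch of the terraform block you might place alongside (the version constraint is an assumption; pin whatever version you actually use):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # assumption: adjust to your tested provider version
    }
  }
}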

We can re-use/clone the main.tf file we created in the 1st use-case, and since we already know the AMI IDs we want to use, we can hard-code them and modify the file like below:

resource "aws_instance" "ec2_main" {
  provider        = aws.se1
  ami             = "ami-0af2f764c580cc1f9"
  instance_type   = "t2.micro"
  key_name        = "linux-sea-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-se1",
    Project = "multiprovider",
    Region  = "ap-southeast-1"
  }
}

resource "aws_instance" "ec2_dev" {
  provider        = aws.se2
  ami             = "ami-0d0175e9dbb94e0d2"
  instance_type   = "t2.micro"
  key_name        = "linux-sea-key"
  security_groups = ["ansible-sg"]
  tags = {
    Name    = "account-se2",
    Project = "multiprovider",
    Region  = "ap-southeast-2"
  }
}
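One simple way to confirm each instance landed in the intended region is to add a couple of outputs. A minimal sketch (the output names are my own choice):

output "se1_instance_az" {
  description = "Availability zone of the Singapore instance"
  value       = aws_instance.ec2_main.availability_zone
}

output "se2_instance_az" {
  description = "Availability zone of the Sydney instance"
  value       = aws_instance.ec2_dev.availability_zone
}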

Lastly, we can run the Terraform commands below to see the output of the above setup:

terraform init

terraform plan

terraform apply

After all the Terraform commands have executed, we get the two EC2 instances in the two regions below:

Singapore Region:
AWS-SE1 EC2 instance (screenshot)

Sydney Region:
AWS-SE2 EC2 instance (screenshot)

Conclusion

As we strive to scale our application and infrastructure, it's important to adopt an effective approach when working with Terraform. Relying on a single state file can lead to bottlenecks and even a single point of failure, especially when managing a large infrastructure team. However, we can overcome these challenges by using multiple AWS providers in different combinations.

This approach enables us to manage numerous Terraform state files and carry out multi-region deployment. The same principles can also apply when using different types of providers in the same Terraform module, such as AWS and GCP. By embracing this strategy, we can streamline our infrastructure management processes and optimize our workflow. This ensures that our application and infrastructure are fully scalable, flexible, and resilient, regardless of the size of our infrastructure team or the types of providers we use.
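As a closing illustration, mixing provider types in a single module looks much like the multi-alias setup, just with different provider names. A minimal sketch (the GCP project ID and both regions are placeholders):

provider "aws" {
  region = "ap-southeast-1"
}

provider "google" {
  project = "my-gcp-project" # placeholder project ID
  region  = "asia-southeast1"
}

Each provider then authenticates with its own credentials, and every resource uses whichever provider it declares, exactly as with the aliased AWS examples above.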
