Allan Chua for AWS Community Builders

How to Import Existing Resources in your CloudFormation Stacks

One of the challenges that developers and operations staff face while working with CloudFormation is the inability to easily import orphaned stateful resources (S3, DynamoDB, Aurora, OpenSearch, Elasticsearch) into CloudFormation stacks.

This problem has been plaguing developers for quite some time. Orphaned CloudFormation resources are often generated through one of the following:

  • Refactoring of a CloudFormation stack that exceeded the maximum number of CloudFormation resources (500 at the time of this article's writing).
  • The need to segregate stateful (DynamoDB, S3, SQS, MemoryDB, RDS, etc) vs disposable resources (Lambda, API Gateway, StepFunctions) between multiple CloudFormation stacks.
  • The need to convert a POC project that contained both stateful and disposable compute resources in a single CloudFormation stack to comply with production compliance rules of an organization.
  • To regroup stateful resources into multiple independent/nested stacks
  • To improve deployment efficiency and reduce risk of accidental deletion of stateful resources.
  • The need to recover from accidental deletion of CloudFormation stacks that contained stateful resources (DynamoDB, SQS, S3 buckets, RDS, etc.) configured with DeletionPolicy: Retain

Thankfully, AWS just published a small yet powerful update to the CloudFormation ChangeSet API that enables passing the ImportExistingResources parameter.

As per the AWS blog:

When you deploy ChangeSets with the ImportExistingResources parameter, CloudFormation automatically imports the resources in your template that already exist in your AWS account. CloudFormation uses the custom names of resources in your template to determine their existence. With this launch, you can reduce the manual effort of import operations and avoid deployment failures because of naming conflicts.
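
Since the parameter is fairly new, it is worth confirming that your installed AWS CLI already exposes it before trying the examples below. A minimal sketch, assuming a Unix-like shell with col available (if the grep prints nothing, upgrade the CLI first):

#!/bin/bash
# Print the installed CLI version and check whether create-change-set
# already documents the --import-existing-resources flag.
aws --version
aws cloudformation create-change-set help | col -b | grep "import-existing-resources"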

A hands-on example:

Consider the CloudFormation template below, which contains both a DynamoDB table and an S3 bucket. If you want to test it out on your own, you can also view this GitHub repository and play around with it.



AWSTemplateFormatVersion: 2010-09-09

Resources:
  LionDDBTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      TableName: "my-lion-ddb"
      AttributeDefinitions:
        - AttributeName: ID
          AttributeType: S
      KeySchema:
        - AttributeName: ID
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5

  LionS3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      # Replace this bucket name as S3 
      # bucket names are expected to be 
      # globally unique
      BucketName: "acsg-test-lion-bucket"



If the CloudFormation stack defined above gets deleted, both the S3 bucket and the DynamoDB table will be retained thanks to the DeletionPolicy: Retain setting.

This behaviour is great if the only goal for operating these resources is to make sure that data does not get dropped maliciously. The downside of this strategy is that we can't easily import these resources back into new CloudFormation stacks.

To prove this, run the following CloudFormation stack creation script:



#!/bin/bash
# Create the stack from the template above
aws cloudformation create-stack \
    --stack-name "petshop-stateful-stack" \
    --template-body "file://$PWD/template.cfn.yaml" \
    --capabilities CAPABILITY_IAM



After the deployment is done, you can run the following stack destruction script:



#!/bin/bash
# Delete the stack; the retained resources will survive this
aws cloudformation delete-stack \
    --stack-name "petshop-stateful-stack"



Since both our S3 bucket and DynamoDB table are configured with DeletionPolicy: Retain, you can expect to still see them in their respective consoles despite the parent CloudFormation stack's deletion.
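
If you prefer the CLI over the consoles, a quick sanity check could look like the sketch below (it reuses the resource names from the template above):

#!/bin/bash
# Verify that the retained resources survived the stack deletion
aws dynamodb describe-table \
    --table-name "my-lion-ddb" \
    --query "Table.TableStatus"

aws s3api head-bucket \
    --bucket "acsg-test-lion-bucket" && echo "Bucket still exists"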

If you re-run the aws cloudformation create-stack CLI call from the previous step, you'll get the following error from CloudFormation:



Resource handler returned message: 
"Resource of type 'AWS::DynamoDB::Table' 
with identifier 'my-lion-ddb' already exists." 
(RequestToken: 016e3290-588c-09f9-a92e-497ae81f9e49, HandlerErrorCode: AlreadyExists)



IMHO, it would be a huge development boost if we could also get the --import-existing-resources flag for the aws cloudformation create-stack CLI invocation.
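
For context, the pre-existing manual route for reclaiming these orphans is an IMPORT-type change set in which every resource identifier has to be spelled out by hand. A sketch of what that looks like for the two resources above (the logical IDs and names come from the template; the change set name is arbitrary):

#!/bin/bash
# The "classic" import flow: an IMPORT change set with an explicit
# mapping from logical IDs to existing physical resource identifiers.
aws cloudformation create-change-set \
    --stack-name "petshop-stateful-stack" \
    --change-set-name "manual-import" \
    --change-set-type "IMPORT" \
    --template-body "file://$PWD/template.cfn.yaml" \
    --resources-to-import '[
      {
        "ResourceType": "AWS::DynamoDB::Table",
        "LogicalResourceId": "LionDDBTable",
        "ResourceIdentifier": { "TableName": "my-lion-ddb" }
      },
      {
        "ResourceType": "AWS::S3::Bucket",
        "LogicalResourceId": "LionS3Bucket",
        "ResourceIdentifier": { "BucketName": "acsg-test-lion-bucket" }
      }
    ]'

Maintaining that mapping is exactly the bookkeeping the new parameter removes, as the next section shows.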

Using ChangeSets and the ImportExistingResources Parameter

To automatically recreate the stack and import the pre-existing resources that were orphaned by the destroy step, we can issue a create-change-set API call via the AWS CLI and pass the --import-existing-resources parameter.



#!/bin/bash

# Define variables
stack_name="petshop-stateful-stack"
template_file="$PWD/template.cfn.yaml"
change_set_name="change-set-$(date +%s)"

# Create a change set that imports matching pre-existing resources
aws cloudformation create-change-set \
    --stack-name "$stack_name" \
    --change-set-type "CREATE" \
    --change-set-name "$change_set_name" \
    --template-body "file://$template_file" \
    --capabilities CAPABILITY_IAM \
    --import-existing-resources # Life saving parameter

# Wait until change set creation is complete before inspecting it
aws cloudformation wait change-set-create-complete \
    --stack-name "$stack_name" \
    --change-set-name "$change_set_name"

# Describe the change set (optional)
aws cloudformation describe-change-set \
    --stack-name "$stack_name" \
    --change-set-name "$change_set_name"

# Execute the change set
echo "Do you want to execute this change set? (yes/no)"
read -r execute_decision

if [[ "$execute_decision" == "yes" ]]; then
    aws cloudformation execute-change-set \
        --stack-name "$stack_name" \
        --change-set-name "$change_set_name"
    echo "Change set executed."
else
    echo "Change set not executed."
fi



Upon execution of the change set, you'll see "Import Complete" operations in your CloudFormation stack's events:

CF Import Events
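
You can also confirm the imports from the CLI by filtering the stack events for IMPORT_COMPLETE statuses (a small sketch using describe-stack-events):

#!/bin/bash
# List only the import-related events of the recreated stack
aws cloudformation describe-stack-events \
    --stack-name "petshop-stateful-stack" \
    --query "StackEvents[?ResourceStatus=='IMPORT_COMPLETE'].[LogicalResourceId,ResourceType,ResourceStatus]" \
    --output table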

Things to keep in mind

  • You'll need the latest AWS CLI version to use this feature. I'm using AWS CLI version 2.13.38 at the time of this article's writing.
  • The stateful resources need to have a DeletionPolicy property configured and a unique custom name defined on the resource.
  • You can't import the same resource into multiple CloudFormation stacks.
  • CloudFormation max resource count limitation (500 at the time of writing) applies to import operations.
  • You can use the cloudformation:ImportResourceTypes policy condition to define IAM policies that control which resource types principals can run import operations on (see the sketch after this list).
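
To illustrate that last point, here is a minimal sketch of such a policy, scoped to the two resource types used in this article (the policy name and file name are made up for the example):

#!/bin/bash
# Hypothetical managed policy that only allows CloudFormation import
# operations for DynamoDB tables and S3 buckets.
cat > import-scope-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudformation:*",
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringEquals": {
          "cloudformation:ImportResourceTypes": [
            "AWS::DynamoDB::Table",
            "AWS::S3::Bucket"
          ]
        }
      }
    }
  ]
}
EOF

aws iam create-policy \
    --policy-name "cfn-import-scope" \
    --policy-document file://import-scope-policy.json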

Benefits

  • We now have the option to delete CloudFormation stacks that contain stateful resources, as long as they are properly configured with the DeletionPolicy: Retain property.
  • It is now possible to easily refactor, regroup and transfer resources across new and existing CloudFormation stacks on-demand.
  • Less ClickOps related to importing existing resources to CloudFormation stacks.
