DEV Community


Generative (A)IaC in the IDE with Application Composer

TL;DR: New tutorial dropped using App Composer + gen AI! Get the tutorial repo or read on to follow along!

AWS Application Composer launched in the AWS Console at re:Invent one year ago, and this re:Invent it expanded to the VS Code IDE as part of the AWS Toolkit - but that’s not the only exciting part. When using App Composer in the IDE, users also get access to a generative AI partner that will help them write infrastructure as code (IaC) for all 1100+ AWS CloudFormation resources that Application Composer now supports.

Better than docs

Application Composer lets users create IaC templates by dragging and dropping cards that represent AWS CloudFormation resources onto a virtual canvas, and wiring them together to create permissions and references. It initially launched with support for 13 core serverless resources such as AWS Lambda, Amazon S3, and Amazon DynamoDB; in September our team launched support for all 1100+ resources that AWS CloudFormation allows, with the list updated regularly. That means users can now build with everything from Amplify to X-Ray, and hundreds of resources in between.

However, while the 13 enhanced resources come with defaults based on best practices, standard AWS CloudFormation resources come only with basic configuration - generally required settings only, with a type set by a schema rather than an example. So a user adding an Amplify App resource would be given the following configuration by default:

    Type: AWS::Amplify::App
    Properties:
      Name: <String>

And they would see this in the console:

Amplify resource in App Composer

That’s well and good for the AWS CloudFormation expert, but someone new to IaC or even just that resource would then have to go into the CloudFormation docs to find additional properties and determine their types, or scour the web for example configurations that may or may not fit their use case. Luckily, the newest App Composer feature saves builders those steps - and uses generative AI to generate resource-specific configurations with safeguards right in the IDE.

When working on a CloudFormation or SAM template in VS Code, users can sign in with their Builder ID and generate multiple suggested configurations in App Composer - here’s an example for our AWS::Amplify::App type:

These suggestions are specific to the resource type, and are safeguarded by a check against the CloudFormation schema to ensure valid values or helpful placeholders. You can then select, use and modify the suggestions to fit your needs.
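For instance, a generated suggestion for the AWS::Amplify::App type might look something like the sketch below. Your suggestions will differ between generations, and values such as the name, repository URL, and environment variable are illustrative placeholders only:

```yaml
Type: AWS::Amplify::App
Properties:
  Name: my-amplify-app
  Repository: https://github.com/example/my-amplify-app
  Description: Example Amplify application
  EnvironmentVariables:
    - Name: ENV
      Value: dev
```

The useful part is less the specific values than the property names and shapes, which are checked against the CloudFormation schema before being suggested.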

So we now know how to generate a simple example with one resource, but let’s look at building a full application with the help of AI-generated suggestions.

Building together with AI

You may be aware of Serverlessland, a treasure trove of developer-centered content and examples of serverless applications. I decided to take one of their more popular (and AI-focused) tutorials, titled “Use GenAI capabilities to build a chatbot”, and recreate it with App Composer and our trusty AI assistant. Here we go!

Getting started with the AWS Toolkit in VS Code

First of all, if you don’t yet have the AWS Toolkit extension, you can find it under the Extensions tab in VS Code. Install or update it so you’re at least on version 2.1.0, and you’ll see a screen like this that mentions Amazon Q and Application Composer:

AWS Toolkit in VS Code

Next, to enable our AI assistant, we need to enable CodeWhisperer using our Builder ID. The easiest way is to open Amazon Q and select Authenticate - you should be taken to this screen, where you can select the Builder ID option and be taken to the browser to create or sign in with your Builder ID.

Sign in with Builder ID

If all goes well, you’ll see your connection in the VS Code toolkit panel:

Builder ID connection in the VS Code toolkit panel

Once you’re connected, you’re ready to start building!

Composing your architecture

In a workspace, create a new folder and a blank file called template.yaml. Open template.yaml, and you should see the Application Composer icon in the top right. Click on it to initialize App Composer with a blank canvas. Now we can start building our application.

Click the App Composer icon

You may have noticed that the tutorial includes an architecture diagram that looks like this:

Original architecture diagram

First we’re going to add the services in the diagram to sketch out our application architecture - with the added bonus of creating a deployable CloudFormation template.

  1. From the Enhanced components list, drag in a Lambda function and a Lambda layer.

  2. Double-click the Function resource to edit its properties. Rename the Lambda function’s Logical ID to LexGenAIBotLambda, change the Source path to src/LexGenAIBotLambda, and set the runtime to Python.

  3. Change the handler value to TextGeneration.lambda_handler, and click Save.

  4. Double-click the Layer resource to edit its properties.

  5. Rename the layer Boto3Layer and change its build method to Python. Change its Source path to src/, and click Save.

  6. Finally, connect the layer to the function to add a reference between them.

You can follow along with the video to see this in action:

You’ll see that your template.yaml file has now been updated to include those resources (and some auto-generated ones, such as a Lambda log group for your function). If you go into your source directory, you’ll see some generated function files. Don’t worry about those - we’ll replace them with the tutorial function and layers later.
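At this point, the function and layer entries in template.yaml should look roughly like the following sketch. The logical IDs match the steps above, but the exact Python runtime version and any extra defaults App Composer writes may differ in your template:

```yaml
LexGenAIBotLambda:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src/LexGenAIBotLambda
    Handler: TextGeneration.lambda_handler
    Runtime: python3.12
    Layers:
      - !Ref Boto3Layer

Boto3Layer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: src/
    CompatibleRuntimes:
      - python3.12
  Metadata:
    BuildMethod: python3.12
```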

So that was the easy part - you added some resources and generated IaC that includes best-practices defaults. Next we'll explore using standard CloudFormation components.

With a little help from our (AI) friend

Let’s start by searching for and adding several of the Standard components needed for our application. These include one AWS::Lambda::Permission, two roles of type AWS::IAM::Role, and one AWS::IAM::Policy. Some standard resources will have all the defaults you need. For example, when you add the AWS::Lambda::Permission resource, all you have to do is replace the placeholder values.

Other resources, such as the IAM Roles and IAM Policy, have bare-bones config. This is where our AI assistant comes in handy! Choose an IAM Role resource and click Generate suggestions.

Generate suggestions

Because these suggestions are generated by an LLM, they will likely differ between each generation, but they are tailored to be specific to each resource and to follow valid CloudFormation schema - so go ahead, generate away!

Generating different configurations will give you an idea of what a resource’s policy should look like, and will often give you keys that you can then fill in with the values you need. Below are the actual settings we’ll be using for each resource, so you can replace the generated values when applicable - be sure to replace each resource’s Logical ID as well!

# CfnLexGenAIDemoRole - type AWS::IAM::Role
AssumeRolePolicyDocument:
  Statement:
    - Action: sts:AssumeRole
      Effect: Allow
      Principal:
        Service: lexv2.amazonaws.com
  Version: '2012-10-17'
ManagedPolicyArns:
  - !Join
    - ''
    - - 'arn:'
      - !Ref AWS::Partition
      - ':iam::aws:policy/AWSLambdaExecute'

# LexGenAIBotLambdaServiceRole - type AWS::IAM::Role
AssumeRolePolicyDocument:
  Statement:
    - Action: sts:AssumeRole
      Effect: Allow
      Principal:
        Service: lambda.amazonaws.com
  Version: '2012-10-17'
ManagedPolicyArns:
  - !Join
    - ''
    - - 'arn:'
      - !Ref AWS::Partition
      - ':iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'

# LexGenAIBotLambdaServiceRoleDefaultPolicy - type AWS::IAM::Policy
PolicyDocument:
  Statement:
    - Action:
        - lex:*
        - logs:*
        - s3:DeleteObject
        - s3:GetObject
        - s3:ListBucket
        - s3:PutObject
      Effect: Allow
      Resource: '*'
    - Action: bedrock:InvokeModel
      Effect: Allow
      Resource: !Join
        - ''
        - - 'arn:aws:bedrock:'
          - !Ref AWS::Region
          - '::foundation-model/anthropic.claude-v2'
  Version: '2012-10-17'
PolicyName: LexGenAIBotLambdaServiceRoleDefaultPolicy
Roles:
  - !Ref LexGenAIBotLambdaServiceRole

# LexGenAIBotLambdaInvoke - type AWS::Lambda::Permission
Action: lambda:InvokeFunction
FunctionName: !GetAtt LexGenAIBotLambda.Arn
Principal: lexv2.amazonaws.com

Finally, let’s add our Lex bot! In the resource picker, search for and add type AWS::Lex::Bot. Here’s another chance to see what configuration the AI comes up with!

Change the Lex bot’s Logical ID to LexGenAIBot and update its configuration to the following:

DataPrivacy:
  ChildDirected: false
IdleSessionTTLInSeconds: 300
Name: LexGenAIBot
RoleArn: !GetAtt CfnLexGenAIDemoRole.Arn
AutoBuildBotLocales: true
BotLocales:
  - Intents:
      - InitialResponseSetting:
          CodeHook:
            EnableCodeHookInvocation: true
            IsActive: true
            PostCodeHookSpecification: {}
          InitialResponse:
            MessageGroupsList:
              - Message:
                  PlainTextMessage:
                    Value: Hi there, I'm a GenAI Bot. How can I help you?
        Name: WelcomeIntent
        SampleUtterances:
          - Utterance: Hi
          - Utterance: Hey there
          - Utterance: Hello
          - Utterance: I need some help
          - Utterance: Help needed
          - Utterance: Can I get some help?
      - FulfillmentCodeHook:
          Enabled: true
          IsActive: true
          PostFulfillmentStatusSpecification: {}
        InitialResponseSetting:
          CodeHook:
            EnableCodeHookInvocation: true
            IsActive: true
            PostCodeHookSpecification: {}
        Name: GenerateTextIntent
        SampleUtterances:
          - Utterance: Generate content for
          - Utterance: 'Create text '
          - Utterance: 'Create a response for '
          - Utterance: Text to be generated for
      - FulfillmentCodeHook:
          Enabled: true
          IsActive: true
          PostFulfillmentStatusSpecification: {}
        InitialResponseSetting:
          CodeHook:
            EnableCodeHookInvocation: true
            IsActive: true
            PostCodeHookSpecification: {}
        Name: FallbackIntent
        ParentIntentSignature: AMAZON.FallbackIntent
    LocaleId: en_US
    NluConfidenceThreshold: 0.4
Description: Bot created demonstration of GenAI capabilities.
TestBotAliasSettings:
  BotAliasLocaleSettings:
    - BotAliasLocaleSetting:
        CodeHookSpecification:
          LambdaCodeHook:
            CodeHookInterfaceVersion: '1.0'
            LambdaArn: !GetAtt LexGenAIBotLambda.Arn
        Enabled: true
      LocaleId: en_US

Once all of your resources are configured, your application should look like this:

Final architecture in App Composer

In case your template looks different or you want to double-check your configuration, you can copy the template directly from my GitHub repository. You’ll also want to copy the Lambda layer directly from the repository and add it to ./src/. Finally, rename the generated handler file to TextGeneration.py (to match the TextGeneration.lambda_handler handler we set earlier) and replace the placeholder code with the following:

import json
import boto3
import os
import logging
from botocore.exceptions import ClientError

LOG = logging.getLogger()

region_name = os.getenv("region", "us-east-1")
s3_bucket = os.getenv("bucket")
model_id = os.getenv("model_id", "anthropic.claude-v2")

# Bedrock client used to interact with APIs around models
bedrock = boto3.client(service_name="bedrock", region_name=region_name)

# Bedrock Runtime client used to invoke and question the models
bedrock_runtime = boto3.client(service_name="bedrock-runtime", region_name=region_name)

def get_session_attributes(intent_request):
    session_state = intent_request["sessionState"]
    if "sessionAttributes" in session_state:
        return session_state["sessionAttributes"]

    return {}

def close(intent_request, session_attributes, fulfillment_state, message):
    intent_request["sessionState"]["intent"]["state"] = fulfillment_state
    return {
        "sessionState": {
            "sessionAttributes": session_attributes,
            "dialogAction": {"type": "Close"},
            "intent": intent_request["sessionState"]["intent"],
        },
        "messages": [message],
        "sessionId": intent_request["sessionId"],
        "requestAttributes": intent_request["requestAttributes"]
        if "requestAttributes" in intent_request
        else None,
    }

def lambda_handler(event, context):
    LOG.info(f"Event is {event}")
    accept = "application/json"
    content_type = "application/json"
    prompt = event["inputTranscript"]

    try:
        request = json.dumps(
            {
                "prompt": "\n\nHuman:" + prompt + "\n\nAssistant:",
                "max_tokens_to_sample": 4096,
                "temperature": 0.5,
                "top_k": 250,
                "top_p": 1,
                "stop_sequences": ["\\n\\nHuman:"],
            }
        )

        response = bedrock_runtime.invoke_model(
            body=request, modelId=model_id, accept=accept, contentType=content_type
        )

        response_body = json.loads(response.get("body").read())
        LOG.info(f"Response body: {response_body}")
        response_message = {
            "contentType": "PlainText",
            "content": response_body["completion"],
        }
        session_attributes = get_session_attributes(event)
        fulfillment_state = "Fulfilled"

        return close(event, session_attributes, fulfillment_state, response_message)

    except ClientError as e:
        LOG.error(f"Exception raised while execution and the error is {e}")
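Before deploying, you can sanity-check the Lex V2 "Close" response contract locally without calling Bedrock. This sketch mirrors the handler's close() helper above against a hypothetical, minimal stand-in for the event Lex sends at fulfillment time:

```python
# Local smoke test of the Lex V2 "Close" response shape used by the handler.
# close() mirrors the helper above; the event is a minimal illustrative stand-in.
def close(intent_request, session_attributes, fulfillment_state, message):
    intent_request["sessionState"]["intent"]["state"] = fulfillment_state
    return {
        "sessionState": {
            "sessionAttributes": session_attributes,
            "dialogAction": {"type": "Close"},
            "intent": intent_request["sessionState"]["intent"],
        },
        "messages": [message],
        "sessionId": intent_request["sessionId"],
        "requestAttributes": intent_request.get("requestAttributes"),
    }

event = {
    "sessionState": {"intent": {"name": "GenerateTextIntent"}, "sessionAttributes": {}},
    "sessionId": "test-session",
    "inputTranscript": "Generate content for my blog",
}

result = close(event, {}, "Fulfilled", {"contentType": "PlainText", "content": "Hello!"})
print(result["sessionState"]["intent"]["state"])  # Fulfilled
```

If the printed state is Fulfilled and the message list carries your content, the response matches what Lex expects back from the code hook.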

To deploy your infrastructure, go back to the App Composer extension and click the Sync icon. You'll need local IAM credentials and the AWS SAM CLI installed, so set those up if you haven't already. Follow the guided AWS SAM instructions to initiate and complete your deployment.

Click Sync to deploy

If all goes well, you’ll see the message SAM Sync succeeded and can navigate to CloudFormation in the AWS Console to see your newly-created resources.

If you want to continue with building your chatbot, be sure to follow the rest of the original tutorial to keep building.

Go build - with help!

I hope that building with AI-generated CloudFormation will help increase your understanding of the different resource settings that are available and commonly used, and accelerate your time from idea to deployed application. As with everything using generative AI today, be sure to read the AWS Responsible AI Policy before applying these examples yourself. Happy building!

A version of this post was originally published over at the AWS Compute Blog - give them a follow to keep up with all things AWS!
