Girish Bhatia
How to use Apply Guardrail to protect PII with the AWS Bedrock Converse API, Lambda, and Python! - Anthropic Haiku

In this article, I am going to demonstrate a revision of a previously published workshop: building a serverless GenAI solution that creates call center transcript summaries via a REST API, Lambda, and the AWS Bedrock Converse API, while protecting sensitive information such as PII with a guardrail policy.

On July 10, 2024, during the AWS New York Summit, AWS announced the introduction of the Apply Guardrail feature for its Generative AI services. Amazon Bedrock, already a standout service in the AWS lineup, gains further flexibility with this new feature, allowing developers to decouple the guardrail from Large Language Model (LLM) invocation using Bedrock.

In June 2024, AWS added support for guardrails via the Converse API. However, a challenge I encountered was that guardrail policies were applied to both the input and the response. With additional code for guarded content I was able to accomplish the desired result, but it took extra lines of code to meet the use case requirements.

Here's a link to my previous article on how to apply guardrails using the Converse API to protect Personally Identifiable Information (PII).

The newly announced Apply Guardrail function gives developers more control over how best to implement guardrails when invoking the Bedrock API. Based on the guardrail policy, if an input is blocked, developers can return a response before Bedrock is even invoked. This approach not only enhances security but also improves efficiency by avoiding unnecessary calls to the foundation model.
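That "check the input first" flow can be sketched as follows. This is a minimal illustration, assuming a guardrail you have already created (the guardrail ID is a placeholder); boto3 is imported lazily inside the AWS-calling function so the pure helpers remain usable without AWS access:

```python
def build_guardrail_content(text):
    # Shape expected by the ApplyGuardrail API's "content" parameter.
    return [{"text": {"text": text}}]

def is_blocked(response):
    # ApplyGuardrail reports "GUARDRAIL_INTERVENED" when a policy matched.
    return response.get("action") == "GUARDRAIL_INTERVENED"

def check_input(text, guardrail_id, guardrail_version="DRAFT", region="us-east-1"):
    """Run the guardrail against the raw input before any model call."""
    import boto3  # imported lazily so the helpers above work without AWS access
    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source="INPUT",  # "OUTPUT" would validate a model response instead
        content=build_guardrail_content(text),
    )
    if is_blocked(response):
        # Return the guardrail's configured message; the LLM is never invoked.
        return response["outputs"][0]["text"]
    return None  # input is clean; safe to call Converse
```

If `check_input` returns a message, you can hand it straight back to the caller and skip the model invocation entirely.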

This enhancement marks a significant step forward in the customization and control developers have over their AI applications, ensuring safer and more efficient interactions with generative AI models.

In this article, I have updated my previously published article and code to demonstrate how the Apply Guardrail feature can be used to protect PII in a transcript summary generated using Amazon Bedrock, Lambda, and an API.

Examples of PII (Personally Identifiable Information): SSN, account number, phone, email, address, etc.

Let's revisit the guardrail policies supported by AWS.

Guardrail Policies
The Amazon Bedrock Guardrail feature allows you to configure various filters, providing responsible boundaries for the responses generated by your AI solution. These guardrails help ensure that the outputs are appropriate and align with your requirements and standards.

Content Filters
Content filters are available across six categories:

  • Hate
  • Insults
  • Sexual
  • Violence
  • Misconduct
  • Prompt Attack

Filters can be set to None, Low, Medium, or High.

Image description

Denied Topics
You can specify topics that the API should not respond to.

Word Filter
You can specify words that you want the filter to act on before a response is provided.

Sensitive Information Filter
A filter to either block or mask Personally Identifiable Information.

Amazon Bedrock also provides a way to configure the message returned to the user when the input or the response violates the configured guardrail policies. For example, if the sensitive information filter is configured to block requests containing an account number, you can provide a customized response letting the user know that the request cannot be processed because it contains a forbidden data element.
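The same policy can also be created programmatically with the Bedrock control-plane client. This is an illustrative sketch: the entity types, actions, and messages below are choices I'd make for this use case, not the exact console configuration:

```python
def pii_policy(entities, action="ANONYMIZE"):
    # action can be "ANONYMIZE" (mask) or "BLOCK", per PII entity type.
    return {"piiEntitiesConfig": [{"type": e, "action": action} for e in entities]}

def create_pii_guardrail(name, region="us-east-1"):
    import boto3  # lazy import: the policy helper above stays usable without AWS
    bedrock = boto3.client("bedrock", region_name=region)  # control plane, not bedrock-runtime
    return bedrock.create_guardrail(
        name=name,
        sensitiveInformationPolicyConfig=pii_policy(
            ["EMAIL", "PHONE", "US_SOCIAL_SECURITY_NUMBER", "ADDRESS"]
        ),
        # Custom messages returned when the input or response violates the policy.
        blockedInputMessaging="Your request contains a forbidden data element and cannot be processed.",
        blockedOutputsMessaging="The response was withheld because it contained sensitive information.",
    )
```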

Let's review our use cases:

• There is a transcript of a case resolution conversation between a customer and a support/call center team member.
• A call summary needs to be created based on this resolution/conversation transcript.
• An automated solution is required to create the call summary.
• An automated solution will provide a repeatable way to create these call summary notes.
• Productivity increases, as team members who usually document these notes can focus on other tasks.
• The guardrail should be configured so that PII is not displayed in the response.
• The guardrail will also be applied to the input. If the input contains a blocked data element, such as an account number, the API will not invoke the LLM and will instead return the guardrail's configured response to the consumer.

I am generating my Lambda function using AWS SAM; however, a similar function can be created using the AWS Console. I like to use AWS SAM wherever possible, as it gives me the flexibility to test the function without first deploying it to the AWS cloud.

Here is the architecture diagram for our use case.

Image architecture

Create a SAM Template

I will create a SAM template for the Lambda function that contains the code to invoke the Bedrock Converse API along with the required parameters and a prompt. The Lambda function can be created without a SAM template; however, I prefer an Infrastructure as Code approach, since it allows cloud resources to be recreated easily. Here is the SAM template for the Lambda function.

Image SAM
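For reference, a minimal SAM template along these lines might look like the following. The resource name, runtime version, and the broad `Resource: '*'` IAM statement are illustrative placeholders; scope the permissions down for a real deployment:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  TranscriptSummaryFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      Timeout: 60
      MemorySize: 256
      Policies:
        - Statement:
            - Effect: Allow
              Action:
                - bedrock:InvokeModel
                - bedrock:ApplyGuardrail
              Resource: '*'   # illustrative; restrict to your model/guardrail ARNs
      Events:
        SummaryApi:
          Type: Api
          Properties:
            Path: /summary
            Method: post
```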

Create a Lambda Function

The Lambda function serves as the core of this automated solution. It contains the code necessary to fulfill the business requirement of creating a summary of the call center transcript using the Amazon Bedrock Converse API. This Lambda function accepts a prompt, which is then forwarded to the Bedrock Converse API to generate a response using the Anthropic Haiku foundation model. Now, let's look at the code behind it.

Example of apply guardrail in the function:

Image GD1

Image GD2

Image Lambda
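The overall shape of the handler can be sketched as follows. This is an outline of the approach rather than the exact code shown in the screenshots: the guardrail ID is a placeholder, and error handling is omitted for brevity:

```python
import json

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_messages(prompt):
    # Converse API message shape: a list of role/content turns.
    return [{"role": "user", "content": [{"text": prompt}]}]

def lambda_handler(event, context):
    import boto3  # lazy import so build_messages stays testable locally
    client = boto3.client("bedrock-runtime")
    prompt = json.loads(event["body"])["transcript"]

    # Step 1: run the guardrail against the input only.
    guard = client.apply_guardrail(
        guardrailIdentifier="<your-guardrail-id>",  # placeholder
        guardrailVersion="1",
        source="INPUT",
        content=[{"text": {"text": prompt}}],
    )
    if guard["action"] == "GUARDRAIL_INTERVENED":
        # Blocked input: return the configured message without invoking the LLM.
        return {"statusCode": 200, "body": json.dumps(guard["outputs"][0]["text"])}

    # Step 2: input is clean, so invoke the model.
    result = client.converse(
        modelId=MODEL_ID,
        messages=build_messages(f"Summarize this call transcript:\n{prompt}"),
        inferenceConfig={"maxTokens": 500, "temperature": 0.2},
    )
    summary = result["output"]["message"]["content"][0]["text"]
    return {"statusCode": 200, "body": json.dumps(summary)}
```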

Build function locally using AWS SAM

Next, build and validate the function using AWS SAM before deploying it to the AWS cloud. A few SAM commands used are:
• sam build
• sam local invoke
• sam deploy

Bedrock InvokeModel vs. Bedrock Converse API

Bedrock InvokeModel

Image Bedrock

Bedrock Converse API

Image Converse
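The practical difference is easy to see in code: InvokeModel needs a model-specific request body and response parser, while Converse uses one uniform shape across models. A sketch, assuming the Anthropic Haiku model ID and a `bedrock-runtime` client passed in by the caller:

```python
import json

def anthropic_invoke_body(prompt, max_tokens=500):
    # InvokeModel requires a model-specific JSON body (Anthropic Messages format here).
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    })

def summarize_invoke_model(client, model_id, prompt):
    # Model-specific request AND model-specific response parsing.
    resp = client.invoke_model(modelId=model_id, body=anthropic_invoke_body(prompt))
    return json.loads(resp["body"].read())["content"][0]["text"]

def summarize_converse(client, model_id, prompt):
    # Converse uses one uniform request/response shape for all supported models.
    resp = client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

Switching foundation models with Converse is a one-line change; with InvokeModel you would rewrite both the body builder and the parser.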

Validate the GenAI Model response using a prompt

Prompt engineering is an essential component of any Generative AI solution. It is both art and science, as crafting an effective prompt is crucial for obtaining the desired response from the foundation model. Often, it requires multiple attempts and adjustments to the prompt to achieve the desired outcome from the Generative AI model.

Given that I'm deploying the solution to AWS API Gateway, I'll have an API endpoint post-deployment. I plan to utilize Postman for passing the prompt in the request and reviewing the response. Additionally, I can opt to post the response to an AWS S3 bucket for later review.
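Any HTTP client works in place of Postman. Here is a small sketch of the request body; the `transcript` field name and the endpoint URL are my own illustrative choices, so adjust them to match your deployed API:

```python
import json

def build_request(transcript_text):
    # Body shape the Lambda expects; the "transcript" field name is illustrative.
    return {"transcript": transcript_text}

payload = build_request("John: Hello...\nGirish: Yes, my account number is 21X-45X-8790.")

# To send it (requires the `requests` package and your deployed endpoint URL):
# import requests
# resp = requests.post(
#     "https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/summary",
#     data=json.dumps(payload),
#     headers={"Content-Type": "application/json"},
# )
# print(resp.json())
```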

Image Postman

I am using Postman to pass the transcript file in the prompt.

This transcript file contains a conversation between a call center employee (John) and a customer (Girish) about a password reset request for a locked account.

• John: Hello, thank you for calling technical support. My name is John and I will be your technical support representative. Can I have your account number, please?
• Girish: Yes, my account number is 21X-45X-8790.
• John: Thank you. I see that you have locked your account due to multiple failed attempts to enter your password. To reset your password, I will need to ask you a few security questions. Can you please provide me with the answers to your security questions?
• Girish: Sure, my security questions are: What is your favorite color? and What is your favorite food?
• John: Please can you provide your zip code?
• Girish: Yes, my zip code is 43215.
• John: one final question, Please confirm your email address.
• Girish: my email is gbtest@gmailtest.com.
• John: Great, thank you. I will now reset your password and send you an email with instructions on how to log in to your account. Please check your email in a few minutes.
• Girish: Thank you so much for your help.
• John: You're welcome. Is there anything else I can assist you with today?
• Girish: No, that's all for now. Thank you again for your help.
• John: You're welcome. Have a great day!

Review the guarded/masked response returned by the Generative AI Foundation Model

Image Response1

As you can note in the response above, the GenAI response has masked the PII information.

Let's look at the response once the guardrail policy is updated to block the PII data.

Response with blocked data
Here is the response when the policy is updated to block requests whose PII contains an account number.

Image Response2

Input with blocked data

Image Response3
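As a side note, when a guardrail is attached directly to `converse()` via its `guardrailConfig` parameter (the approach from my previous article), an intervention surfaces through the response's `stopReason` field instead. A tiny helper to detect it:

```python
def converse_was_blocked(converse_response):
    # Converse sets stopReason to "guardrail_intervened" when an attached
    # guardrail blocks the input or the generated output.
    return converse_response.get("stopReason") == "guardrail_intervened"
```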

With these steps, a serverless GenAI solution that creates call center transcript summaries via a REST API, Lambda, and the AWS Bedrock Converse API has been successfully completed. An Amazon Bedrock guardrail was configured to protect PII, and the Apply Guardrail feature was used to demonstrate how both input and response data can be protected. Python/Boto3 were used to invoke the Bedrock Converse API with Anthropic Haiku.

As demonstrated, with the Converse API, a guardrail was used to implement a policy that controls the GenAI response and masks or blocks PII data.

A guardrail was created to remove PII from the response, and the guardrail configuration was then updated to validate that an account number, when configured for blocking, is indeed blocked.

Thanks for reading!

Click here to get to YouTube video for this solution.

https://www.youtube.com/watch?v=UNEtKudYvA4

Girish Bhatia
AWS Certified Solution Architect & Developer Associate
Cloud Technology Enthusiast
