Rashwan Lazkani for AWS Community Builders

Building an Amazon Bedrock App for Text and Image Retrieval

Amazon Bedrock is a fully managed service that offers a range of high-performing foundation models (FMs) from leading AI companies. It lets you experiment with these FMs, customize them with your own data through techniques such as fine-tuning and retrieval-augmented generation (RAG), and build managed agents that carry out complex business tasks and private projects.

Source: Amazon.

As a fully managed service, it eliminates the need for server management or similar tasks. This means you won't have to worry about server provisioning, scaling, or maintenance.

In the context of serverless computing, services like AWS Lambda enable you to run code without managing servers. When you trigger a Lambda function, AWS automatically handles the infrastructure, scaling, and execution of your code. It's a pay-as-you-go model where you're only charged for the compute time your code actually uses, making it a cost-efficient and hassle-free option for running code in the cloud.
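
To make that concrete, here is a minimal, illustrative handler sketch. It is not part of the app we're building; it only shows the shape of a Lambda function behind API Gateway, using the same simplified event: any typing as the handlers later in this post:

// A minimal Lambda handler: AWS provisions the runtime, runs this function
// for each invocation, and bills only for the time it actually executes.
export async function handler(event: any) {
  const name = event.body ? JSON.parse(event.body).name : 'world';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` })
  };
}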

Tools that we'll be using:

  • Amazon Bedrock
  • AWS Lambda
  • Serverless Framework
  • TypeScript

Today, we'll build two Lambda functions for text and image retrieval from Amazon Bedrock. We'll deploy them using Infrastructure as Code, expose them via API Gateway, and keep everything serverless.

You can use these functions to create applications similar to the ones I've showcased below:

Bedrock with Text

Bedrock with Images

Let's start!

Accessing Bedrock begins with the following steps:

  1. Log in to your AWS Console
  2. Go to Amazon Bedrock
  3. In the left navigation, click on Model access
  4. Request access to the models you want to use (a quick way to look up model IDs from code is shown below)
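
Once that's done, you can also look up the exact model IDs from code. Here's a small sketch using the @aws-sdk/client-bedrock package (the control-plane client, separate from the runtime client used in the Lambdas below); it simply lists the foundation models offered in your region so you can copy modelId strings such as ai21.j2-mid-v1:

import { BedrockClient, ListFoundationModelsCommand } from '@aws-sdk/client-bedrock';

const bedrock = new BedrockClient({ region: 'us-east-1' });

// Lists the foundation models Bedrock offers in this region, which is a
// handy way to find the exact modelId strings used later in this post.
async function listModels() {
  const response = await bedrock.send(new ListFoundationModelsCommand({}));
  for (const model of response.modelSummaries ?? []) {
    console.log(model.modelId);
  }
}

listModels().catch(console.error);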

Let's proceed to develop our Lambda functions:

You can choose whichever tool you prefer for deploying your Lambda functions; in any case, I'll provide the code for the Lambdas themselves.
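
For reference, this is the project layout the rest of the post assumes; the handler paths match the serverless.yml further down (the file names text.ts and image.ts are my choice and only need to match the handler entries):

aws-bedrock-ts/
├── serverless.yml
└── src/
    └── bedrock/
        ├── text.ts    (text Lambda)
        └── image.ts   (image Lambda)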

Text Lambda:

Please take note of a few important points below:

  1. You need to import the @aws-sdk/client-bedrock-runtime package
  2. You need to set the modelId of the model you requested access to
  3. The prompt is the search text sent in the body of your API request

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

// CORS headers belong on the HTTP response we return from the Lambda,
// not on the Bedrock request itself
const corsHeaders = {
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': 'true',
  'Access-Control-Allow-Methods': 'POST'
};

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).prompt;

  const input = {
    modelId: 'ai21.j2-mid-v1',
    contentType: 'application/json',
    accept: '*/*',
    body: JSON.stringify({
      prompt: prompt,
      maxTokens: 200,
      temperature: 0.7,
      topP: 1,
      stopSequences: [],
      countPenalty: { scale: 0 },
      presencePenalty: { scale: 0 },
      frequencyPenalty: { scale: 0 }
    })
  };

  try {
    const data = await client.send(new InvokeModelCommand(input));
    // The response body is a byte array; decode it and pull out the generated text
    const jsonString = Buffer.from(data.body).toString('utf8');
    const parsedData = JSON.parse(jsonString);
    const text = parsedData.completions[0].data.text;
    return { statusCode: 200, headers: corsHeaders, body: text };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, headers: corsHeaders, body: 'Error invoking the model' };
  }
}
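
For clarity, these are the only fields of the Jurassic-2 response that the handler above reads, sketched here as a TypeScript interface; the actual response contains additional metadata:

// Only the part of the model response used by the text handler
interface J2TextResponse {
  completions: Array<{
    data: { text: string };
  }>;
}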

Image Lambda:

The image Lambda follows the same pattern, but invokes the Stable Diffusion XL model and returns the generated image as a base64-encoded string:

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

const client = new BedrockRuntimeClient({ region: 'us-east-1' });

// CORS headers belong on the HTTP response we return from the Lambda,
// not on the Bedrock request itself
const corsHeaders = {
  'Access-Control-Allow-Headers': 'Content-Type',
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Credentials': 'true',
  'Access-Control-Allow-Methods': 'POST'
};

export async function handler(event: any) {
  const prompt = JSON.parse(event.body).text_prompts;

  const input = {
    modelId: 'stability.stable-diffusion-xl-v0',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      text_prompts: prompt,
      cfg_scale: 10,
      seed: 0,
      steps: 50
    })
  };

  try {
    const response = await client.send(new InvokeModelCommand(input));

    // The response body is a byte array; decode it into a JSON string
    const jsonString = new TextDecoder('utf-8').decode(response.body);
    const parsedData = JSON.parse(jsonString);

    // Stable Diffusion returns the generated image as a base64-encoded string
    return {
      statusCode: 200,
      headers: corsHeaders,
      body: parsedData.artifacts[0].base64
    };
  } catch (error) {
    console.error(error);
    return { statusCode: 500, headers: corsHeaders, body: 'Error generating the image' };
  }
}

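The same idea applies to the image Lambda: the only part of the Stable Diffusion response it uses is the base64-encoded image in the first artifact:

// Only the part of the model response used by the image handler
interface StableDiffusionResponse {
  artifacts: Array<{
    base64: string; // the generated image, base64-encoded
  }>;
}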

Now deploy your Lambdas. If you're using the Serverless Framework, you can use the following configuration; note that because the handlers are written in TypeScript, you'll also need a bundling step (for example the serverless-esbuild plugin) before running serverless deploy:

service: aws-bedrock-ts
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs18.x
  iam:
    role:
      statements:
        - Effect: 'Allow'
          Action:
            - 'bedrock:InvokeModel'
          Resource: '*'

functions:
  bedrockText:
    handler: src/bedrock/text.handler
    name: 'aws-bedrock-text'
    events:
      - httpApi:
          path: /bedrock/text
          method: post
  bedrockImage:
    handler: src/bedrock/image.handler
    name: 'aws-bedrock-image'
    events:
      - httpApi:
          path: /bedrock/image
          method: post

Let's test our functions in Postman:

Text

Create a new POST request with the following data:

  1. URL: add the URL of your newly created text endpoint (from the serverless deploy output)
  2. Body: add the following body:
{
    "prompt": "Your search text"
}


Images

Create a new POST request with the following data:

  1. URL: add the URL of your newly created image endpoint (from the serverless deploy output)
  2. Body: add the following body (note that the Lambda above only reads text_prompts; the other parameters are hard-coded inside the function):
{
    "text_prompts": [
        {
            "text": "Your search text"
        }
    ],
    "cfg_scale": 10,
    "seed": 0,
    "steps": 50
}

Now that your functions are exposed through API Gateway, you can start integrating them into your own applications, much like the examples I showed at the beginning of this article.
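
If you want to call these endpoints from a frontend, a minimal sketch could look like the following. The URLs are placeholders for your deployed HTTP API endpoints, generateText and generateImage are just illustrative helper names, and the data URL assumes the returned image is a PNG:

// Placeholder URLs: replace with the endpoints printed by your deployment
const TEXT_URL = 'https://<api-id>.execute-api.us-east-1.amazonaws.com/bedrock/text';
const IMAGE_URL = 'https://<api-id>.execute-api.us-east-1.amazonaws.com/bedrock/image';

// Returns the generated text (the text Lambda responds with plain text)
async function generateText(prompt: string): Promise<string> {
  const response = await fetch(TEXT_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt })
  });
  return response.text();
}

// Returns a data URL that can be used directly as an <img> src
// (the image Lambda responds with a base64-encoded image)
async function generateImage(text: string): Promise<string> {
  const response = await fetch(IMAGE_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text_prompts: [{ text }] })
  });
  const base64 = await response.text();
  return `data:image/png;base64,${base64}`;
}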

A couple of things to note

As this is a local app for testing, I've set the Access-Control-Allow-Origin header to *. You may also need to adjust the CORS settings on the API Gateway side; with the Serverless Framework this can be done through the httpApi cors option in serverless.yml. Please be aware that there is a small cost associated with your API calls. For detailed pricing information, refer to the Amazon Bedrock pricing page.

Conclusion

Amazon Bedrock provides a robust selection of high-performing foundation models from top AI companies. Integrating it into your application is straightforward and enhances its capabilities. If you haven't tried it yet, I highly recommend doing so!

Interested in the complete project? Feel free to let me know, and I'll create a part two that covers the UI aspect.
