
Mohammed Ismaeel for AWS Community Builders


AWS Lambda Function with Bedrock, Javascript SDK, Serverless framework Part one

Introduction

Like many others passionate about the advances in Generative AI, I've always been amazed by the incredible possibilities it holds. However, the complexity of training and building AI models often deters people, myself included, because of the time and expertise required. That's why, when AWS announced Bedrock's general availability, it felt like a game-changer.

_So what is Amazon Bedrock?_
Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model best suited for your use case. With Bedrock's serverless experience, you can get started quickly, privately customize FMs with your own data, and easily integrate and deploy them into your applications using AWS tools, without having to manage any infrastructure.

In this article, we will explore the synergy between AWS Lambda and Amazon Bedrock using the AWS JavaScript SDK, demonstrating how this integration lets developers seamlessly incorporate machine learning models into their applications. Amazon Bedrock opens the door to a world of innovative possibilities, enabling developers to leverage the power of Generative AI effortlessly. I won't cover pricing in this article; let's leave that for now.

Understanding the Code

The provided code snippet showcases an AWS Lambda function written in JavaScript. It utilizes the Bedrock Runtime Client, a part of the AWS SDK, to interact with machine learning models. Let's break down the key components of the code:

  1. Setting Up the Client: The code initializes the Bedrock Runtime Client by specifying the AWS region as "us-west-2".
const { BedrockRuntimeClient, InvokeModelCommand } = require("@aws-sdk/client-bedrock-runtime");

const client = new BedrockRuntimeClient({ region: "us-west-2" });
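As a side note, hard-coding `"us-west-2"` works for this walkthrough, but inside Lambda the runtime already exposes the region via the `AWS_REGION` environment variable. A small sketch of a fallback resolver (`resolveRegion` is a hypothetical helper of mine, not part of the SDK):

```javascript
// Sketch: pick the region from the Lambda environment, falling back to
// us-west-2 for local runs. resolveRegion is a hypothetical helper,
// not an SDK function.
function resolveRegion(env = process.env) {
  return env.AWS_REGION || "us-west-2";
}

console.log(resolveRegion({ AWS_REGION: "us-east-1" })); // "us-east-1"
console.log(resolveRegion({}));                          // "us-west-2"
```

The client could then be constructed as `new BedrockRuntimeClient({ region: resolveRegion() })`.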
  2. Preparing Input Data: The input object is constructed, containing the parameters required to invoke a model. Notably, the body property is a JSON string representing the input data for the model.
const input = {
    "modelId": "ai21.j2-mid-v1",
    "contentType": "application/json",
    "accept": "*/*",
    "body": JSON.stringify({
      "prompt": prompt,
      "maxTokens": 200,
      "temperature": 0.7,
      "topP": 1,
      "stopSequences": [],
      "countPenalty": { "scale": 0 },
      "presencePenalty": { "scale": 0 },
      "frequencyPenalty": { "scale": 0 }
    })
  };
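A point that is easy to trip over: body must be a JSON *string*, not a plain object. A small sketch of a helper that packages a prompt for the AI21 Jurassic-2 Mid model (the `buildInput` name is mine; the parameter names and defaults mirror the request body shown above):

```javascript
// Sketch: build the InvokeModelCommand input for ai21.j2-mid-v1.
// buildInput is a hypothetical helper; parameter names follow the
// request body used in this article.
function buildInput(prompt, maxTokens = 200) {
  return {
    modelId: "ai21.j2-mid-v1",
    contentType: "application/json",
    accept: "*/*",
    // The service expects the model parameters as a JSON string.
    body: JSON.stringify({
      prompt,
      maxTokens,
      temperature: 0.7,
      topP: 1,
      stopSequences: [],
      countPenalty: { scale: 0 },
      presencePenalty: { scale: 0 },
      frequencyPenalty: { scale: 0 },
    }),
  };
}

// Round-trip to confirm the body is a string that parses back cleanly:
const input = buildInput("Write a haiku about serverless.");
console.log(typeof input.body);              // "string"
console.log(JSON.parse(input.body).maxTokens); // 200
```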
  3. Invoking the Model: The InvokeModelCommand is used to send a request to the model, passing the prepared input.
try {
    const data = await client.send(new InvokeModelCommand(input));
    // ... handle the response (next step)
  } catch (error) {
    console.error(error);
  }
  4. Handling the Response: The response body arrives as a byte array, so it is first decoded into a UTF-8 string and parsed as JSON. The generated text is then extracted from completions[0].data.text, logged to the console, and returned.
    const jsonString = Buffer.from(data.body).toString('utf8');
    const parsedData = JSON.parse(jsonString);
    const text = parsedData.completions[0].data.text;
    console.log('text', text);
    return text;
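The decoding step can be exercised without calling the service at all, since it is plain JavaScript. A sketch with a simulated payload (the payload below is illustrative, not a real model response; `extractText` is a helper name of mine):

```javascript
// Sketch: decode an InvokeModelCommand response body. data.body is a
// byte array (Uint8Array); Buffer.from + JSON.parse recovers the AI21
// response shape, where the text lives at completions[0].data.text.
function extractText(responseBody) {
  const jsonString = Buffer.from(responseBody).toString("utf8");
  const parsed = JSON.parse(jsonString);
  return parsed.completions[0].data.text;
}

// Simulated payload shaped like an AI21 response (illustrative only):
const fakeBody = Buffer.from(
  JSON.stringify({ completions: [{ data: { text: "Hello from Bedrock" } }] })
);
console.log(extractText(fakeBody)); // "Hello from Bedrock"
```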
  5. I have created a simple Serverless application using the Serverless Framework with the JavaScript template for simplicity. Here is the complete code for the handler and the serverless.yml file.
const { BedrockRuntimeClient, InvokeModelCommand } = require("@aws-sdk/client-bedrock-runtime");

const client = new BedrockRuntimeClient({ region: "us-west-2" });

module.exports.bedrock = async (event) => {
  // event.body is a JSON string when the function sits behind API Gateway,
  // but may already be an object when invoked directly (e.g. a console test event).
  const body = typeof event.body === "string" ? JSON.parse(event.body) : event.body;
  const prompt = body.prompt;
  console.log('prompt', prompt);
  const input = {
    "modelId": "ai21.j2-mid-v1",
    "contentType": "application/json",
    "accept": "*/*",
    "body": JSON.stringify({
      "prompt": prompt,
      "maxTokens": 200,
      "temperature": 0.7,
      "topP": 1,
      "stopSequences": [],
      "countPenalty": { "scale": 0 },
      "presencePenalty": { "scale": 0 },
      "frequencyPenalty": { "scale": 0 }
    })
  };

  try {
    const data = await client.send(new InvokeModelCommand(input));
    const jsonString = Buffer.from(data.body).toString('utf8');
    const parsedData = JSON.parse(jsonString);
    const text = parsedData.completions[0].data.text;
    console.log('text', text);
    return text;
  } catch (error) {
    console.error(error);
    // Rethrow so the invocation is reported as a failure
    // instead of silently returning undefined.
    throw error;
  }
};


serverless.yml

service: bedrock
# app and org for use with dashboard.serverless.com

frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs18.x
  region: us-west-2


functions:
  bedrock:
    handler: handler.bedrock

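One thing to watch when deploying: by default, the function's execution role has no Bedrock permissions, so the call to InvokeModel will fail with an access-denied error. A minimal sketch of granting it via the Serverless Framework v3 `provider.iam` syntax (the model ARN below is my assumption, based on the region and model ID used in this article; adjust it for your account setup):

```yaml
provider:
  name: aws
  runtime: nodejs18.x
  region: us-west-2
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - bedrock:InvokeModel
          Resource:
            - arn:aws:bedrock:us-west-2::foundation-model/ai21.j2-mid-v1
```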

Finally, you can see the Lambda's response in the AWS console.


Part Two
In part two, I will add an API Gateway and build a simple React app that takes the prompt as input and displays the text generated by the model.
