
Bedrock(ing) the Kasbah

Generative AI (GenAI) is all the rage nowadays. And for good reason! It's a powerful capability, and it's rapidly being integrated into many of the services and tools we use day to day.

I've played around with PartyRock (I'm actually running an upskilling gamification event for my company next month that is built on PartyRock), but decided it was past time to investigate Bedrock itself. For those who aren't tracking, PartyRock is a no-code app-generation tool that uses the foundation models available in Bedrock to take natural language prompts from its users and convert them into Large Language Model (LLM)-powered apps.

Ok, so back to Bedrock...

Bedrock is a fully managed service (read: serverless) that AWS made generally available in September 2023. It is designed to make foundation models (FMs) accessible through the console or API and to serve as a jump start for users who want to integrate LLMs into their solutions. You can customize the models with your own data so that their responses are tuned more specifically to what you want to see, but you don't have to.

Getting access to the foundation models

The first thing to note when starting to use Bedrock is that you actually have to specify which FMs you want to be able to use. You accomplish this by visiting the Bedrock console and clicking "Get Started", then scrolling down to "Model access". Then you'll need to click the "Manage model access" button to get to where you can select which models you want to be available in your account.
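
If you'd rather poke at this from the command line, you can also list the foundation models in a Region like so (note this shows the Region's model catalog, not which models you've actually been granted access to):

aws bedrock list-foundation-models \
--region us-east-1 \
--query "modelSummaries[].modelId" \
--output table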

Important note: You will need to be using an account with the correct IAM permissions set in order to manage the model access. What are those permissions, you ask? The easy button is to provide yourself with the managed policy AmazonBedrockFullAccess, but of course that isn't a great way to go about it for production systems. It'll work for experimentation purposes though. This policy was created in December 2023 along with the AmazonBedrockReadOnly managed policy.
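
If you do go the easy-button route, attaching that managed policy from the CLI looks like this (the user name is a placeholder; swap in whatever identity you're experimenting with):

# Attach the Bedrock full-access managed policy to an IAM user.
# "bedrock-tinkerer" is a placeholder user name.
aws iam attach-user-policy \
--user-name bedrock-tinkerer \
--policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess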

Second Important note: If you want to request access to Anthropic's models, you have to provide a use case justifying said access. I didn't feel like doing that (plus I'm not sure a use case of "Because I wanna play with it" would be sufficient), so I requested access to Titan Text G1 - Express, Titan Image Generator G1, Jurassic-2 Ultra, and SDXL 1.0. A nice blend of text and image FMs.

Third Important note: Access to models that aren't owned by Amazon is not instantaneous. The Titan models I requested showed as "Access Granted" immediately. The others took a little longer. Also important to note - even though the models showed as "Access Granted" in the Model access screen, they didn't show as available in the Providers screen as quickly.

I have some FMs to work with. Now what?

For basic experimentation, you can go to the playgrounds. I started with the Image playground, because of course I did. Images are fun!

First screen capture of Amazon Bedrock
As you can see in the image, there are some configurations you can tune. You can provide a reference image to help the generator do its work. You can choose whether you want to generate a whole new image or edit an existing image. If you choose to generate an image, you can specify things to exclude from the image using the negative prompt. From my experience, the negative prompt is hit or miss; I entered a few different things to exclude, and the generator sometimes listened and sometimes did not. Kind of like my dog Loki!
Screen capture of the Amazon Bedrock Image playground
As shown in this image, here's an instance of Loki not listening...

I like that you can adjust the number of images the generator should offer; you can choose between 1 and 5, and it defaults to 3 in the console or 1 in the API call. You can adjust the prompt strength as well. A higher prompt strength forces higher alignment to the prompt. You can set this value between 1.1 and 10.0, and it defaults to 8.0. You can also set the height and width of the generated image(s), as long as you stay within the permitted values identified on this documentation page. You can also set the seed value if you would like to see similar images from run to run.
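
To make those settings easier to scan, here's the same style of request body written out readably; the fields mirror the command further down, and you could save this to a file and pass it with --body file://request.json instead of inlining escaped JSON:

cat > request.json <<'EOF'
{
  "taskType": "TEXT_IMAGE",
  "textToImageParams": {
    "text": "Photo-realistic purple unicorn in space jumping over the moon",
    "negativeText": "ground"
  },
  "imageGenerationConfig": {
    "numberOfImages": 3,
    "quality": "premium",
    "cfgScale": 8.0,
    "width": 512,
    "height": 512,
    "seed": 0
  }
}
EOF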

It took me several tries playing with my prompt before I ended up with images that I was happy with. You can see the final image below, and following that I've included the code for the API call that produced it.

Purple unicorn jumping over the earth, with a starry sky and the moon behind it

aws bedrock-runtime invoke-model \
--model-id amazon.titan-image-generator-v1 \
--body "{\"textToImageParams\":{\"text\":\"Photo-realistic purple unicorn in space jumping over the moon, with earth visible in the background. Unicorn must have 4 legs and 1 horn.\",\"negativeText\":\"There should not be any ground visible in the picture\"},\"taskType\":\"TEXT_IMAGE\",\"imageGenerationConfig\":{\"cfgScale\":9.5,\"seed\":0,\"quality\":\"premium\",\"width\":512,\"height\":512,\"numberOfImages\":3}}" \
--cli-binary-format raw-in-base64-out \
--region us-east-1 \
invoke-model-output.txt
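
The output file holds the generated images as base64 strings (the Titan image response puts them in an images array), so decoding the first one into a viewable PNG is a one-liner:

# Pull the first base64-encoded image out of the response and decode it.
# (On older macOS, base64 --decode may need to be base64 -D.)
jq -r '.images[0]' invoke-model-output.txt | base64 --decode > unicorn.png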

Careful readers may notice that the generator reversed the moon and earth in the image compared to the actual request.

The good and the not so good

As always with image generation, the quality of the creation will be largely dependent upon the quality of the prompt. My prompt was ok, not great, so the image generated was cool, but not perfect. You can learn much more about prompt engineering and how to use it more effectively by reading through the Amazon Bedrock user guide section on Prompt engineering. There are a bunch of really good examples and recommendations there!

So, a short list of the good:

  • Wide range of FMs available to use
  • Good guidance and explanation on use of the FMs embedded into the console itself
  • Great ability to invoke the FMs in Bedrock using the API (see the text example right after this list)
  • The playground is lovely for helping you test out your prompts and tuning before releasing the model to the world
  • The credits I have in my account can be applied to Amazon Bedrock
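
To illustrate that API point, here's roughly what invoking one of the text models looks like from the CLI. This is a sketch assuming Titan Text G1 - Express and its inputText/textGenerationConfig request shape:

aws bedrock-runtime invoke-model \
--model-id amazon.titan-text-express-v1 \
--body "{\"inputText\":\"Write a haiku about purple unicorns.\",\"textGenerationConfig\":{\"maxTokenCount\":256,\"temperature\":0.7}}" \
--cli-binary-format raw-in-base64-out \
--region us-east-1 \
text-output.txt

The generated text comes back in the results array of the output file.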

And a short list of the not so good:

  • Claude (Sonnet, Haiku, Instant) isn't available unless you provide a use case. This makes me sad, even though I'm sure it's an Anthropic requirement, not an AWS requirement
  • If you wish to train (and more importantly use) your own custom model built on an FM, then you have to purchase provisioned throughput, which must be purchased in 1-month or 6-month commitment terms (see the sketch after this list)
  • Pricing varies widely between the different models. Make sure you check the Bedrock pricing page for details before you go down the path of deploying a Bedrock-sourced model into production
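
For reference on that provisioned throughput point, purchasing it from the CLI looks something like the following. Treat this as a sketch with placeholder names; the commitment terms are expressed as OneMonth or SixMonths:

# Purchase provisioned throughput for a custom model (placeholder values).
aws bedrock create-provisioned-model-throughput \
--provisioned-model-name my-custom-model-throughput \
--model-id <arn-of-your-custom-model> \
--model-units 1 \
--commitment-duration OneMonth \
--region us-east-1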

That's it. I'm collecting data to train a custom model off of a base model, but don't have enough yet. I'll try to pop back over here once I've had a chance to do that to walk y'all through that process. Until then, go Bedrock the Kasbah, and have some fun integrating GenAI into your applications!
A blue-toned kasbah
