This is a step-by-step guide on how to build an interactive web application that renders interactive elements and integrates Amazon S3 and Amazon Bedrock with a Streamlit application. With a custom-designed, interactive web UI, we can showcase a complete data exploration application along with generative AI capabilities.
(Note: to keep the focus on the deployment process, some prerequisite steps, such as Amazon EC2 configuration, are not covered in this guide.)
- Use Case
- AWS Architecture
- Step-by-step guide on deployment
  - Connect to virtual machine using EC2 Instance Connect
  - Deploy the Streamlit Application to Amazon EC2
- Application Code Tour
  - Build Streamlit Basic WebUI
  - Build an interactive file upload webpage
  - Build a generative AI image generator
- Conclusion
Streamlit is an open-source Python library that makes it easy to create and share custom web apps. Streamlit lets you transform Python scripts into interactive web apps in minutes, instead of weeks. [1]
There are a number of use cases for building with Streamlit, such as:
- Building dashboards and data apps
- Generating reports from large documents
- Creating generative AI chatbots
With the current surge of LLM applications, Streamlit allows developers to deliver dynamic, interactive apps with only a few lines of code.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. [2]
Amazon Bedrock takes advantage of the latest generative AI innovations, with easy access to a choice of high-performing FMs from leading AI companies such as Meta, Mistral AI, Stability AI, and Amazon.
There are many different foundation models available in Amazon Bedrock, including text, chat, and image models. Model Evaluation on Amazon Bedrock allows you to use automatic and human evaluations to select FMs for a specific use case. To tailor a model to your own needs, you can go from generic models to ones that are specialized and customized for your business and use case. [3]
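To make the "single API" idea concrete, here is a minimal sketch of invoking a text model through the Bedrock runtime. It assumes the Amazon Titan Text Express model (amazon.titan-text-express-v1) is enabled in your account and region; the request and response field names follow the Titan text schema, and the helper names are illustrative:

```python
import json

def build_titan_text_request(prompt, max_tokens=256, temperature=0.5):
    # Titan text models take "inputText" plus a "textGenerationConfig" block
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def invoke_titan_text(prompt):
    # Requires AWS credentials and Bedrock model access; not called at import time
    import boto3
    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",
        accept="application/json",
        contentType="application/json",
        body=build_titan_text_request(prompt),
    )
    result = json.loads(response["body"].read())
    # Titan text responses carry generated text under "results"
    return result["results"][0]["outputText"]
```

Swapping to a different FM is largely a matter of changing the modelId and the request body schema; the invoke_model call itself stays the same.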
In this use case, we will use Amazon Titan models to build generative AI features. Combined with Streamlit's easy-to-deploy framework, we can easily develop our own data and AI products.
1. Use Case
In this use case, we are going to build an interactive web app that allows users to explore data, upload files, and generate images with generative AI.
We will embed Amazon Bedrock FMs into the application, deploy a Streamlit application to an Amazon EC2 instance, and allow users to begin interacting with the application.
2. AWS Architecture
In the development process, we will:
- Deploy a Streamlit application to Amazon EC2
- Render Streamlit elements, such as a chatbot function, in a web application
- Integrate Amazon S3 and Amazon Bedrock with a Streamlit application
3. Step-by-Step guide on deployment
There are a number of prerequisite steps before deploying the actual Streamlit application. Before starting the steps below, you should have:
- Created an Amazon EC2 instance
- Set up an S3 bucket to store uploaded files
- Created a GitHub repository to hold the Streamlit Python code
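The S3 bucket prerequisite can also be scripted. The sketch below, with illustrative helper names, validates the name against a subset of the S3 naming rules (3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit) before a guarded create_bucket call:

```python
import re

def is_valid_bucket_name(name):
    # Subset of the S3 rules: 3-63 chars, lowercase letters/digits/dots/hyphens,
    # starting and ending with a letter or digit
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name))

def create_upload_bucket(name, region):
    # Requires AWS credentials; bucket names are globally unique.
    # Note: for us-east-1, CreateBucketConfiguration must be omitted.
    import boto3
    if not is_valid_bucket_name(name):
        raise ValueError(f"invalid bucket name: {name}")
    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
```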
3.1 Connect to virtual machine using EC2 instance connect
In the EC2 console, right-click the pre-configured EC2 instance name, connect to the instance using EC2 Instance Connect, and access a shell.
The following deployment process will be completed in this shell environment.
3.2 Deploy the Streamlit Application to Amazon EC2
Note: there will be a detailed walkthrough of the application code in Section 4.
In the terminal, use the following set of commands to configure the AWS account credentials:
aws configure set aws_access_key_id <Your access_key_id> &&
aws configure set aws_secret_access_key <Your secret_access_key> &&
aws configure set default.region <Your AWS region>
In the above command, provide your AWS account credentials to be configured.
The following command will display the Amazon S3 bucket name required by the Streamlit application:
echo $BUCKET_NAME
In this use case, the configured S3 bucket name is displayed in the terminal output.
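The application code later reads this bucket name from the BUCKET_NAME environment variable (via os.environ). A defensive lookup, sketched below with an illustrative helper name, fails fast with a clear message if the variable was never exported:

```python
import os

def get_bucket_name():
    # Fail fast if BUCKET_NAME was never exported in this shell
    name = os.environ.get("BUCKET_NAME")
    if not name:
        raise RuntimeError("BUCKET_NAME is not set; export it before starting the app")
    return name
```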
Enter the following command to clone the GitHub repository:
git clone https://github.com/<Your github directory>.git
You may set up different branches in your GitHub repo to store application code.
To deploy the application, enter the following set of commands:
cd src/ && pip install -r requirements.txt
This command installs the required Python packages for the Streamlit application.
To start the Streamlit application with its web UI (a Python code walkthrough follows in the next section):
streamlit run Basics.py
This command starts the Streamlit application on the EC2 instance.
The Basics.py file is the main application file that you will run. Streamlit will use the pages/ directory to render the additional pages in a sidebar.
Copy the external URL, and open a new tab to start the Streamlit application.
4. Application Code Tour
You may store the Streamlit application code in a src/ directory.
To run the application, first install the necessary libraries listed in the requirements.txt file:
streamlit
boto3
pandas
streamlit_pdf_viewer
- The streamlit library will be referenced in each of the application files. This library contains the UI elements that render the various widgets and fields used in the application.
- boto3 will be used to interact with AWS services. To use this library, you must configure AWS credentials within the environment where the Streamlit application is served.
- The pandas library is used to present data within the application.
- streamlit_pdf_viewer is a third-party, custom Streamlit component that allows you to control how PDF files are displayed.
4.1 Build Streamlit Basic WebUI
Refer to the Python code below to create a simple, basic web UI that displays text, data, a chat input, and a Markdown editor:
import streamlit as st
import pandas as pd
import random

# Streamlit page configuration
st.set_page_config(layout="wide", page_title="Streamlit Basics")
st.title("Streamlit Basics")

# Streamlit container
with st.container(border=True):
    # Tabs
    text, data, chat, markdown = st.tabs(["Text", "Data", "Chat", "Markdown Editor"])

    # Tab content
    # Displaying text
    with text:
        st.title("Titles")
        st.divider()
        st.header("Headers")
        st.subheader("Subheaders")
        st.text("Normal text")
        st.markdown("***Markdown Text***")
        st.code("for i in range(8): print(i)")

    # Displaying data as a data frame
    with data:
        st.subheader("Display data using a data frame")
        df = pd.DataFrame(
            {
                "name": ["Spirited Away", "Princess Mononoke", "My Neighbor Totoro"],
                "url": ["https://m.imdb.com/title/tt0245429/", "https://m.imdb.com/title/tt0119698/", "https://m.imdb.com/title/tt0096283/"],
                "reviews": [random.randint(0, 1000) for _ in range(3)],
                "views_history": [[random.randint(0, 5000) for _ in range(30)] for _ in range(3)],
            }
        )
        st.dataframe(
            df,
            column_config={
                "name": "Title",
                "reviews": st.column_config.NumberColumn(
                    "Reviews",
                    help="Total number of reviews",
                    format="%d ⭐",
                ),
                "url": st.column_config.LinkColumn("IMDb page"),
                "views_history": st.column_config.LineChartColumn(
                    "Views (past 30 days)", y_min=0, y_max=5000
                ),
            },
            hide_index=True,
        )

    # Chat input and variables
    with chat:
        st.subheader("Enter in a prompt to the chat")
        prompt = st.chat_input("Say something")
        if prompt:
            st.write(f"You entered the following prompt: :blue[{prompt}]")

    # Displaying Markdown and editor
    with markdown:
        st.subheader("Edit and render Markdown")
        md = st.text_area("Type in your markdown string (without outer quotes)")
        with st.container():
            st.divider()
            st.subheader("Rendered Markdown")
            st.markdown(md)
šŸ’” Containers (st.container()) can be inserted into an app to provide structure for the elements you choose to place inside.
Tabs within the container are created using the st.tabs() call. This method accepts a list of tab names as an argument and outputs separate tab objects.
The st.dataframe element accepts the df object as an argument along with a column_config. This configuration dictates how the data frame is displayed on the page.
Display data using a data frame:
A simple chatbot that allows the user to enter a prompt:
This Streamlit Basics webpage displays simple data and text information, as well as a chatbot function.
Further use cases could extend this into data portfolios with an embedded chat feature that allows users to ask questions about the data.
4.2 Build an interactive file upload webpage
Refer to the Python code below to create an interactive webpage for users to upload files to an S3 bucket:
import os
import boto3
import streamlit as st
from streamlit_pdf_viewer import pdf_viewer
from io import BytesIO

# Amazon S3 client
s3 = boto3.client('s3')
bucket_name = os.environ['BUCKET_NAME']

st.set_page_config(layout="wide")

# Streamlit columns
upload_s3, read_s3 = st.columns(2)

# Column 1: Upload to Amazon S3 using Boto3
with upload_s3:
    st.subheader("Upload to Amazon S3")
    obj = st.file_uploader(label=f"Uploading to: :green[{bucket_name}]")
    if obj is not None:
        s3.upload_fileobj(obj, bucket_name, obj.name)

# Column 2: Read from Amazon S3 using Boto3
with read_s3:
    st.subheader("Read from Amazon S3")
    response = s3.list_objects_v2(Bucket=bucket_name)
    object_list = []
    if 'Contents' in response:
        for obj in response['Contents']:
            if not obj['Key'].endswith('/'):
                object_list.append(obj['Key'])
    else:
        st.write("S3 bucket is empty")
    selected_obj = st.selectbox(f"Selecting from: :green[{bucket_name}]", object_list, index=None)
    st.caption(f"You selected: :blue[{selected_obj}]")

st.divider()

# Displaying the selected Amazon S3 object
if selected_obj is None:
    st.caption("Please select an object from the S3 bucket")
else:
    response = s3.get_object(Bucket=bucket_name, Key=selected_obj)
    body = response['Body'].read()
    # Displaying the object based on the file type
    if selected_obj.endswith(".png") or selected_obj.endswith(".jpg"):
        st.image(BytesIO(body))
    elif selected_obj.endswith(".pdf"):
        pdf_viewer(body)
    else:
        st.write(body.decode('utf-8'))
šŸ’” The boto3 library will create an s3 client that interacts with the S3 service. The custom streamlit_pdf_viewer component and the BytesIO module help render selected S3 objects on the page.
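The suffix checks in the display branch can also be factored into a small pure function, which is easy to unit test and to extend with more file types. This is a sketch with an illustrative name, not part of the original app:

```python
def choose_renderer(key):
    # Map an S3 object key to the kind of Streamlit element used to display it
    lowered = key.lower()
    if lowered.endswith((".png", ".jpg", ".jpeg")):
        return "image"
    if lowered.endswith(".pdf"):
        return "pdf"
    return "text"
```

Lower-casing the key first means keys such as report.PDF are handled the same way as report.pdf, which the plain endswith checks above would miss.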
Let's test it with a file upload. In the left column (column 1 in the code above), we upload a PitchBook PDF document.
Then we go to S3 to verify that the PDF has been uploaded successfully.
4.3 Build a generative AI image generator
Refer to the Python code below to create the Amazon Bedrock Titan Image Generator UI:
import boto3
import json
import base64
import streamlit as st
from io import BytesIO

# Amazon Bedrock client
bedrock = boto3.client('bedrock-runtime')
bedrock_model_id = "amazon.titan-image-generator-v1"

# Convert image data to BytesIO object
def decode_image(image_data):
    image_bytes = base64.b64decode(image_data)
    return BytesIO(image_bytes)

# Invoke Bedrock image model to generate image
def generate_image(prompt):
    body = json.dumps(
        {
            "taskType": "TEXT_IMAGE",
            "textToImageParams": {
                "text": prompt
            },
            "imageGenerationConfig": {
                "numberOfImages": 1,
                "quality": "standard",
                "height": 768,
                "width": 768,
                "cfgScale": 8.0,
                "seed": 100
            }
        }
    )
    response = bedrock.invoke_model(
        modelId=bedrock_model_id,  # pass the variable, not the string "bedrock_model_id"
        accept="application/json",
        contentType="application/json",
        body=body
    )
    response_body = json.loads(response["body"].read())
    image_data = response_body["images"][0]
    return decode_image(image_data)

# Streamlit UI
with st.container():
    st.header("Amazon Bedrock Titan Image Generator", anchor=False, divider="rainbow")
    input_column, result_column = st.columns(2)

    # Text Input
    with input_column:
        st.subheader("Describe an image", anchor=False)
        prompt_text = st.text_input("Example: Two dogs sharing a bowl of spaghetti", key="prompt")

        # Clear field function accessing session state
        def clear_field(prompt):
            st.session_state.prompt = prompt

        # Generate and Clear buttons
        generate, clear = st.columns(2, gap="small")
        with generate:
            generate_button = st.button("Generate", use_container_width=True)
        # Clear field callback
        with clear:
            st.button('Clear', on_click=clear_field, args=[''], use_container_width=True)

    # Resulting image column
    with result_column:
        st.subheader("Generated image", anchor=False)
        st.caption('Your image will appear here.')
        if generate_button:
            # Displays spinner + message while executing the generate_image function
            with st.spinner("Generating image..."):
                image = generate_image(prompt_text)
                st.image(image, use_column_width=True)
šŸ’” The generate_image function passes the imageGenerationConfig, taskType, and user prompt to the Bedrock invoke_model method. The method returns a JSON body that is parsed to retrieve the generated image.
The generated image is decoded using the decode_image function and returned.
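The decode_image helper can be sanity-checked locally with a base64 round trip, with no Bedrock call needed:

```python
import base64
from io import BytesIO

# Same helper as in the app: base64 string -> BytesIO
def decode_image(image_data):
    image_bytes = base64.b64decode(image_data)
    return BytesIO(image_bytes)

# Round trip: encode some bytes, decode them back, and compare
original = b"\x89PNG fake image bytes"
encoded = base64.b64encode(original).decode("utf-8")
assert decode_image(encoded).read() == original
```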
The clear_field function interacts with the Streamlit session state. It accepts a prompt and updates the value of st.session_state.prompt. Session state is used to store and persist state that can be manipulated with callback functions. clear_field is a callback function that is invoked when a user clicks the Clear button.
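Outside Streamlit, this callback-plus-state pattern boils down to mutating a persistent mapping before the next render. A plain-Python sketch (the dict below is a stand-in for st.session_state, and the names are illustrative):

```python
# A stand-in for st.session_state: a mapping that survives across "reruns"
session_state = {"prompt": "Two dogs sharing a bowl of spaghetti"}

def clear_field(value):
    # Mirrors the app's callback: overwrite the stored prompt
    session_state["prompt"] = value

# Simulate the user clicking Clear: Streamlit runs the callback first,
# then reruns the script, which reads the updated state
clear_field("")
assert session_state["prompt"] == ""
```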
The result_column checks whether the generate_button value is true and calls the st.spinner element to display a temporary message while the generate_image function works in the background.
The resulting image is then passed to an st.image element to render it on the page.
Let's try the image generator by creating an image from an entered prompt:
5. Conclusion
In this article, we have:
- Introduced the Streamlit Python library for creating data apps and interactive apps with minimal code
- Deployed a Streamlit application to an Amazon EC2 instance
- Interacted with each application webpage and its interactive features
- Walked through the application code that integrates Amazon Bedrock and Amazon S3
Reference and further readings:
1. What is Streamlit? https://github.com/streamlit/streamlit
2. What is Amazon Bedrock? https://aws.amazon.com/bedrock/
3. Amazon Bedrock Developer Experience. https://aws.amazon.com/bedrock/developer-experience/
4. Quickly build Generative AI applications with Amazon Bedrock. https://community.aws/content/2ddby9SeCKALvSz0CWUtx4Q4fPX/amazon-bedrock-quick-start?lang=en