Do You need an AI ASSISTANT? Let's build it using Amazon Q! (Part 1)

Generative AI is a branch of artificial intelligence that can create new content or data from scratch, such as text, images, audio, or video. It is powered by deep learning models that learn from large amounts of data and generate novel outputs based on a given input or context. One of the most exciting applications of generative AI is building AI assistants that can interact with customers or employees in natural language, provide personalized information or recommendations, and automate tasks or processes.

We are already seeing AI assistants launched in the market that use a chatbot experience to answer questions in natural language, drawing on our own personal or company documents as the source of information instead of relying only on the internet. In this way, we can get specialized, customized answers to our questions that apply a personal or business context, giving us more precise and specific insights.

To implement an AI Assistant there are key functional and technical steps that we need to follow:

Define the use case and the target audience: The first step is to identify the specific problem or opportunity that the AI Assistant will address, and the characteristics and needs of the customers or employees who will use it. As a sample use case, let's say we need an AI Assistant able to provide us with information about two domains: FSI (Financial Services Industry) trends and AWS services.

Collect and prepare the data: The second step is to gather and process the data that will be used to enrich the answers and evaluate the generative AI Assistant. The data should be relevant, diverse, and high-quality, and should reflect the domains and the context of our use case. For our sample use case, let's assume the requirement is to include information from the internet and from document repositories, and to be able to add ad-hoc relevant documents.

Choose and deploy the generative AI platform: The third step is to select and implement the generative AI platform that will enable the creation and management of the AI Assistant. The platform should be scalable, secure, and easy to use, and should provide tools and features for data ingestion, testing, and deployment. For our use case, we selected Amazon Q for Business as the platform to build our AI Assistant.

Design, build and test the AI Assistant: The fourth step is to design, build and test the AI assistant, ensuring that it meets the functional and non-functional requirements, such as performance, accuracy, reliability, usability, and ethics. The AI assistant should be evaluated and validated by real users, and feedback should be collected and incorporated.

Let's now look at the process and the lessons learned while creating an AI Assistant using Amazon Q for Business.

First, what is Amazon Q for Business?

Amazon Q for Business is a fully managed, generative AI-powered enterprise chat assistant created by Amazon. It allows organizations to deploy an AI agent within their company to enhance employee productivity. Amazon Q for Business is tailored specifically for organizational use by allowing administrators to connect internal systems and limit access based on user permissions and groups. This ensures employees only get information relevant to their roles from trusted sources within the company.

Let’s start then with the creation of our AI Assistant!

I will show the AI Assistant creation process using the AWS Console, which assumes that you have an AWS account and an IAM user with the administrative privileges required to provision the resources along the way. Alternatively, it is also possible to execute this process using the AWS CLI or the AWS SDKs; I will include a few SDK sketches along the way for reference.

To create our AI Assistant, we first need to create an Amazon Q Application.

Let’s sign into the AWS Management Console and open the Amazon Q console using the administrative user previously created.

First, we need to configure the initial settings of the Amazon Q Application:

Image description

Note in the image above:

  • Service Access Role: The IAM role that allows Amazon Q to access the AWS resources it needs to create our application. We can choose an existing role or create a new one. The policy associated with this role also allows Amazon Q to publish information to CloudWatch Logs, which will let us monitor the data ingestion processes.

  • KMS Encryption Key: Amazon Q encrypts our data by default using an AWS managed KMS key, which is the option selected for this use case. Alternatively, you can use a customer-managed key.

  • Not shown above, but requested during the creation process: it is highly recommended to include Application Tags to identify all the resources linked to our Amazon Q application. As an example, in our case we used the combination “Created_By | bbold-amazonq-app” for all the resources created. The SDK sketch below shows how these initial settings map to API parameters.
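For reference, here is a minimal sketch of the same initial setup using the AWS SDK for Python (boto3). The region, role ARN, and description are placeholders (not the actual values we used), and omitting the encryption configuration keeps the default AWS managed KMS key; double-check the parameter names against the qbusiness client documentation for your SDK version.

```python
import boto3

# Region and role ARN are placeholders; adjust to your account and setup.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

# Create the Amazon Q application. The service access role must allow Amazon Q
# to publish to CloudWatch Logs, as described in the settings above.
app = qbusiness.create_application(
    displayName="bbold-amazonq-app",
    roleArn="arn:aws:iam::123456789012:role/bbold-amazonq-service-role",  # placeholder
    description="AI Assistant for FSI industry trends and AWS services",
    # Omitting encryptionConfiguration keeps the default AWS managed KMS key.
    tags=[{"key": "Created_By", "value": "bbold-amazonq-app"}],
)

application_id = app["applicationId"]
print("Application ID:", application_id)
```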

Next, we need to create and select a Retriever for our Amazon Q application. An Amazon Q retriever determines where the conversational agent will search for answers to users' questions: it connects the application to the data sources containing the information that can be used to respond. A retriever must be selected when creating an Amazon Q application, and there are two types to choose from:

  • Native retriever: Allows connecting directly to data sources like knowledge bases, documentation repositories, or databases using Amazon Q data connectors.

  • Amazon Kendra retriever: Connects to an existing Amazon Kendra index to query its data. Amazon Kendra is an intelligent search service powered by machine learning. Amazon Kendra allows organizations to index documents from multiple sources and provide a unified search experience for internal information.

Let’s see the key Retriever settings:

Image description

Note in the image above:

  • Retriever: We chose "Use native retriever" to build an Amazon Q retriever for our Amazon Q application. When we select the native retriever for an Amazon Q application, Amazon Q will create an index to connect to and organize the data sources configured for the application. While the native retriever index is not an Amazon Kendra index, both serve a similar purpose of housing and organizing content for retrieval. The main difference is the native retriever index is managed internally by Amazon Q, whereas a Kendra index would be a separate service integration.

  • Index provisioning: When creating an Amazon Q application, the number of index provisioning units refers to the storage capacity allocated for the application's index. Each unit corresponds to 20,000 documents that can be stored, and each storage unit includes 100 hours of connector usage per month. The first storage unit is available at no charge for the lesser of 750 hours or 31 days. We selected 1 storage unit to try out Amazon Q without incurring charges during the initial evaluation period.

  • Similarly, we included Retriever Tags and Index Tags, using the same values as the Application Tags. The sketch below shows how these retriever and index settings map to API calls.
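As a companion to the console steps, here is a minimal boto3 sketch of creating the index with one storage unit and the native retriever pointing at it. It assumes the application was already created; the display names are placeholders, and the parameter names reflect my reading of the qbusiness API, so verify them against your SDK version.

```python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")
application_id = "99fd2bd8-bc98-4d34-8153-54a2a3b189b3"  # the applicationId returned at creation time

# Create the index with a single storage unit (capacity for up to 20,000 documents).
index = qbusiness.create_index(
    applicationId=application_id,
    displayName="bbold-amazonq-index",           # placeholder name
    capacityConfiguration={"units": 1},
    tags=[{"key": "Created_By", "value": "bbold-amazonq-app"}],
)
index_id = index["indexId"]

# Create the native retriever and point it at the index we just created.
retriever = qbusiness.create_retriever(
    applicationId=application_id,
    type="NATIVE_INDEX",
    displayName="bbold-amazonq-retriever",       # placeholder name
    configuration={"nativeIndexConfiguration": {"indexId": index_id}},
    tags=[{"key": "Created_By", "value": "bbold-amazonq-app"}],
)
print("Index:", index_id, "Retriever:", retriever["retrieverId"])
```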

Now, we are ready to connect with our data sources! Amazon Q application data sources are repositories of information that can be connected to an Amazon Q application to power the conversational agent's responses. When a user asks a question through the Amazon Q chat interface, the system will search across connected data sources to find relevant answers and responses.

Using the Amazon Q retriever, we can select several cloud and on-premises data sources, as we can see in the following images:

Image description

Image description

From my point of view, this is a VERY RELEVANT CAPABILITY of Amazon Q for Business: the power of our AI Assistant will depend on the variety and quality of the information sources we connect, so it is critical that organizations can securely leverage their information where it currently resides. This also goes a long way toward eliminating adoption barriers and reducing implementation time by avoiding non-priority information migration efforts.

When we talk, for example, about business, technical, and other types of corporate documents, we will frequently find them stored in repositories like S3, SharePoint, OneDrive, or Google Drive. Technical/code repositories will typically live in GitHub. The GOOD NEWS is that ALL these data sources are already covered by Amazon Q!

Remembering the requirements for our use case: “…include information from the internet, document repositories and to be able to add ad-hoc relevant documents.”, I was able to support this demand using the “Most Popular” data sources shown above. Let’s look at each one of them:

REQ #1: Including information from the Internet

To include information from the web, we will add a data source of type WEB CRAWLER. The Amazon Q Web Crawler connector crawls and indexes either public-facing websites or internal company websites that use HTTPS. When selecting websites to index, we must adhere to the Amazon Acceptable Use Policy and crawl only our own web pages or web pages we are authorized to index.

Now let's follow the typical process to add a data source. We start by specifying the name of our data source and the source of the URLs that we want Amazon Q to index. As you can see in the image below, there are multiple options, from specifying a list of URLs directly in the console to specifying sitemaps (sitemaps contain lists of a site's URLs that are available for crawling, helping crawlers comprehensively retrieve and index content).

Image description

For our use case, I decided to use the option to provide a Source URLs file stored in an S3 bucket, as we will also use the same bucket as a complementary data source later. The Source URLs file is just a text file with one URL per line, and we can include up to 100 starting-point URLs in it. As an example, we included in the file a list of URLs from our own website:
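To make this concrete, here is what such a file can look like and how to place it in the bucket with boto3. The URLs and the bucket name below are placeholders, not the actual ones we used.

```python
import boto3

# Placeholder list of starting-point URLs: one URL per line, up to 100 entries.
source_urls = "\n".join([
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/services/",
])

# Upload the Source URLs file to the bucket that we will later reuse as an S3 data source.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="bbold-amazonq-sources",          # placeholder bucket name
    Key="webcrawler/source-urls.txt",
    Body=source_urls.encode("utf-8"),
)
```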

After we define the list of URLs, we need to configure the security aspects of our connection to them. For that, Amazon Q offers several authentication alternatives. For this example, we will use “No Authentication” since we are crawling a public website.

Image description

In case you need to index internal websites, it also offers the option to use a web proxy and to configure the authentication credentials using AWS Secrets Manager.

Also in terms of security, for selected data source types Amazon Q optionally allows you to configure a VPC and security group that the Amazon Q data source connector will use to access your information source (for example, accessing an S3 bucket or a database through a specific VPC). Since our data source is accessible from the public internet, we didn't need to enable the Amazon VPC feature.

Image description

Next, we need to configure the IAM Role that Amazon Q will use to access our data source repository credentials and application content. Since this is the first time we are creating this data source, the recommendation is to create a new service role and give it a name:

Image description

In the background, this creates an IAM role and a customer managed policy that allow Amazon Q to access the S3 bucket where the Source URLs file is stored, access authentication credentials stored in AWS Secrets Manager (if applicable), manage the processing of the documents to be ingested, and store information related to group and user access to those documents.

Moving on, we need to define the Synchronization Scope for the documents to be ingested. We can define a sync domain range as seen in the following image:

Image description

For our use case, we selected the option “Sync domains with subdomains only” to prevent the scenario of indexing other third-party websites potentially linked from our company website.

Amazon Q also offers very interesting additional options regarding the scope of documents to be indexed (“Additional Configuration”). I would like to highlight some of them:

  • “Scope settings | Crawl depth”: The number of levels from the seed URL that Amazon Q should crawl. This parameter is important to ensure that all the webpages we need indexed are effectively included. The recommendation here is to review the structure of the websites you plan to index and determine how many levels you need. For example, a crawl depth of 3 means the crawler will go up to 3 levels deep from the seed URL: the seed URL itself (level 1), pages directly linked from the seed URL (level 2), and pages directly linked from level 2 pages (level 3).

  • “Scope settings | Maximum file size”: Where you define the maximum file size of a webpage or attachment to crawl. You need to calibrate this parameter based on your knowledge of the document sizes in your knowledge base.

  • “Include files that web pages link to”: When this option is selected, the crawler will index not only the content of the web pages specified in the seed URLs, but also any files linked from those pages. This allows the full content referenced from the web pages to be searchable (examples of linked files include PDFs, images, videos, audio files, and other attachments).

  • “Crawl URL Patterns” and “URL Pattern to Index”: Both parameters will help you to filter the scope of information to crawl and then to index. Amazon Q will index the URLs that it crawled based on the crawl configuration. The “crawl URL patterns” specify which URLs should be crawled to the specified crawl depth starting from the seed URLs. The “URL patterns to index” configuration can further target which of the crawled URLs should be indexed and searchable.

Once we are done defining the sync scope and filters, we need to configure how and when the synchronization jobs will run:

Image description

As you can see above, for how to sync (Sync mode) we can choose between a full synchronization and syncing only the updates; we selected the second option for our use case. In terms of the schedule (Sync run schedule), you select the frequency for the synchronization, from hourly to monthly, a custom schedule, or on demand. For our testing, we selected this last option.

Similarly to what we did for the Amazon Q Application, Retriever and Index, we can also specify Tags for the Data Source. Again, it’s highly recommended that you include tags to identify all resources that are related to your Amazon Q application for cost management purposes.

Image description

Finally, Amazon Q shows the Fields Mapping section. Amazon Q crawls data source document attributes or metadata and maps them to fields in your Amazon Q index. Amazon Q has reserved fields that it uses when querying your application. It shows the list of default attributes mapped for both web pages and attachments (they can be customized after data source creation):

Image description

With this, you can finish the configuration of the new WEBCRAWLER data source with the “Add Data Source” option. Having done that, we need to execute the first synchronization job, which will run in Full sync mode regardless of your configuration. After it completes, you can see the Details, Sync history, Settings, and Tags of your Amazon Q data source in the console:

Image description

In the image above, I would like to highlight the Sync run history section, where you can see the synchronization job results in terms of total items scanned, added/modified, deleted, and failed. This quantitative information lets you evaluate whether the crawling/indexing process covered everything you expected based on your configuration. At this point, Amazon Q also offers the possibility to retrieve log information from CloudWatch Logs using a link in the “Details” column:

Image description

As you can see above, Amazon Q creates a Log Group related to the Amazon Q application using the Application ID (“99fd2bd8-bc98-4d34-8153-54a2a3b189b3”) as identifier and creates a Log Stream for each execution using the Data source ID (“69374b88-5f35-4b3b-a0cc-0bfb2742c23a”) as identifier.

Amazon Q already creates the query that we can execute using CloudWatch Logs Insights. Running it, you will get the logs related to the synchronization job, where you will be able to see details about the URLs processed and any errors that must be fixed.
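Since we chose an on-demand schedule, the synchronization can also be triggered and inspected programmatically. Below is a small sketch that starts a sync job and runs a simple Logs Insights query; the application and data source IDs are the ones shown in the console above, the index ID is a placeholder, and the log group name and query string are illustrative assumptions, so copy the exact values (and the generated query) from the console.

```python
import time
import boto3

APP_ID = "99fd2bd8-bc98-4d34-8153-54a2a3b189b3"          # application ID shown in the console
DATA_SOURCE_ID = "69374b88-5f35-4b3b-a0cc-0bfb2742c23a"  # web crawler data source ID
INDEX_ID = "<your-index-id>"                             # placeholder

qbusiness = boto3.client("qbusiness", region_name="us-east-1")
logs = boto3.client("logs", region_name="us-east-1")

# Trigger an on-demand synchronization job for the web crawler data source.
qbusiness.start_data_source_sync_job(
    applicationId=APP_ID, indexId=INDEX_ID, dataSourceId=DATA_SOURCE_ID
)

# Query the application's log group for the most recent crawler messages.
# The log group name is an assumption based on what the console shows; copy the
# exact name and the pre-built query from the CloudWatch Logs Insights link.
query = logs.start_query(
    logGroupName=f"/aws/qbusiness/{APP_ID}",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="fields @timestamp, @message | sort @timestamp desc | limit 50",
)
time.sleep(5)  # Logs Insights queries are asynchronous; wait before fetching results
for row in logs.get_query_results(queryId=query["queryId"])["results"]:
    print(row)
```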

REQ #2: Including information from document repositories

Having completed the configuration of a WEBCRAWLER data source, we can move forward to our second requirement where we will leverage S3 buckets with documentation to be used by our AI Assistant. We need to configure an S3 data source for each bucket we plan to index.

The configuration of this data source includes very similar steps to the ones we already saw for the WEBCRAWLER: data source name, whether or not to use a VPC, IAM role creation, sync scope, sync mode and run schedule, tags, and finally field mapping.

Image description

As with the WEBCRAWLER data source, you can also use Logs Insights to get the logs related to the synchronization job, where you will be able to see details about the documents processed and any errors that must be fixed.
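Before running the sync, a quick sanity check is to list the objects in the bucket you configured, so you can later compare that count with the figures in the sync run history. A minimal sketch, reusing the same placeholder bucket name and an assumed prefix:

```python
import boto3

s3 = boto3.client("s3")

# Count the documents under the prefix configured as the sync scope, so we can
# compare this number with the "items scanned" figure in the sync run history.
paginator = s3.get_paginator("list_objects_v2")
total = 0
for page in paginator.paginate(Bucket="bbold-amazonq-sources", Prefix="documents/"):
    for obj in page.get("Contents", []):
        total += 1
        print(obj["Key"], obj["Size"])
print("Objects to be indexed:", total)
```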

REQ #3: Including ad-hoc relevant documents

While I recommend storing and organizing your documentation in an S3 bucket for durability, protection, and security purposes, we may also need to upload specific ad-hoc files to expand the knowledge base of our AI Assistant. For those cases we can use the FILE UPLOADER data source:

Image description

Image description

As you can see in the images above, the console shows the list of uploaded files that will be indexed and become part of the knowledge base of our AI Assistant.
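The console file uploader is the simplest path, but if you need to script these ad-hoc uploads, the Amazon Q Business API also exposes a BatchPutDocument operation that ingests documents directly into the index. The sketch below is an assumption-heavy illustration (the document field names follow my reading of the API and should be checked against the SDK docs); the file name and IDs are placeholders.

```python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

# Read a local ad-hoc document and push it straight into the application's index.
with open("fsi-trends-2024.pdf", "rb") as f:     # placeholder file name
    pdf_bytes = f.read()

qbusiness.batch_put_document(
    applicationId="99fd2bd8-bc98-4d34-8153-54a2a3b189b3",  # application ID from the console
    indexId="<your-index-id>",                             # placeholder
    documents=[
        {
            "id": "fsi-trends-2024",                       # document identifier of our choice
            "title": "FSI trends 2024",
            "contentType": "PDF",
            "content": {"blob": pdf_bytes},                # field names are assumptions; check the SDK docs
        }
    ],
)
```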

Once we have created all our required data sources, completed the initial synchronization jobs, and verified that they finished successfully and indexed our documentation, we are ready to TEST our AI Assistant! For that, AWS has already built a WEB EXPERIENCE, a web interface for our AI Assistant / CHAT BOT application that we can use to start interacting with it.

We can access the WEB EXPERIENCE through the console:

Image description

And open our AI Assistant / CHAT BOT application interface:

Image description

As you can see above, we can configure some attributes like the Title (name of the AI Assistant), Subtitle (objective of the AI Assistant), and Welcome Message. Our suggestion is to state in the Welcome Message which domains or subjects the AI Assistant is prepared to answer questions about, based on the information you provided through the data sources. This is very important to manage the expectations of the solution's potential users.
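These attributes can also be set through the API when creating the web experience. A minimal sketch, assuming the qbusiness create_web_experience parameters are named as below in your SDK version; the title, subtitle, and welcome message are just example values, and depending on your identity setup you may also need to pass a service role.

```python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

# Create the web experience (the chat UI) and set the user-facing texts.
web = qbusiness.create_web_experience(
    applicationId="99fd2bd8-bc98-4d34-8153-54a2a3b189b3",  # application ID from the console
    title="bbold AI Assistant",                            # example value
    subtitle="Your assistant for FSI industry trends and AWS services",
    welcomeMessage=(
        "Hi! I can answer questions about FSI industry trends and AWS services, "
        "based on our curated document repositories and selected websites."
    ),
)
print("Web experience:", web["webExperienceId"])
```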

Now, let’s see some examples of our AI Assistant in action!

SAMPLE PROMPT # 1 – Asking a question about industry trends (looking to use documents in the FILE UPLOADER): What are the key trends in 2024 for Corporate Banking?

Image description

Note that the answer includes a reference to the information source the Assistant used to prepare the response, and it lets you provide feedback on the quality of the response.

SAMPLE PROMPT # 2 – Asking a question about AWS services (looking to use documents in the S3 BUCKET): “Please prepare a summary of how AWS is supporting Banking Clients to improve their customers experience, including reference to customer examples and which AWS services are being used”

Image description

SAMPLE PROMPT # 3 – Asking a question about DEVOPS (looking to use documents in a WEBPAGE): “Please prepare a summary about how we can help our customers to accelerate the implementation of DEVOPS culture in their organizations”

Image description
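If you want to exercise the assistant outside the web experience, for example from a test script, the Amazon Q Business API also exposes a synchronous chat operation. A minimal sketch using boto3 follows; the user identity is a placeholder and, depending on how your application handles identity (for example IAM Identity Center), additional parameters may be required.

```python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

# Ask a single question; ChatSync returns the answer together with the
# source attributions (the references shown in the web experience).
response = qbusiness.chat_sync(
    applicationId="99fd2bd8-bc98-4d34-8153-54a2a3b189b3",  # application ID from the console
    userId="test-user@example.com",                        # placeholder user identity
    userMessage="What are the key trends in 2024 for Corporate Banking?",
)

print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"), source.get("url"))
```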

As you can see, using Amazon Q for Business we have been able to implement an AI Assistant. From here, we can continue enriching the KNOWLEDGE BASE by adding more documents to the data sources and/or adding more data sources, as well as preparing our PROMPTS LIBRARY with templates that we can reuse to improve our productivity and maximize the quality of the responses obtained from the AI Assistant.

It is important to highlight that these will be ON-GOING activities: we want our Assistant to always be UP TO DATE with our latest, verified documentation, and we also want our team to keep improving their productivity by knowing when and how to use the AI Assistant.

As with any other cloud technology adoption process, it is critical to include a proper Organizational Change Management initiative to make sure the team is properly engaged, informed, and trained on this technology. They need to understand that it is a valuable tool at their disposal to gain productivity and efficiency, but that it DOES NOT ELIMINATE the need for “human in the loop” evaluation of the quality and applicability of the responses before they are used to fulfill internal or customer demands.

This is a critical success factor for eliminating adoption barriers, and it also generates very valuable feedback that the IT team can use to refine and enrich the data source contents.

Up to this moment, our AI Assistant is already operating for internal use. The next step is to DEPLOY the AI Assistant to our organization so that multiple users can leverage its capabilities with the required security measures in place.

Let's meet again in our next post and please feel free to share your feedback and comments!
