For enterprises and scaling startups, managing information about your product, customers, and business is challenging. Team members need to reference information stored in many different places, which can make it harder for teams to onboard, ramp up, and get their work done. Unless your company has an exceptionally organized and centralized knowledge base—or the luxury of employing a dedicated librarian—few folks will know how to find every answer.
When new team members run into questions about best practices and effective customer interactions, it’s not uncommon for them to struggle to find the precise information they need. As a result, they rely on their colleagues for context and expertise—and while collaboration and supportiveness are great, that’s not exactly a scalable system.
However, if expert knowledge can be unlocked and made instantly accessible to internal collaborators, it can be a game changer that saves time and energy—and equips everyone with the information they need to be successful in their work.
So, that’s what we did.
Our friendly neighborhood “knowledge base bot” helps our Sales, Support, and Success teams quickly answer technical questions for customers—and has become one of our most actively used internal applications. If you too have lots of great documentation and are eager to leverage its full potential, read on to learn how we built an AI-powered enterprise search tool in two days (!!) using Retool, GPT-4, and a vector database.
The challenge: Scaling the knowledge of Retool’s field engineering team
For field engineers here at Retool, each day presents a different set of challenges—from debugging a deployment on Kubernetes to advising customers on the most performant way to architect a Retool app. The team is constantly striving to respond to questions and resolve issues, which allows us to develop a deep understanding of both our product and customers.
But as the field engineering team has grown, it’s gotten harder to capture and share institutional knowledge with everyone who might benefit from it. For example, many team members were encountering the same deployment issue, and each person was independently figuring out a solution. Problem-solving like this, while admirable, is inherently inefficient. Instead of solving the same issue five times, we could have addressed five different problems. To reduce redundancy, the field eng team tried designating one person as the domain expert for deployments, but this solution presented a different set of challenges.
It’s well understood that relying solely on an individual as the source of information can create bottlenecks and jeopardize productivity and customer experience. What happens if the subject-matter expert is out sick? Goes on vacation? Leaves the company? To mitigate this risk, field eng proactively creates documentation to preserve and share knowledge and to ensure its accessibility regardless of who’s around on a given day. We publish documents on Confluence, Google Docs, public docs, and other easily discoverable locations for our target users. But while we’d written plenty of docs, at scale that fragmentation was making answers harder and harder to find.
Our aim was to eliminate the need for individuals to search through multiple sources for answers or for multiple people to independently solve the same problem in isolation. This meant we needed a solution that could effectively scale the knowledge within the field engineering team while minimizing repetitive work. And an ideal system would empower team members with instant access to domain knowledge so they could quickly find answers.
The context: How we got started building an AI-powered enterprise search tool
Ultimately, we decided to create an AI-powered enterprise search tool that would transform our diverse internal information hubs into a comprehensive and consolidated knowledge base.
To get there, we embarked on building a knowledge database by integrating data from several different systems that our company uses to track product information and customer engagements. We seeded the database with text found in Confluence, Slack, and our public documentation site. (If you decide to build your own knowledge-base bot, you can opt to seed the knowledge base with text found in these or other systems. The key is to select systems that align with the types of questions you aim to address through the knowledge database.)
We built the enterprise search tool with a team of three engineers in a two-day company hackathon. Initially, the plan involved building the backend from scratch, but we were able to save about a week of development time by using Retool Workflows to build the backend layer. This meant we didn’t have to worry about deploying and managing a separate backend server. Plus, Retool provided seamless integration between the backend and our frontend, which was also built using Retool.
What is enterprise search, anyway?
Enterprise search is the systematic approach of searching, retrieving, and indexing information from diverse resources and datasets within an organization. These can include, for example, documents, emails, Slack messages, databases, and intranet sites. The primary objective of enterprise search is to facilitate easy access to and navigation of relevant information, particularly in large businesses. Enterprise search software typically utilizes advanced data processing techniques, natural language processing, and machine learning algorithms. These technologies work in tandem to deliver accurate, efficient, and secure search results.
AI-powered enterprise search software pairs a machine learning model with the knowledge database. We used GPT-4, a model created by OpenAI, but you can use any large language model (LLM) that suits your requirements. It’s fairly easy to test and swap out different LLMs with Retool by connecting to their APIs and calling them from your Retool apps, so once you’ve explored specs, data privacy and security considerations, and other relevant attributes, you can experiment as you like to find the most suitable LLM for your specific needs.
Keep in mind that using OpenAI’s API involves the transmission and processing of data by an external party. OpenAI’s privacy policy states that customer data submitted through their API will not be used for training and enhancing their models unless explicit consent is given to share the data for this purpose. Still, it’s always wise to consult with your security team before leveraging an LLM to ensure compliance with your organization’s security measures and protocols.
Augmenting ML models
Once we picked a model, the next step was to determine the best approach for working with it. There are two primary strategies used to augment machine learning models with external sources of data:
- Fine-tuning: Train a pretrained model on an external dataset
- In-context learning: Insert relevant data into the input prompt
Fine-tuning is a technique that utilizes a task-specific dataset to train a model and enhance its specialized knowledge. For example, we can use our support ticket catalog to produce a dataset of questions and answers. By training an existing model on this dataset, it can become highly proficient at answering questions similar to those in the training data. Fine-tuning allows the model to excel in solving a specific type of question, improving its accuracy and effectiveness within that particular domain.
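To make that concrete, here’s a rough sketch of what such a dataset could look like in OpenAI’s JSONL fine-tuning format. The Q&A pairs below are invented placeholders, not our actual support data:

```python
import json

# Invented Q&A pairs standing in for a dataset mined from support tickets.
qa_pairs = [
    ("How do I rotate an S3 resource's access keys?",
     "Open the resource settings and update the credentials ..."),
    ("Why won't my self-hosted deployment start?",
     "Check the container logs for license or port errors ..."),
]

# OpenAI's fine-tuning API expects JSONL: one full chat exchange per line.
with open("finetune.jsonl", "w") as f:
    for question, answer in qa_pairs:
        record = {
            "messages": [
                {"role": "system", "content": "You are a Retool support agent."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")
```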
In-context learning doesn’t involve retraining an existing model. Instead, it involves incorporating relevant data and additional context into the prompt. This approach enables the model to handle unfamiliar questions without requiring additional training, making the solutions generated by the model more adaptable and generalizable. By decoupling the knowledge base from the model itself, the model can respond effectively to a wider range of queries.
Since our internal knowledge base is diverse and requires the bot to handle unfamiliar tasks, we chose to architect the solution using in-context learning.
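For contrast, here’s the in-context pattern in miniature. The helper name and wording are ours for illustration; the point is that retrieved knowledge is pasted into the prompt, and the model itself never changes:

```python
# The essence of in-context learning: retrieved knowledge goes into the
# prompt, so the underlying model needs no retraining.
def build_prompt(question: str, threads: list[str]) -> str:
    context = "\n\n".join(threads)
    return (
        "You are a support agent. Answer the question using only the "
        "context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```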
The tutorial: How to build an AI-powered enterprise search tool
Okay—let’s get into it. To build a search tool for an enterprise, you’ll need:
- An OpenAI API key (or your preferred LLM key)
- Pinecone (or another vector database)
- Retool Workflows
- Retool Database (or another database)
Here’s the outline that we’ll follow:
- Build a knowledge database
- Seed it with relevant text
- Build the enterprise search tool
Before building the enterprise search tool, our initial step involves creating a knowledge database and seeding it with pertinent data. The knowledge database consists of “threads,” which are units of unstructured text. A thread can represent various forms of communication, such as a conversation in Slack or a specific section of a Confluence document. Each row in the database corresponds to a distinct thread, capturing its content and context.
We opted to use a vector database because it’s purpose-built for similarity search over embeddings, and, for our purposes, that allows for much better performance than a traditional relational database.
Setting up the vector database
A vector database is a specialized storage system designed to store high-dimensional vectors—numeric embeddings of unstructured data such as text or images—and to retrieve them by similarity. It provides advanced capabilities for querying and manipulating this kind of data. Vector databases are frequently used for AI applications such as semantic search, recommendation systems, computer vision, and natural language processing—applications that depend on finding the items “nearest” to a query in embedding space rather than on exact matches.
The knowledge database we’re creating has at least two columns: one containing the unstructured text, and another column that stores the vector representation of the corresponding unstructured text. To make searching for and retrieving relevant information efficient, we’ll use a vector similarity search: When a user submits a question, we convert the question to a vector representation, and the search matches the question with the most relevant threads in the database.
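If vector similarity search is new to you, here’s a toy illustration of the underlying math using made-up 3-dimensional embeddings (real text embeddings have hundreds or thousands of dimensions):

```python
import numpy as np

# Toy 3-dimensional "embeddings"; real ones are far higher-dimensional.
question_vec = np.array([0.2, 0.9, 0.1])
thread_vecs = np.array([
    [0.1, 0.8, 0.2],  # hypothetical thread about S3 resources
    [0.9, 0.1, 0.3],  # hypothetical thread about SSO setup
])

# Cosine similarity: the closer to 1, the more semantically similar.
sims = thread_vecs @ question_vec / (
    np.linalg.norm(thread_vecs, axis=1) * np.linalg.norm(question_vec)
)
print("Most relevant thread index:", int(np.argmax(sims)))
```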
In our case, the data model manifested as two stores—a Postgres table and a Pinecone index—joined on a shared primary key.
Seeding the knowledge database
When creating your own knowledge database, you should seed it with data from the systems and platforms your team uses to document pertinent information. We chose our most frequently accessed information repositories: Slack, Confluence, and our public documentation pages.
To seed the knowledge database, we built a Retool Workflow that scrapes data from those three systems. This workflow is designed to extract relevant information from these sources, chunk it into smaller, more manageable threads, and subsequently upload the threads to our Retool Database. (Chunking is a critical step. Dividing the data into threads enhances the searchability and accessibility of specific information within the knowledge database.)
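Our chunking ran inside a Retool Workflow, but a rough plain-Python equivalent of the step might look like this, with chunk size and overlap chosen arbitrarily for illustration:

```python
def chunk_text(text: str, max_chars: int = 1500, overlap: int = 200) -> list[str]:
    """Split one document into overlapping "threads" of bounded size."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap keeps context that straddles a chunk boundary searchable.
        start = end - overlap
    return chunks
```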
Our next step was to generate embeddings of the threads and store those embeddings in the vector database. The OpenAI embeddings API accepts text as an input and returns a vector, so we used a Retool Workflow to iterate through each thread, shipping each thread to the embeddings API for vector representation. We then uploaded the vectors to our vector database. Building this workflow took less than an hour, largely because we didn’t need to fuss with complex database connections, and we could use visual programming concepts like the loop block to ship faster.
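Here’s a sketch of that loop outside of Retool, using the OpenAI and Pinecone Python clients. The index name, keys, and sample thread are placeholders, and exact method names vary across SDK versions:

```python
import openai
import pinecone

openai.api_key = "sk-..."  # placeholder
pinecone.init(api_key="...", environment="us-west1-gcp")  # placeholder
index = pinecone.Index("knowledge-base")  # hypothetical index name

# In our setup, these rows came from the Retool Database.
threads = [
    {"id": 1, "text": "You can connect to Amazon S3 by creating a resource ..."},
]

for thread in threads:
    resp = openai.Embedding.create(
        model="text-embedding-ada-002",
        input=thread["text"],
    )
    vector = resp["data"][0]["embedding"]
    # The Pinecone record reuses the Postgres primary key, which is what
    # lets us join the two stores later.
    index.upsert([(str(thread["id"]), vector)])
```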
At this point, we’ve successfully built a knowledge base and seeded it with relevant threads that contain valuable information about our product. We’ll use this database to inform the enterprise search tool.
Building the enterprise search tool
Now that we’ve built the knowledge database, it’s time to construct the enterprise search tool. We built ours in Retool Workflows. The workflow accepts a technical question in the input block and returns an answer in the return block. There are three intermediate steps between the input and output:
- Perform a vector search for threads similar to the question
- Construct a prompt using the relevant threads
- Generate a completion, or response
To perform the vector search, we’ll convert the question into a vector using the OpenAI embeddings endpoint. We’ll then perform a vector similarity search against our knowledge database to identify the threads that closely align with or match the embedding.
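A minimal sketch of the search step, assuming the same placeholder clients and index as the seeding example:

```python
import openai
import pinecone

openai.api_key = "sk-..."  # placeholder
pinecone.init(api_key="...", environment="us-west1-gcp")  # placeholder
index = pinecone.Index("knowledge-base")  # same hypothetical index

question = "How does the S3 integration handle CORS?"

resp = openai.Embedding.create(model="text-embedding-ada-002", input=question)
question_vector = resp["data"][0]["embedding"]

# Ask Pinecone for the three nearest threads; the returned IDs map back
# to rows (and full text) in the Postgres table.
results = index.query(vector=question_vector, top_k=3)
thread_ids = [match.id for match in results.matches]
```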
Once we’ve retrieved the relevant threads, we can construct a prompt that contains an instruction for the ML model, the question itself, and the relevant context. Our prompt looked something like:
```
You are a Retool support agent whose goal is to resolve a customer's question. The context supplied below are relevant links from our knowledge base.

Question: In the documentation of the S3 integration we noticed that CORS is not required. This potentially means that requests are made from the browser not the server-side. Could you please clarify how this integration works?

Links:
[
  "You can connect to Amazon S3 or S3-compatible services by creating a resource ...",
  "Before you can create an S3 resource, you must update the S3 bucket ...",
]
```
After constructing the prompt, we’re ready to ship it to the OpenAI completion endpoint.
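Putting those two steps together, a sketch of the prompt construction plus the completion call might look like this (the helper and exact wording are illustrative):

```python
import json
import openai

openai.api_key = "sk-..."  # placeholder

def answer(question: str, threads: list[str]) -> str:
    # Assemble the instruction, the question, and the retrieved context.
    prompt = (
        "You are a Retool support agent whose goal is to resolve a "
        "customer's question. The context supplied below are relevant "
        "links from our knowledge base.\n\n"
        f"Question: {question}\n\n"
        f"Links:\n{json.dumps(threads, indent=2)}"
    )
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]
```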
We now have a workflow capable of answering questions using our knowledge database! The workflow essentially functions as an API, accepting questions through a webhook and returning corresponding answers via a return block.
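For example, a client could call the workflow like any other HTTP API. The URL format and header below are placeholders for the values Retool generates when you enable a webhook trigger:

```python
import requests

# Placeholder URL and key; Retool generates real values when you enable
# the webhook trigger on a workflow.
WORKFLOW_URL = "https://api.retool.com/v1/workflows/<workflow-id>/startTrigger"

resp = requests.post(
    WORKFLOW_URL,
    headers={"X-Workflow-Api-Key": "<api-key>"},
    json={"question": "How does the S3 integration handle CORS?"},
)
print(resp.json())
```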
To enhance the user experience, the team created a Retool app as the frontend interface for this system. This enterprise search app, which we call “Eternal Tools,” is actively used by our sales and field engineering teams to quickly answer all sorts of technical questions.
*A sample prompt and response in our “Eternal Tools” question-answering bot*
Results
We’ve been happily surprised by just how helpful this tool has become. Since the bot combines the ML model’s original knowledge base with Retool-specific knowledge, it’s not just some glorified FAQ—it can answer questions our team has never even encountered before. For example, the ML model has an innate understanding of technologies such as Kubernetes, and combined with Retool-specific context, we’ve found it quite skilled at answering esoteric deployment questions.
Moreover, we moved our big rock—enhancing knowledge accessibility and scaling the field engineering team’s context. Building AI-powered enterprise search software has enabled us to shift away from the brittle system of relying largely on individual experts. Now, every team member can leverage a shared knowledge base and is empowered to answer difficult questions in seconds. And, most important, this ability to deliver accurate and more timely technical assistance helps us keep customers informed, supported, and satisfied.
AI has so much potential to make teams more collaborative and productive. If you’re interested in using Retool to build AI-powered tools, learn more or book a demo.
Happy building!
Special thanks to Anoop Kotha, Dave Leo, and David Dworsky who also worked on building our enterprise search support bot.