DENIS KURIA for Strapi

How to Build an AI FAQ System with Strapi, LangChain & OpenAI

Introduction

Frequently Asked Questions (FAQs) give users immediate access to answers for common queries. However, as the volume and complexity of inquiries grow, managing FAQs manually becomes unsustainable. This is where an AI-powered FAQ system comes in.

In this tutorial, you'll learn how to create an AI-driven FAQ system using Strapi, LangChain.js, and OpenAI. This system will allow users to pose queries related to Strapi CMS and receive accurate responses generated by a GPT model.

Prerequisites

To comfortably follow along with this tutorial, you need to have:

  • Node.js and npm installed on your machine.
  • A basic understanding of JavaScript and React.
  • An OpenAI account (you will generate an API key in a later step).

Setting Up the Project

You need to configure the data source, which in this case is Strapi. Then, obtain an OpenAI API key, initialize a React project, and finally install the required dependencies.

Configuring Strapi as the Source for Managing FAQ Data

Strapi provides a centralized platform for managing data, which makes it easier to organize, update, and maintain the FAQ data. It also automatically generates a RESTful API for accessing the content stored in its database.

Install Strapi

If you don't have Strapi installed in your system, proceed to your terminal and run the following command:

npx create-strapi-app@latest my-project

The above command will install Strapi on your system and launch the admin registration page in your browser.

Admin Registration
Fill in your credentials to access the Strapi dashboard.

Create a Collection Type

On the dashboard, under Content-Type Builder, create a new collection type and name it FAQ.

New Collection Type
Then, add a question and an answer field to the FAQ collection. The question field should be of type Text, since it holds plain text input. For the answer field, use the Rich Text (Blocks) type, which supports formatted text.

New Field
Proceed to the Content Manager and add entries to the FAQ collection type. Each entry should have an FAQ question and its corresponding answer. Make sure you publish each entry. Create as many entries as you wish.

Entry

Expose Collection API

Now that you have the FAQ data in Strapi, you need to expose it via an API. This will allow the application you will create to consume it.

To achieve this, proceed to Settings > Users & Permissions Plugin > Roles > Public.

API
Under Permissions, click on Faq, check the find and findOne actions, and save.

FindOne
This will allow you to retrieve your FAQ data via the http://localhost:1337/api/faqs endpoint. Here is how the data looks via a GET request.

Sample Data
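For reference, Strapi v4 wraps REST responses in a data/attributes envelope. Here is an abridged, hypothetical example of what the endpoint returns (your entry IDs and text will differ):

{
  "data": [
    {
      "id": 1,
      "attributes": {
        "Question": "What is Strapi?",
        "Answer": [
          {
            "type": "paragraph",
            "children": [{ "type": "text", "text": "Strapi is an open-source headless CMS." }]
          }
        ]
      }
    }
  ],
  "meta": { "pagination": { "page": 1, "pageSize": 25, "pageCount": 1, "total": 1 } }
}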
Strapi is now configured and the FAQ data is ready for use.

Obtaining the OpenAI API Key

  • Proceed to the OpenAI API website and create an account if you don't have one.
  • Then click on API keys.

OpenAI API Key

  • Create a new secret key. Once generated, copy and save the API key somewhere safe as you will not be able to view it again.

Initializing a React Project and Installing the Required Dependencies

This is the final step needed to complete setting up the project. Create a new directory in your preferred location and open it with an IDE like VS Code. Then run the following command in the terminal:

npx create-react-app faq-bot

The command will create a new React.js application named faq-bot, set up and ready for further development.

Then navigate to the faq-bot directory and run the following command to install all the dependencies you need to develop the FAQ AI application:

yarn add axios langchain @langchain/openai express cors dotenv

If you don't have yarn installed, install it using this command:

npm install -g yarn

You can use npm to install the dependencies, but during development, I found yarn to be better at handling any dependency conflict issues that occurred.

The dependencies will help you achieve the following:

  • axios: To fetch data from the Strapi CMS API and to fetch responses from the Express server.
  • langchain: To implement the Retrieval-Augmented Generation (RAG) part of the application.
  • @langchain/openai: To handle communication with the OpenAI API.
  • express: To create a simple server that handles requests from the frontend.
  • cors: To ensure the server responds correctly to requests from different origins.
  • dotenv: To load the OpenAI API key from a .env file.

Creating the FAQ AI App Backend

The core of your FAQ system will reside in an Express.js server. It will leverage the Retrieval-Augmented Generation (RAG) approach.

RAG
The RAG approach enhances the accuracy and richness of responses. It achieves this by combining information retrieval with large language models (LLMs) to provide more factually grounded answers. A retriever locates relevant passages from external knowledge sources, such as the FAQs stored in Strapi CMS. These passages, along with the user's query, are then fed into the LLM. By leveraging both its internal knowledge and the retrieved context, the LLM generates responses that are more informative and accurate.

The server will be responsible for managing incoming requests, retrieving FAQ data from Strapi, processing user queries, and utilizing RAG for generating AI-driven responses.

Importing the Necessary Modules and Setting Up the Server

At the root of your faq-bot project, create a file and name it server.mjs. The .mjs extension indicates that the JavaScript code is written in the ECMAScript module format, the standard mechanism for modularizing JavaScript code.

Then open the server.mjs file and import the libraries you installed earlier, along with some specific modules from LangChain. Next, define the port on which the server will listen for incoming requests. Finally, configure the middleware functions that handle JSON parsing and CORS.

import express from "express";
import axios from "axios";
import dotenv from "dotenv";
import cors from "cors"; 
import { ChatOpenAI } from "@langchain/openai";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { createRetrievalChain } from "langchain/chains/retrieval";
import { createHistoryAwareRetriever } from "langchain/chains/history_aware_retriever";
import { MessagesPlaceholder } from "@langchain/core/prompts";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { Document } from "langchain/document";

dotenv.config();

const app = express();
const PORT = process.env.PORT || 30080;

// Middleware to handle JSON requests
app.use(express.json());
app.use(cors()); // Add this line to enable CORS for all routes


You will understand what each library does as we move on with the code.

The rest of the code in the "Creating the FAQ AI App Backend" section will reside in the same server.mjs file as the code above. The code in each subsection is a continuation of the code explained in the previous subsection.

Initializing the OpenAI Model

To interact with the OpenAI language model, you'll need to initialize it with your API key and desired settings.

// Instantiate Model
const model = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  temperature: 0.7,
  openAIApiKey: process.env.OPENAI_API_KEY, 
});

The API key is stored as an environment variable. Proceed to the root folder of your project and create a file named .env. Store your OpenAI API key there as follows (and add .env to your .gitignore so the key is never committed):

OPENAI_API_KEY=Your API Key


Temperature is a hyperparameter that controls the randomness of the model's output: values closer to 0 make responses more deterministic, while higher values make them more varied.

Fetching FAQ Data From Strapi

The system relies on pre-defined FAQ data stored in Strapi. Define a function to fetch this data using Axios and make a GET request to the Strapi API endpoint you configured earlier.

// Fetch FAQ data
const fetchData = async () => {
  try {
    const response = await axios.get("http://localhost:1337/api/faqs");
    return response.data;
  } catch (error) {
    console.error("Error fetching data:", error.message);
    return [];
  }
};
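Note that Strapi's REST API paginates results (25 entries per page by default), so a large FAQ collection would only return its first page here. If you run into that, you can request a bigger page, for example:

const response = await axios.get(
  "http://localhost:1337/api/faqs?pagination[pageSize]=100"
);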

After fetching the data, extract the questions and their corresponding answers.

const extractQuestionsAndAnswers = (data) => {
  return data.data.map((item) => {
    return {
      question: item.attributes.Question,
      // Rich Text (Blocks) answers are arrays of blocks; this reads only
      // the first text node of the first block (i.e., the first paragraph).
      answer: item.attributes.Answer[0].children[0].text,
    };
  });
};

The above function maps over the data array and extracts the question and answer attributes from each item.
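Because only the first paragraph is read, answers that span multiple rich-text blocks get truncated. If your answers are longer, a hedged variant along these lines could flatten all text nodes (extractFullAnswer is an illustrative helper, assuming each block carries a children array of text nodes):

// Illustrative helper: concatenates every text node across all blocks.
const extractFullAnswer = (blocks) =>
  blocks
    .map((block) => (block.children || []).map((child) => child.text || "").join(""))
    .join("\n");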

Populating the Vector Store

To efficiently retrieve relevant answers, create a vector store containing embeddings of the FAQ documents.

// Populate Vector Store
const populateVectorStore = async () => {
  const data = await fetchData();
  const questionsAndAnswers = extractQuestionsAndAnswers(data);

  // Create documents from the FAQ data
  const docs = questionsAndAnswers.map(({ question, answer }) => {
    return new Document({ pageContent: `${question}\n${answer}`, metadata: { question } });
  });

  // Text Splitter
  const splitter = new RecursiveCharacterTextSplitter({ chunkSize: 100, chunkOverlap: 20 });
  const splitDocs = await splitter.splitDocuments(docs);

  // Instantiate Embeddings function
  const embeddings = new OpenAIEmbeddings();

  // Create the Vector Store
  const vectorstore = await MemoryVectorStore.fromDocuments(splitDocs, embeddings);
  return vectorstore;
};


The above code uses the questions and answers data to create document objects. It then splits them into smaller chunks, computes embeddings, and constructs a vector store.

The vector store holds representations of the FAQ data, facilitating efficient retrieval and processing within the system.
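If you want to sanity-check the store before wiring up the chains, you can query it directly. A quick, throwaway test (not part of the final server):

// Throwaway check: retrieve the two chunks most similar to a sample query.
const vectorstore = await populateVectorStore();
const results = await vectorstore.similaritySearch("How do I install Strapi?", 2);
console.log(results.map((doc) => doc.pageContent));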

Answering Questions From the Vector Store

With the vector store populated, you need a way to retrieve only the information relevant to a user's query, then use an LLM to generate a good response based on the retrieved information and the chat history.

To achieve this, you will implement a function to create a retriever, define prompts for AI interaction, and invoke a retrieval chain.

// Logic to answer from Vector Store
const answerFromVectorStore = async (chatHistory, input) => {
  const vectorstore = await populateVectorStore();

  // Create a retriever from vector store
  const retriever = vectorstore.asRetriever({ k: 4 });

  // Create a HistoryAwareRetriever which will be responsible for
  // generating a search query based on both the user input and
  // the chat history
  const retrieverPrompt = ChatPromptTemplate.fromMessages([
    new MessagesPlaceholder("chat_history"),
    ["user", "{input}"],
    [
      "user",
      "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation",
    ],
  ]);

  // This chain will return a list of documents from the vector store
  const retrieverChain = await createHistoryAwareRetriever({
    llm: model,
    retriever,
    rephrasePrompt: retrieverPrompt,
  });

  // Define the prompt for the final chain
  const prompt = ChatPromptTemplate.fromMessages([
    [
      "system",
      `You are a Strapi CMS FAQs assistant. Your knowledge is limited to the information I provide in the context.
       You will answer this question based solely on this information: {context}. Do not make up your own answer.
       If the answer is not present in the information, you will respond 'I don't have that information.'
       If a question is outside the context of Strapi, you will respond 'I can only help with Strapi related questions.'`,
    ],
    new MessagesPlaceholder("chat_history"),
    ["user", "{input}"],
  ]);

  // Create the chain that stuffs the retrieved documents into the prompt
  const chain = await createStuffDocumentsChain({
    llm: model,
    prompt: prompt,
  });

  // Create the conversation chain, which will combine the retrieverChain
  // and combineStuffChain to get an answer
  const conversationChain = await createRetrievalChain({
    combineDocsChain: chain,
    retriever: retrieverChain,
  });

  // Get the response
  const response = await conversationChain.invoke({
    chat_history: chatHistory,
    input: input,
  });

  // Log the response to the server console
  console.log("Server response:", response);
  return response;
};


The above code creates a retriever for search queries and configures a history-aware retriever. It then defines prompts for AI interaction, constructs a conversation chain, and invokes it with chat history and input. Finally, it logs and returns the generated response.
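One thing worth noting: answerFromVectorStore rebuilds the vector store, refetching the FAQs and re-embedding every chunk, on every call. That is fine for a demo, but as an optional improvement (not part of the original flow) you could build the store once and reuse it:

// Optional: build the vector store once and reuse it across requests
// (assumes the FAQ data does not change while the server is running).
let vectorstorePromise = null;
const getVectorStore = () => {
  if (!vectorstorePromise) {
    vectorstorePromise = populateVectorStore();
  }
  return vectorstorePromise;
};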

Handling Incoming Requests and Starting the Server

Now that everything needed to handle a user request is in place, expose a POST endpoint, /chat, to handle incoming requests from clients. The route handler will parse the input data, format the chat history, and pass both to the answerFromVectorStore function responsible for answering questions.

// Route to handle incoming requests
app.post("/chat", async (req, res) => {
  const { chatHistory, input } = req.body;

  // Convert the chatHistory to an array of HumanMessage and AIMessage objects
  const formattedChatHistory = chatHistory.map((message) => {
    if (message.role === "user") {
      return new HumanMessage(message.content);
    } else {
      return new AIMessage(message.content);
    }
  });

  const response = await answerFromVectorStore(formattedChatHistory, input);
  res.json(response);
});

// Start the server
app.listen(PORT, () => {
  console.log(`Server is running on http://localhost:${PORT}`);
});

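The handler above assumes every call succeeds. As an optional hardening step (not in the original code), you could wrap the body in a try/catch so a failed OpenAI or Strapi call returns a 500 response instead of an unhandled rejection:

// Optional variant of the /chat route with basic error handling.
app.post("/chat", async (req, res) => {
  try {
    const { chatHistory, input } = req.body;
    const formattedChatHistory = chatHistory.map((message) =>
      message.role === "user"
        ? new HumanMessage(message.content)
        : new AIMessage(message.content)
    );
    const response = await answerFromVectorStore(formattedChatHistory, input);
    res.json(response);
  } catch (error) {
    console.error("Error handling /chat request:", error.message);
    res.status(500).json({ error: "Failed to generate a response." });
  }
});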

Run the following command in your terminal to start the server:

node server.mjs


The server will run on the specified port.

Use Postman or any other API client to test the server. Make sure the payload you send is in this format:

{
    "chatHistory": [
        {
            "role": "user",
            "content": "What is Strapi?"
        },
        {
            "role": "assistant",
            "content": "Strapi is an open-source headless CMS (Content Management System) "
        }
    ],
    "input": "Does Strapi have a default limit"
}
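If you prefer the command line, an equivalent request with curl might look like this:

curl -X POST http://localhost:30080/chat \
  -H "Content-Type: application/json" \
  -d '{"chatHistory": [], "input": "What is Strapi?"}'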

You can change the content and input data to your liking. Below is a sample result after you make the POST request:


"answer": "The default limit for records in the Strapi API is 100."


That is the answer part of the response. The full response contains much more data, including the documents used to answer the question.
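Abridged, the full response from the retrieval chain typically has this shape (values shortened for readability):

{
  "input": "Does Strapi have a default limit?",
  "chat_history": ["..."],
  "context": [
    { "pageContent": "...", "metadata": { "question": "..." } }
  ],
  "answer": "The default limit for records in the Strapi API is 100."
}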

Creating the Frontend of Your System

With the core of your system complete, you need a user interface through which users can interact with it. Under src in your React app, create a ChatbotUI.js file and paste the following code:

import React, { useState, useEffect, useRef } from 'react';
import axios from 'axios';
import './ChatbotUI.css'; // Assuming the CSS file exists

const ChatbotUI = () => {
  const [chatHistory, setChatHistory] = useState([]);
  const [userInput, setUserInput] = useState('');
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState(null);
  const [isExpanded, setIsExpanded] = useState(true); // State for chat window expansion
  const chatContainerRef = useRef(null);

  useEffect(() => {
    // Scroll to the bottom of the chat container when new messages are added
    if (isExpanded) {
      chatContainerRef.current.scrollTop = chatContainerRef.current.scrollHeight;
    }
  }, [chatHistory, isExpanded]);

  const handleUserInput = (e) => {
    setUserInput(e.target.value);
  };

  const handleSendMessage = async () => {
    if (userInput.trim() !== '') {
      const newMessage = { role: 'user', content: userInput };
      const updatedChatHistory = [...chatHistory, newMessage];
      setChatHistory(updatedChatHistory);
      setUserInput('');
      setIsLoading(true);

      try {
        const response = await axios.post('http://localhost:30080/chat', {
          chatHistory: updatedChatHistory,
          input: userInput,
        });
        const botMessage = {
          role: 'assistant',
          content: response.data.answer,
        };
        setChatHistory([...updatedChatHistory, botMessage]);
      } catch (error) {
        console.error('Error sending message:', error);
        setError('Error sending message. Please try again later.');
      } finally {
        setIsLoading(false);
      }
    }
  };

  const toggleChatWindow = () => {
    setIsExpanded(!isExpanded);
  };

  return (
    <div className="chatbot-container">
      <button className="toggle-button" onClick={toggleChatWindow}>
        {isExpanded ? 'Collapse Chat' : 'Expand Chat'}
      </button>
      {isExpanded && (
        <div className="chat-container" ref={chatContainerRef}>
          {chatHistory.map((message, index) => (
            <div
              key={index}
              className={`message-container ${
                message.role === 'user' ? 'user-message' : 'bot-message'
              }`}
            >
              <div
                className={`message-bubble ${
                  message.role === 'user' ? 'user-bubble' : 'bot-bubble'
                }`}
              >
                <div className="message-content">{message.content}</div>
              </div>
            </div>
          ))}
          {error && <div className="error-message">{error}</div>}
        </div>
      )}
      <div className="input-container">
        <input
          type="text"
          placeholder="Type your message..."
          value={userInput}
          onChange={handleUserInput}
          onKeyPress={(e) => {
            if (e.key === 'Enter') {
              handleSendMessage();
            }
          }}
          disabled={isLoading}
        />
        <button onClick={handleSendMessage} disabled={isLoading}>
          {isLoading ? 'Loading...' : 'Send'}
        </button>
      </div>
    </div>
  );
};

export default ChatbotUI;


The above code creates a user interface for interacting with the AI-powered FAQ system hosted on the server. It allows users to send messages, view chat history, and receive responses from the server. It also maintains a state for chat history, user input, loading status, and error handling. When a user sends a message, the component sends an HTTP POST request to the server's /chat endpoint, passing along the updated chat history and user input. Upon receiving a response from the server, it updates the chat history with the bot's message.

Create another file under the src directory, name it ChatbotUI.css, and paste in the following code, which styles the user interface.

.chatbot-container {
    display: flex;
    flex-direction: column;
    background-color: #f5f5f5;
    padding: 5px;
    position: fixed;
    bottom: 10px;
    right: 10px;
    width: 300px;
    z-index: 10;
}

.toggle-button {
    padding: 5px 10px;
    background-color: #ddd;
    border: 1px solid #ccc;
    border-radius: 5px;
    cursor: pointer;
    margin-bottom: 5px;
}

.chat-container {
    height: 300px;
    overflow-y: auto;
}

.message-container {
    display: flex;
    justify-content: flex-start;
    margin-bottom: 5px; /* Reduced margin for tighter spacing */
}

.message-bubble {
    max-width: 70%;
    padding: 5px; /* Reduced padding for smaller bubbles */
    border-radius: 10px;
}

.user-bubble {
    background-color: #007bff;
    color: white;
}

.bot-bubble {
    background-color: #f0f0f0;
    color: black;
}

.input-container {
    align-self: flex-end;
    display: flex;
    align-items: center;
    padding: 5px;
}

.input-container input {
    flex: 1;
    padding: 5px;
    border: 1px solid #ccc;
    border-radius: 5px;
    margin-right: 10px;
}

.input-container button {
    padding: 10px 20px;
    background-color: #007bff;
    color: white;
    border: none;
    border-radius: 5px;
    cursor: pointer;
}


The above code defines the layout and styling for the user interface. It positions the chat interface fixed at the bottom right corner of the screen, styles message bubbles, and formats the input field and send button for user interaction.

In the App.js file, render the user interface.

import React from 'react';
import ChatbotUI from './ChatbotUI';

const App = () => {
  return (
    <div>
      <ChatbotUI />
    </div>
  );
};

export default App;

You are now done creating the AI-powered FAQ system.

Open a new terminal in the same directory where you ran your server and start your React app using the following command:

yarn start

You can now start asking the system questions about Strapi CMS. The system's knowledge depends on the FAQ data you have stored in Strapi.

Testing the System

The following GIF shows how the system responds:

Results
When asked about a topic outside Strapi, it reminds the user that it only deals with Strapi CMS. And if an answer is not present in the FAQ data stored in Strapi, it responds that it does not have that information.

Conclusion

Congratulations on creating an AI & Strapi-powered FAQ system. In this tutorial, you've learned how to leverage the strengths of Strapi, LangChain.js, and OpenAI.

The system integrates seamlessly with Strapi, allowing you to effortlessly manage your FAQ data through a centralized platform. LangChain.js facilitates Retrieval Augmented Generation (RAG), enhancing the accuracy and comprehensiveness of the system's responses. OpenAI provides the large language model that the system uses to generate informative and relevant answers to user queries.
