
How to Build and Integrate a React Chatbot with LLMs: A React ChatBotify Guide (Part 4)

Introduction

[Demo GIF]

Welcome back to the fourth installment of the React ChatBotify series! In the rapidly evolving landscape of technology, Large Language Models (LLMs) have become a popular topic, and it should come as no surprise that they are seeing increased adoption in everyday services. In this tutorial, we will explore integrating with Gemini and dive into how we can build a chatbot empowered by LLMs!

A Quick Thought

In the previous tutorial, we briefly took a look at how we can build an effective FAQ bot to address commonly asked questions. However, we also encountered issues such as hard-coded responses and the inability to respond to unforeseen queries.

As we embark on the fourth part of our tutorial series, we will shift our focus to integrating our chatbot with LLMs to provide more dynamic responses. In the upcoming fifth and final installment, we will delve further into the application of Retrieval Augmented Generation (RAG). This step will fully transition our FAQ bot into an LLM-empowered solution, enriched with contextual information and capable of answering questions with a personality!

It is important to note that this segment assumes you already have a React ChatBotify chatbot set up. If you have not, do visit this guide first.

Gemini AI


Among the many emerging LLMs, ChatGPT and Gemini are two of the most talked about. For this tutorial, I will be using Gemini as an example, since it generously provides free API usage. To OpenAI's credit, ChatGPT also offers reasonably low-cost APIs (which I use myself!), so don't hesitate to give them a try in your free time.

All that said, before proceeding with the rest of this tutorial, please go ahead and obtain your Gemini API key by following the instructions on the Gemini website. If you're stuck and require assistance, feel free to reach out.

The Basic Bot

Armed with your API key, let us quickly get a basic bot up and running. In fact, if you already have the chatbot set up from part 2 of this series, you can easily build off it! However, to keep this tutorial self-contained, let's assume we have a clean setup with a bot that greets the user. This can be achieved with the following code snippet:

// MyChatBot.js
import React from "react";
import ChatBot from "react-chatbotify";

const MyChatBot = () => {
  const flow = {
    start: {
      message: "Hello, I will become sentient soon!"
    }
  }
  return (
    <ChatBot flow={flow}/>
  );
};
export default MyChatBot;

At this point, we have a very excited chatbot that unfortunately does nothing more than declare its wish to become sentient. Let's make its wish come true!

Initialize Model

The following code snippet is taken from Gemini's quickstart guide as of this writing:

import { GoogleGenerativeAI } from "@google/generative-ai";

// Access your API key (see "Set up your API key" above)
const genAI = new GoogleGenerativeAI(API_KEY);

async function run() {
  // For text-only input, use the gemini-pro model
  const model = genAI.getGenerativeModel({ model: "gemini-pro"});

  const prompt = "Write a story about a magic backpack."

  const result = await model.generateContent(prompt);
  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();

We will copy this snippet into our own chatbot above, replacing API_KEY with the key you obtained earlier. The result looks like this:

// MyChatBot.js
import React from "react";
import ChatBot from "react-chatbotify";
import { GoogleGenerativeAI } from "@google/generative-ai";

const MyChatBot = () => {
  const genAI = new GoogleGenerativeAI("YOUR_API_KEY");
  async function run() {
    // For text-only input, use the gemini-pro model
    const model = genAI.getGenerativeModel({ model: "gemini-pro"});

    const prompt = "Write a story about a magic backpack."

    const result = await model.generateContent(prompt);
    const response = await result.response;
    const text = response.text();
    console.log(text);
  }

  const flow = {
    start: {
      message: "Hello, I will become sentient soon!"
    }
  }
  return (
    <ChatBot flow={flow}/>
  );
};
export default MyChatBot;

We're getting somewhere, but the model hasn't been integrated with our chatbot! In order to achieve that, we are going to make the following changes:

  • Add another looping block to our conversation flow that will call the run function to handle interactions with the model
  • Modify the run function to take in a prompt parameter provided by the user, remove the hard-coded prompt, and return the generated text

// MyChatBot.js
import React from "react";
import ChatBot from "react-chatbotify";
import { GoogleGenerativeAI } from "@google/generative-ai";

const MyChatBot = () => {
  const genAI = new GoogleGenerativeAI("YOUR_API_KEY");
  async function run(prompt) {
    // For text-only input, use the gemini-pro model
    const model = genAI.getGenerativeModel({ model: "gemini-pro"});

    const result = await model.generateContent(prompt);
    const response = await result.response;
    const text = response.text();
    return text;
  }

  const flow = {
    start: {
      message: "Hello, I am sentient now, talk to me!",
      path: "model_loop",
    },
    model_loop: {
      message: async (params) => {
        return await run(params.userInput);
      },
      path: "model_loop"
    },
  }
  return (
    <ChatBot flow={flow}/>
  );
};
export default MyChatBot;

At this point, you have an initial working integration! If you would like to see the code in action, you may copy and paste the snippet above into the playground (remember to provide your API key).

The current integration with Gemini is cool and all, but you may have noticed that the chatbot shows each response as one entire block of text. This may be fine for short messages, but it can get a little problematic for long paragraphs. After all, it's not great to have your users wait forever! Wouldn't it be nice if we could show users parts of the response first?

Stream Responses

Message streaming is a feature introduced in version 1.3.0 of React ChatBotify. It is particularly useful for integrations with LLMs, where responses may be streamed in parts. Let's take a look at how we can modify our current approach to support streaming responses.

Referring once again to Gemini's quickstart guide, the following snippet is presented for streaming responses to support faster interactions:

const result = await model.generateContentStream([prompt, ...imageParts]);

let text = '';
for await (const chunk of result.stream) {
  const chunkText = chunk.text();
  console.log(chunkText);
  text += chunkText;
}

We will thus make the following modifications to our code to utilize this new approach:

  • Add a second streamMessage parameter (provided via params) to the run function
  • Replace model.generateContent with model.generateContentStream
  • Add a for-loop to iterate through the stream response chunks and call params.streamMessage with the currently completed text to stream the message into the chatbot
  • Modify the model_loop block to pass params.streamMessage as the second argument to the run function

// MyChatBot.js
import React from "react";
import ChatBot from "react-chatbotify";
import { GoogleGenerativeAI } from "@google/generative-ai";

const MyChatBot = () => {
  const genAI = new GoogleGenerativeAI("YOUR_API_KEY");
  async function run(prompt, streamMessage) {
    // For text-only input, use the gemini-pro model
    const model = genAI.getGenerativeModel({ model: "gemini-pro"});

    const result = await model.generateContentStream(prompt);
    let text = '';
    for await (const chunk of result.stream) {
      const chunkText = chunk.text();
      text += chunkText;
      streamMessage(text);
    }
    return text;
  }

  const flow = {
    start: {
      message: "Hello, I am sentient now, talk to me!",
      path: "model_loop",
    },
    model_loop: {
      message: async (params) => {
        return await run(params.userInput, params.streamMessage);
      },
      path: "model_loop"
    },
  }
  return (
    <ChatBot flow={flow}/>
  );
};
export default MyChatBot;

Go ahead and give the modified code snippet above a run in the playground. We've done everything right, but it still appears a bit strange, doesn't it? The messages are no longer sent as a single block, but the text still arrives in chunks instead of appearing character by character.

We are seeing this behavior because the stream response provides us text in chunks. If we want the text to show up character by character, we can manually handle the stream logic ourselves. This is slightly more involved and we won't cover it in full in this guide, but a rough sketch of the idea follows, and you may refer to the live example that I have provided here.
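
Here's a minimal sketch of the idea (my own illustration under assumptions, not the exact logic from the live example): the run function buffers each chunk and emits one character at a time, calling streamMessage with the text completed so far after a short delay.

async function run(prompt, streamMessage) {
  // For text-only input, use the gemini-pro model
  const model = genAI.getGenerativeModel({ model: "gemini-pro"});

  const result = await model.generateContentStream(prompt);
  let text = '';
  for await (const chunk of result.stream) {
    // emit each character of the chunk individually
    for (const char of chunk.text()) {
      text += char;
      streamMessage(text);
      // a short pause (10ms is an arbitrary choice) so characters appear one by one
      await new Promise((resolve) => setTimeout(resolve, 10));
    }
  }
  return text;
}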

Note that for demonstration purposes in this tutorial, the API key is directly included in the chatbot. While convenient, it is generally bad practice to expose your API keys on the client side. Instead, the best practice is to employ a server-side component to manage API calls and have your client communicate with that server-side component instead.
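
As a rough illustration of that pattern, here's a minimal sketch of a server-side proxy. Express, the /api/chat route, and the port are all assumptions made for this example; any backend stack works the same way. The key point is that the API key lives in a server-side environment variable and never reaches the browser.

// server.js (a hypothetical proxy, sketched with Express)
import express from "express";
import { GoogleGenerativeAI } from "@google/generative-ai";

const app = express();
app.use(express.json());

// the key now lives on the server, e.g. in an environment variable
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

app.post("/api/chat", async (req, res) => {
  // same text-only model call as before, just moved server-side
  const model = genAI.getGenerativeModel({ model: "gemini-pro"});
  const result = await model.generateContent(req.body.prompt);
  const response = await result.response;
  res.json({ text: response.text() });
});

app.listen(3001);

The chatbot's run function would then call this endpoint instead of the Gemini SDK, for example with fetch("/api/chat", { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify({ prompt }) }).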

Simulated Stream Responses

Recognizing the aesthetics and popularity of streaming responses, React ChatBotify also provides simStream and streamSpeed options for developers who would like to simulate the streaming of text responses to users. If you are interested in this option, you may refer to the example found here.
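
As a quick sketch, this might look like the snippet below (the exact option structure may differ between versions, so do check the documentation): simStream enables simulated streaming, while streamSpeed controls the delay between characters, with 30 here being an arbitrary value in milliseconds.

<ChatBot
  flow={flow}
  options={{ botBubble: { simStream: true, streamSpeed: 30 } }}
/>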

Conclusion

In this tutorial, we've explored the exciting realm of integrating LLMs with React ChatBotify. Using Gemini as an example, we have seen how easily we can empower our chatbots with the ability to provide dynamic responses.

As we conclude the fourth installment, the path ahead promises even more innovation. In the upcoming final segment, we'll explore Retrieval Augmented Generation (RAG) to infuse our chatbot with personality and contextual awareness, elevating it to a truly engaging conversational partner. Thank you for reading, and look out for more content!
