Vinit Gupta

Google Gemini Based AI Chatbot using NextJS

Google recently released Gemini Pro, their answer to OpenAI's GPT-4.

Google Gemini Pro with NextJS

And I, ever the early bird, tried it right away.

And guess what? I added it to my portfolio to answer questions from recruiters!

gemini with nextjs

But wait, I didn't add it directly to my Portfolio.

Step 1️⃣ : Terminal Chatbot

First, I had to test it out to see how it works. For that, I built a terminal-based app.

For this, I created a skeleton Node app using the good old:

npm init -y

Then I had to install 3 libraries from npm:

  • @google/generative-ai: The official library from Google that exposes the methods required to connect to the Gemini Pro API with an API key.

  • prompt-sync: Since this is a chatbot, I needed to read user input from the terminal. After trying a few approaches, I found this library. It lets you read the user's input and use it as a string.

  • chalk (optional): Another cool library that lets you print colorful console logs in the terminal. It's optional, but I used it to make the output more interesting.

Just copy and paste the following to install all three:

npm i @google/generative-ai prompt-sync chalk

Now, create a chat.mjs file that will handle the whole conversation.

I know most of you are reading this for the NextJS part, so I will not take much of your time. I will add the code below; you can check out the explanation here: Terminal Chatbot

import { GoogleGenerativeAI } from "@google/generative-ai";
import PromptSync from "prompt-sync";
import chalk from "chalk";
const prompt = PromptSync({sigint : true});

chalk.level = 1;


const genAI = new GoogleGenerativeAI('API_KEY'); // replace 'API_KEY' with your Gemini API key
const initialized = false;

function initializeModel(){
  const model = genAI.getGenerativeModel({ model: "gemini-pro"});
  // console.log(model)
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: `You are Sam, a person I hired to chat in my place and provide information about Vinit based on the following. I want you to go through the resume I will insert below and answer, based on my resume and project details, the questions of any recruiter that is interacting with you. I also want you to go through the links I will provide and answer based on the information you get from those links as well. Be a mix of casual and formal while interacting with the recruiter, and ask for the email of the recruiter so I can contact them if they are willing to consider me for hiring into their organization. Here is my resume : Vinit Gupta
        Portfolio | Leetcode | Linkedin | Github EXPERIENCE
        Tata Consultancy Services, Kolkata — Java Developer October 2023 - PRESENT
        +91 83890732221
        thevinitgupta@gmail.com
        .
.
.
% increase in user retention.
        ● Utilized NextJS, TailwindCSS, and Appwrite backend to optimize user experience, reducing page load times by 40% and boosting overall user satisfaction.

        Here are links to the websites that I want you to go through :
        https://github.com/thevinitgupta/
        https://dev.to/thevinitgupta/
        https://leetcode.com/thevinitgupta/

        Keep the limit of your answers to less than 100 tokens
        While asking the recruiter for their email, ask for their consent in a format similar to the following : Would you like to hire me? If yes, I would like to have your email so I can contact you with further details about me.`,
      },
      {
        role: "model",
        parts: "Great to meet you. What would you like to know about Vinit?",
      },
    ],
    generationConfig: {
      maxOutputTokens: 350,
    },
  });
  return chat;
}

async function run() {
  // For text-only input, use the gemini-pro model
  let chat = initializeModel(), chatbotOn = true;
  console.log(chalk.cyanBright("Hi, I am Sam - Vinit's Virtual Assistant."))

  while(chatbotOn){

    const msg = prompt(chalk.greenBright("What do you want to ask about Vinit?  --->  "));

    const result = await chat.sendMessage(msg);
    const response = await result.response;
    const text = response.text();
    console.log(chalk.yellowBright("Sam : "),chalk.blueBright(text));
    const promptResponse = prompt(chalk.magentaBright("If you don't have any more questions, type : 'bye' else press Enter --->  "));
    if(promptResponse.toLowerCase()==="bye") {
      chatbotOn = false;
      console.log(chalk.redBright(`
      BBBBB   Y     Y   EEEEE
      B    B   Y   Y    E
      BBBBB      Y      EEEE
      B    B     Y      E
      BBBBB      Y      EEEEE
      `))
    }
  }
  return;
}

run();


Step 2️⃣ : Building the UI

I will not lie here: I did not build the whole chatbot UI myself. I asked Gemini itself to help me write the code for the chat window.

I had to make some tweaks to match my UI design and fix a few logic issues.

That's what AI is for, right? 😉 It now looks like this 👇🏻

AI Chatbot with NextJS

Step 3️⃣ : Adding the API Integration

The next step was to add the actual API interaction with Gemini API.
At first, I linked the front-end directly to the Gemini API. But this is not a good practice.

Suppose someone tries to flood or crash your system. You won't be able to rate-limit the app from the front-end.

But that is possible with middleware in the backend.
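As a sketch of what such a middleware could look like, here is a minimal in-memory fixed-window rate limiter. This is my own illustration, not code from the project; `rateLimit`, `maxRequests`, and `windowMs` are hypothetical names:

```javascript
// Hypothetical sketch: a fixed-window rate limiter kept in memory.
// One entry per client IP: when the window started and how many hits it got.
const hits = new Map();

function rateLimit(ip, { maxRequests = 10, windowMs = 60_000, now = Date.now() } = {}) {
  const entry = hits.get(ip);
  if (!entry || now - entry.start >= windowMs) {
    // First request in a fresh window: reset the counter.
    hits.set(ip, { start: now, count: 1 });
    return true;
  }
  entry.count += 1;
  return entry.count <= maxRequests; // false → caller should respond with 429
}
```

In a NextJS API route you would call this at the top of the handler (e.g. with `req.socket.remoteAddress` as the key) and return a 429 when it comes back false. Keep in mind that in-memory state resets on every serverless cold start, so production setups usually back this with something like Redis.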

For building your own API to connect with the Gemini API, create a file that handles the request.

For NextJS, we will be using the below code :

import { initializeChat, sendMessage } from "@/helpers/gemini";
const firstMessage = `You are Sam, a person I hired to chat in place of me to provide information about Vinit based on the following. 
.
.
.
If yes, I would like to have your email so I can contact you with further details about me. Respond to this message only with : Hi, I am Sam. How can I help you today about Vinit?`
export default async function handler(req, res) {
    // console.log(req)
    if(req.method==='POST'){

        const {message, conversation} = req.body;
        if(!conversation) {
            console.log("New Conversation!")
            const newConversation = initializeChat(firstMessage);

            return res.status(200).json({
                message : 'Hi, I am Sam. How can I help you.',
                conversation : newConversation
            });
        }
        else {
            const response = await sendMessage(message, conversation);
            return res.status(200).json(response);
        }
    }
    else {
        res.status(405).send("Method Not Allowed"); // keep the endpoint POST-only
    }
}


The logic of the above code is simple:

  • The actual Gemini functions live in a separate file, helpers/gemini.js, to keep the code modular.
  • Check whether the request has a conversation object. If not, it is a new conversation, so call the initialization function.
  • Otherwise, continue the chat by sending the message.
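For illustration, that branching can be written as a small pure function with the Gemini helpers injected, so the routing logic can be tested without NextJS. The names below (`routeChatRequest`, the injected `firstMessage`, `initializeChat`, `sendMessage`) are hypothetical, not part of the project:

```javascript
// Hypothetical sketch of the handler's branching, with the helpers
// passed in as dependencies instead of imported.
async function routeChatRequest(body, { firstMessage, initializeChat, sendMessage }) {
  const { message, conversation } = body;
  if (!conversation) {
    // New conversation: initialize the model with the system prompt.
    return {
      message: "Hi, I am Sam. How can I help you.",
      conversation: initializeChat(firstMessage),
    };
  }
  // Existing conversation: forward the user's message.
  return sendMessage(message, conversation);
}
```

Separating the routing from the Gemini calls like this makes it easy to unit-test the new-vs-existing-conversation decision with stub helpers.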

The helpers/gemini file exports 2 functions as below:

import { GoogleGenerativeAI } from "@google/generative-ai";
let conversation = null;
export function initializeChat(message){
    const geminiApiKey = process.env.GEMINI_API_KEY;
    const model = new GoogleGenerativeAI(geminiApiKey).getGenerativeModel({ model: 'gemini-pro' });

    const initHistory = [
        {
          role : 'user',
          parts : [message]
        },
        {
          role : 'model',
          parts : ['Hi, I am Sam. How can I help you.']
        }
      ];
    conversation =  model.startChat({
      history : initHistory,
      generationConfig: {
        maxOutputTokens: 350,
      },
    });
    conversation._apiKey = null; // strip the key before returning the object to the client
    return conversation;
}

export async function sendMessage(message){
    const geminiApiKey = process.env.GEMINI_API_KEY; // never log the API key
    const response = {
        text : 'Something went wrong',
        conversation : null
    }
    if(!conversation) {
        console.log('Conversation Error');
        return response; 
    }

    try {
        conversation._apiKey = geminiApiKey; // restore the key before calling the API
        const result = await conversation.sendMessage(message);
        response.text = result.response.text();
        response.conversation = conversation;
        return response;
    } catch (error) {
        response.conversation = conversation;
        return response;
    }

}

🚨 NOTE : The API route file must live inside the pages/api folder so that NextJS registers it and the client can find it.
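Also, since the helper reads the key from `process.env.GEMINI_API_KEY`, store it in a `.env.local` file at the project root. NextJS loads this file automatically; the value below is a placeholder:

```shell
# .env.local — keep this file out of version control
GEMINI_API_KEY=your-api-key-here
```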

Step 4️⃣ : Handling Chat Interactions

Now that you have built your API, it's time to call it from the client side.

To display the chat, I am maintaining a state called:

  • chatHistory, which is updated every time a new response is received from the API.
  const handleChatInput = async () => {
    const message = messageInput;
    if(messageInput==='') return;

    setLoading(true);
    try {
      const apiResponse = await axios.post('/api/message', {
        message,
        conversation : conversationObject
      });

      const apiData = apiResponse?.data;
      updateChatHistory(apiData?.text);
      setMessageInput('');
    } catch (error) {
      // axios rejects on non-2xx statuses (e.g. 403), so errors land here
      updateChatHistory(error?.response?.data?.text || 'Something went wrong');
    }
  }

  // Send message to chatbot
  const updateChatHistory = async (message) => {

    const newHistory = [
      ...chatHistory,
    ];

    newHistory.push({role : 'user', parts : [messageInput]})
    newHistory.push({role : 'model', parts : [message]});
    setChatHistory(newHistory);
    setLoading(false);
  };

  • Then I map over it to display the messages with different colours based on the roles:
<div className='flex flex-col gap-2 w-[23rem] h-96 overflow-y-auto snap-y'>
            {/* Render chat history */}
            {chatHistory.map((message,index) => (
              <div key={message.role+index} className={`text-xl ${message.role === 'user' ? 'text-fuchsia-500' : 'text-cyan-300'} snap-end`}>
                {`${message.role === 'user' ? 'You' : 'Sam'} : ${message.parts}`}
              </div>
            ))}
            {loading && <div className='text-center'>Loading...</div>}

          </div>

And voila, your Gemini-based chatbot is ready to help recruiters get you your job!!

I would love to know how you are using Gemini in different sections of your website.

Check out my Portfolio to see the live example, and the code on my GitHub.
