DEV Community

Vishnu Sivan

Build your custom chatbot using chatgpt

ChatGPT is the most powerful language model ever built. It is an amazing tool that can be used in various ways to enhance your productivity and learning. ChatGPT can provide quick answers to trivial questions as well as in-depth explanations of complex concepts. It became a trending AI tool in just a matter of weeks, hitting 100 million monthly active users in January 2023, according to data from Similarweb.

According to Drift’s 2021 State of Conversational Marketing report, 74% of companies want to implement conversational Artificial Intelligence (AI) tools to streamline workflows. Clearly, conversational AI has a wide range of applications, from customer service to virtual assistants, and ChatGPT is a well-suited example as it can generate human-like responses to user inputs.

In this article, we will learn more about ChatGPT and build a custom chatbot using Python and React.

Getting Started

Table of contents

  • What is ChatGPT
  • How to use ChatGPT
  • GPT3 integration in Python
  • Gradio based frontend creation
  • Build a custom chatbot
  • Applications of ChatGPT

What is ChatGPT

ChatGPT was created by OpenAI, a research company co-founded by Elon Musk and Sam Altman. The company launched ChatGPT on Nov. 30, 2022. It is one of the most sophisticated language processing AI models with 175 billion parameters.

ChatGPT can generate anything with a language structure, which includes answering questions, writing essays and poetry, translating languages, summarizing long texts, and even writing code snippets!

ChatGPT uses state of the art AI techniques to understand and generate human-like responses to a wide range of questions and prompts. It uses a combination of natural language processing techniques and machine learning algorithms to understand the meaning behind words and sentences and to generate responses that are appropriate to the context of the conversation. It has been trained on a massive corpus of text data, including articles, books, and websites, enabling it to provide precise and insightful answers.

How to use ChatGPT

You can access ChatGPT by visiting chat.openai.com, creating an OpenAI account, and agreeing to the ChatGPT terms.

Create an OpenAI account

Go to chat.openai.com and register for an account with an email address. You need an OpenAI account to log in and access ChatGPT.

Accept ChatGPT terms

Once you have logged into your OpenAI account on the website, go through the terms and conditions for ChatGPT and click on Next. Click on Done to finish.

Start writing prompts

ChatGPT is ready to use now. You can type any prompt into the textbox at the bottom of the page and press Enter to submit it. The AI chatbot will then provide helpful answers to your prompts.

You can use any of the example prompts mentioned on the site, or write your own, for example "Explain quantum computing in simple terms".

GPT3 integration in Python

The OpenAI Python library provides convenient access to the OpenAI API from applications written in the Python language.

Follow the below steps to integrate GPT3 in Python.

  • An API token is required to access OpenAI’s services.
  • Open platform.openai.com.
  • Click on your name/icon in the top right corner of the page and select “API Keys”, or click on the link — Account API Keys — OpenAI API.
  • Click on the “Create new secret key” button to create a new OpenAI key.

  • Create and switch to a virtual environment before installing dependencies for openai and other libraries.

python -m venv venv
source venv/bin/activate  # for linux
venv\Scripts\activate     # for windows
  • Install the OpenAI Python library using pip.
pip install openai


  • Create a new python file chatgpt_demo.py and add the following code to it.
import openai
openai.api_key = "" # your token goes here

def get_model_reply(query, context=[]):
    # combines the new question with a previous context
    context += [query]

    # given the most recent context (last 4096 characters),
    # continue the text up to 2048 tokens (~8192 characters)
    completion = openai.Completion.create(
        engine='text-davinci-002',
        prompt='\n\n'.join(context)[-4096:],
        max_tokens = 2048,
        temperature = 0.4, # lower values make responses more deterministic
    )

    # append response to context
    response = completion.choices[0].text.strip('\n')
    context += [response]

    # list of (user, bot) responses. We will use this format later
    responses = [(u,b) for u,b in zip(context[::2], context[1::2])]

    return responses, context

query = 'Which is the largest country by area in the world?'
responses, context = get_model_reply(query, context=[])

print('<USER> ' + responses[-1][0])
print('<BOT> ' + responses[-1][1])
  • Run the script using the following command,
python chatgpt_demo.py

openai.Completion.create() returns the GPT3 response for your prompt. In the above example, we asked which is the largest country by area in the world, and it came up with a sharp answer to the query.
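The pairing logic inside get_model_reply can be checked in isolation. The context list below is made up for illustration; the real one accumulates actual queries and completions:

```python
# context alternates user queries (even indices) and bot replies (odd indices)
context = [
    "Which is the largest country by area in the world?",
    "Russia.",
    "Which is the second largest?",
    "Canada.",
]

# pair each user entry with the bot reply that follows it
responses = [(u, b) for u, b in zip(context[::2], context[1::2])]
print(responses[-1])  # the most recent (user, bot) exchange
```

This list-of-tuples format is also what the Gradio Chatbot component consumes in the next section.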

Gradio based frontend creation

Gradio is an open-source Python library used to build AI-related web applications. It allows developers to create user-friendly and customizable interfaces.

Follow the below steps to create a Gradio-based frontend for displaying the content.

  • Install the gradio python library using pip.
pip install gradio
  • Add the following code to the chatgpt_demo.py file.
import gradio as gr

# defines a basic dialog interface using Gradio
with gr.Blocks() as dialog_app:
    chatbot = gr.Chatbot()
    state = gr.State([])

    with gr.Row():
        txt = gr.Textbox(
            show_label=False, 
            placeholder="Enter text and press enter"
        ).style(container=False)

    txt.submit(get_model_reply, [txt, state], [chatbot, state])

# launches the app in a new local port
dialog_app.launch()
  • Run the app using the following command,
python chatgpt_demo.py


Gradio will open a new port and launch an interactive app when dialog_app.launch() is executed. Running the examples provided in the previous section should return the same expected results. Messages entered by the user will appear on the right-hand side, while messages generated by OpenAI will be displayed on the left-hand side.

Build a custom chatbot

In this section, we will create a custom chatbot using GPT3, with Python as the backend and React as the frontend.

1.1 Create a data file

The first step is to create a data file which contains information about a specific topic. In this example, we use quantum-physics-related content. Scrape the data from the link — scienceexchange quantum-physics and add it inside a text file named data.txt under the data folder.

Ensure that you have added the below text as a prefix in the data.txt file.

I am VBot, your personal assistant. I can answer questions based on general knowledge.

Answer questions based on the passage given below.
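For illustration, data.txt could look like the sketch below. The passage at the end is placeholder text; the real file holds the content scraped from the link above:

```text
I am VBot, your personal assistant. I can answer questions based on general knowledge.

Answer questions based on the passage given below.

Quantum physics is the study of matter and energy at the most fundamental level. (placeholder passage)
```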

1.2 Create the environment

Create and switch to a virtual environment before installing dependencies for openai and other libraries.

python -m venv venv
source venv/bin/activate  # for linux
venv\Scripts\activate     # for windows

1.3 Install dependencies

Install the openai, fastapi, uvicorn, itsdangerous and python-dotenv libraries using pip.

pip install openai fastapi uvicorn itsdangerous python-dotenv

1.4 Create environment (.env) file

Create a file named .env and add environment variables for openai api key and secret key to it.

export OPENAI_API_KEY="openai-key"
export SECRET_KEY="32-length-random-text" 

1.5 Openai GPT3 backend integration

Create a new file main.py and add the following code to integrate the Openai GPT3 model for your query responses.

import os
import openai
from dotenv import load_dotenv

load_dotenv(".env")
openai.api_key = os.getenv("OPENAI_API_KEY")

def read_file(file):
    # read the whole file into a single space-joined string
    content = ""
    with open(file, 'r') as f:
        for line in f.readlines():
            content = content + " " + line.strip()
    return content

session_prompt = read_file("data/data.txt")
restart_sequence = "\n\nUser:"
start_sequence = "\nVBot:"

def answer(ques, chat_log = None):
    max_try = 1
    try_count = 0
    while True:
        try:
            prompt_text = f'{chat_log}{restart_sequence} {ques}{start_sequence}'
            print(prompt_text)
            response = openai.Completion.create(
                model = "text-davinci-002",
                prompt = prompt_text,
                temperature = 0.8,
                max_tokens = 500,
                top_p = 1,
                frequency_penalty = 0.0,
                presence_penalty = 0.6,
                stop = ["User:", "VBot:"]
            ) 
            # print(response)
            ans = response['choices'][0]['text']
            return str(ans)
        except Exception:
            try_count = try_count + 1
            if try_count >= max_try:
                return 'GPT3 error'
            print('Error')

def checkViolation(ans):
    response = openai.Moderation.create(input=ans)
    output = response["results"][0]["flagged"]
    return output

def gpt3_logs(question, answer, chat_log=None):
    if chat_log is None:
        chat_log = session_prompt
    return f'{chat_log}{restart_sequence} {question}{start_sequence}{answer}'

def message_check(message, chat_log):
    flag_user = checkViolation(message)
    if(not flag_user):
        ans = answer(message,chat_log)
        flag_bot = checkViolation(ans)
        if(flag_bot):
            ans = "My response violates OpenAI's Content Policy."
    else:
        ans = "Your message violates OpenAI's Content Policy."
    return ans

def main(msg,chat):
    ans = message_check(msg,chat)
    print("VBot: ", str(ans))
    return ans

if __name__ == "__main__":
    ans = main("What is your name",chat=None)

Understand the code:

  • Load the environment variables and set the openai.api_key.
  • Create a method read_file(file) to read the contents from the text file.
  • Initialize the session_prompt variable with the data file content using the read_file() method. Also, initialize the start_sequence and restart_sequence values as “\nVBot:” and “\n\nUser:” respectively.
  • Create a method answer(ques, chat_log). It takes the question and chat log as arguments and returns a response from OpenAI with the help of the openai.Completion.create() method.
  • The message_check() and checkViolation() methods verify the OpenAI requests and responses using the openai.Moderation.create() method.
  • The gpt3_logs() method appends each request and response to the chat log in a predefined format.
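The prompt format produced by gpt3_logs can be checked in isolation. The session prompt is shortened here for illustration:

```python
restart_sequence = "\n\nUser:"
start_sequence = "\nVBot:"
session_prompt = "I am VBot, your personal assistant."  # shortened for illustration

def gpt3_logs(question, answer, chat_log=None):
    # append one question/answer exchange to the running chat log
    if chat_log is None:
        chat_log = session_prompt
    return f'{chat_log}{restart_sequence} {question}{start_sequence}{answer}'

log = gpt3_logs("What is your name?", " I am VBot.")
```

Each call appends one exchange, so the chat log stored in the session grows turn by turn.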

1.6 Create and integrate a fastapi server

Create an API server to make the GPT3 integration available via HTTP requests. FastAPI is used to create the APIs.

Create a file named app.py and add the following code to it.

from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from starlette.middleware.sessions import SessionMiddleware
import uvicorn
import os
from dotenv import load_dotenv
from main import gpt3_logs, main

app = FastAPI()

load_dotenv(".env")
secret_key_from_env = os.getenv("SECRET_KEY")

app.add_middleware(
    SessionMiddleware,
    secret_key = secret_key_from_env
)

app.add_middleware(
    CORSMiddleware,
    allow_origins = ["*"],
    allow_credentials = True,
    allow_methods = ["*"],
    allow_headers = ["*"],
)

@app.get("/")
def home():
    return "Hit from VBot"

@app.get("/api/response")
async def get_response(message: str, request: Request):
    chat_log = request.session.get('chat_log')
    if chat_log is None:
        request.session['chat_log'] = gpt3_logs('', '', chat_log)
        chat_log = request.session.get('chat_log')
    response = main(message, chat_log)
    if len(response) != 0:
        request.session['chat_log'] = gpt3_logs(message, response, chat_log)
        return response
    else:
        return "Oops! Something went wrong"

if __name__ == "__main__":
    uvicorn.run("app:app",port = 8000,reload=True)

Understand the code:

  • Import fastapi library methods and middlewares. Create a basic fastapi object app as app = FastAPI().
  • Load the environment file and register the secret key with the session middleware. Also, configure the CORS middleware.
  • Create a get route “/” which will send a response as “Hit from VBot”.
  • Create a get route “/api/response” which receives the user's message, sends it to the main() method, and returns the response to the user if it is non-empty.
  • Run the uvicorn server using uvicorn.run(“app:app”,port = 8000,reload=True).

1.7 Run backend server

  • Run the backend python server using the following command.
python app.py


2.1 Create the frontend app

Create a react application using create-react-app cli.

npx create-react-app chatbot-frontend

2.2 Install Tailwind CSS

  • Install tailwindcss and generate tailwind.config.js file using npm.
npm install -D tailwindcss
npx tailwindcss init
  • Add the following code to the tailwind.config.js file.
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: [
    "./src/**/*.{js,jsx,ts,tsx}",
  ],
  theme: {
    extend: {},
  },
  plugins: [],
}
  • Add the @tailwind directives to the top of the ./src/index.css file.
@tailwind base;
@tailwind components;
@tailwind utilities;

2.3 Install dependencies

Install react-speech-recognition,axios and react-loader-spinner packages to the app using the following command.

npm i react-speech-recognition axios react-loader-spinner

2.4 Create bot frontend

Add the following code to the App.js file.

import './App.css';
import { useEffect, useState, useRef } from 'react'
import React from 'react'
import arrow from './assets/arrow.png'
import bg from './assets/bg.png'
import mic from './assets/mic.png'
import mic_on from './assets/mic_on.png'
import axios from 'axios';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'
import { Dna } from 'react-loader-spinner'

function App() {
  const [userInput, setUserInput] = useState('')
  const [recording, setRecording] = useState(false)
  const [loading, setLoading] = useState(false)
  const [messages, setMessages] = useState([
    {
      msg: "Hello, How can I help you ?",
      fromUser: false,
    }
  ])
  const bottomRef = useRef(null);

  const themes = {
    primaryColor: "#475569",
    secondryColor: "#475569",
    primaryFontColor: "white",
    secondryFontColor: "#2C3333",
    logoColor: "#E7F6F2",
    backgroudImage: bg
  }

  const commands = [
    {
      command: 'clear',
      callback: ({ resetTranscript }) => resetTranscript()
    },
    {
      command: 'reset',
      callback: ({ resetTranscript }) => resetTranscript()
    }
  ]

  const {
    transcript,
    resetTranscript,
    browserSupportsSpeechRecognition
  } = useSpeechRecognition({ commands });

  useEffect(() => {
    if (userInput !== '') {
      setLoading(true)
      axios.get(`http://localhost:8000/api/response?message=${userInput}`)
        .then((response) => {

          speechSynthesis.cancel();
          let utterance = new SpeechSynthesisUtterance(response.data);
          speechSynthesis.speak(utterance);

          setUserInput('')
          resetTranscript()
          setMessages([...messages, { msg: response.data, fromUser: false }])
          setLoading(false)
        }, (error) => {
          console.log(error);
        });
    }
  // eslint-disable-next-line
  }, [messages])

  useEffect(() => {
    setUserInput(transcript)
  }, [transcript]);

  const sendMessage = () => {
    if (userInput !== '') {
      setMessages([...messages, { msg: userInput, fromUser: true }]);
    }
  };

  if (!browserSupportsSpeechRecognition) {
    alert("Browser doesn't support speech recognition.")
  }

  const handleRecording = () => {
    if (recording) {
      SpeechRecognition.stopListening()
    }
    else {
      resetTranscript()
      setUserInput('')
      SpeechRecognition.startListening({ continuous: true })
    }
    setRecording(!recording)
  }

  return (
    <div className="min-h-screen bg-gray-100" style={{ background: `url(${themes.backgroudImage})`, backgroundSize: 'cover' }}>

      <div style={{ backgroundColor: themes.primaryColor }} className={`w-full h-18  fixed flex justify-between`}>
        <div style={{ color: themes.logoColor }} className='text-green-100 text-3xl font-bold p-5 font-sans'>VBot</div>
      </div>

      <div className='py-32'>
        <div className="max-w-2xl mx-auto space-y-12 grid grid-cols-1 overflow-y-auto scroll-smooth scrollbar-hide overflow-x-hidden" style={{ maxHeight: '30rem' }}>
          {loading && 
            <div className='flex justify-center items-center'>
              <Dna visible={true} height="100" width="100" ariaLabel="dna-loading" wrapperStyle={{}} wrapperClass="dna-wrapper" />
            </div>
          }
          <ul>
            {messages && messages.map((message, idx) => {
              return (
                <div key={idx} className={`mt-3 ${message.fromUser ? "place-self-end text-right" : "place-self-start text-left"}`}>
                  <div className="mt-3  p-3 rounded-2xl" 
                    style={{ backgroundColor: message.fromUser ? themes.primaryColor : 'white', 
                             color: message.fromUser ? themes.primaryFontColor : themes.secondryFontColor, 
                             borderTopLeftRadius: !message.fromUser && 0, borderTopRightRadius: message.fromUser && 0 }} >
                    <p className="break-words text-md">
                      {message.msg}
                    </p>
                  </div>
                </div>
              )
            })}
          </ul>
          <div ref={bottomRef} />
        </div>
      </div>

      <div className={`w-full fixed bottom-0`}>
        <div className='justify-end items-center bg-white rounded-xl flex mx-96 my-3'>
          <input className='p-3 bg-white w-full rounded-l-md border-0 outline-none'
            placeholder="Ask your question..."
            type="text"
            id="message"
            name="message"
            value={userInput}
            onChange={(e) => setUserInput(e.target.value)}
          />
          <button className='bg-white px-4' disabled={!browserSupportsSpeechRecognition} onClick={handleRecording}>
            {recording ? <img className='w-10' src={mic_on} alt="mic"></img> : <img className='w-10' src={mic} alt="mic"></img>}
          </button>
          <button style={{ backgroundColor: themes.secondryColor }} className={`p-4 rounded-r-xl`} onClick={sendMessage}>
            <img className='w-8' src={arrow} alt="arrow" />
          </button>
        </div>
      </div>
    </div>
  )
}

export default App;

Understand the code:

  • Import assets (bg, arrow, mic, mic_on icons), axios, speech recognition and dna (react loader spinner) packages.
  • Initialize states such as userInput, recording, loading and messages. userInput is used to keep the user inputs, recording collects user input from the mic and messages stores the request / response from the bot.
  • The first useEffect hook calls the backend API (http://localhost:8000/api/response?message=userInput) with the user input and stores the bot's response in the messages state.
  • sendMessage() adds the user input to the messages state if it is not empty. This triggers the useEffect hook, since messages is in its dependency array.
  • The handleRecording() method handles the user's voice input and stores it in the userInput state.
  • The JSX markup sets up the UI with a user input text box, an audio input icon, a send button, and message bubbles.
  • The UI accepts user input typed into the text box or captured through the speech recognition module; the userInput state updates on each change. When the user clicks the send button, sendMessage() adds the userInput value to the messages state if it is not empty. The change to messages triggers the useEffect hook, which sends a request to the backend (http://localhost:8000). The backend's response is appended to the messages state with fromUser set to false and is displayed in a message bubble as the bot response.

2.5 Run the app

  • Follow step 1.7 to run the backend server.
  • Run the frontend react app using the following command.
npm start


There you have it! Your own chatbot powered by GPT3 :)

Applications of ChatGPT

ChatGPT is an amazing tool that can be used to enhance your productivity and learning. Let’s explore some of the ways you can use this tool to make your life easier and more productive.

Generate ideas and brainstorm

ChatGPT can be beneficial for brainstorming ideas such as a birthday party celebration idea, recipe ideas or entire meal plans.

Understand complicated topics

ChatGPT can give you precise, clear overviews on complex topics in layman’s terms. If you want to understand machine learning, or find out how quantum computing is being used, ChatGPT is the perfect choice to assist you to understand the topics.

Get assistance with coding and debugging

ChatGPT can provide code examples, or help you troubleshoot by providing solutions to common coding issues or errors. It can also help you to look up syntax and parameters.

Train ChatGPT on your own data

You can feed your own data into the model and fine-tune it on that data, so that the model is adapted to your specific domain. Finally, you can export the model and develop APIs that allow your system to interact with other tools and platforms.
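If you take the fine-tuning route, OpenAI's fine-tuning tooling (at the time of writing) expects training data as JSONL prompt/completion pairs. The contents below are made up for illustration:

```json
{"prompt": "What is quantum entanglement?", "completion": " Two particles share a linked quantum state."}
{"prompt": "Who proposed the uncertainty principle?", "completion": " Werner Heisenberg."}
```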

As a writing assistant

Whether it’s a poem, a news article, an email, or an essay, ChatGPT will provide customized content that you can use in different scenarios.

Get personalized recommendations

ChatGPT can act as your shopping assistant or health assistant, providing customized recommendations to suit your needs.

Summarize the latest research

ChatGPT can summarize research work, including entire reports, web pages, or studies. It will return a summarized version of the content in your prompt.

Translate text

It is a powerful tool for translating text into different languages. It supports 95 different languages.

Find data sets

ChatGPT is useful when looking for datasets: it can point you to relevant datasets from various online databases, repositories, and resources. You can use this for research, business intelligence, or for training a machine learning model.

Sentiment Analysis

ChatGPT can analyze the words, phrases, and punctuation used in a text and determine its tone, such as positive, negative, or neutral.
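One lightweight way to do this with the Completion API used earlier is to wrap the text in a classification prompt. This is a sketch; the prompt wording and the build_sentiment_prompt helper are my own, not part of the original code:

```python
def build_sentiment_prompt(text):
    # hypothetical helper: instruct the model to answer with one label
    return (
        "Classify the sentiment of the following text as positive, "
        "negative, or neutral.\n\n"
        f"Text: {text}\n"
        "Sentiment:"
    )

prompt = build_sentiment_prompt("I love this product!")
# pass `prompt` to openai.Completion.create() as in the earlier sections
```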

Thanks for reading this article.

Thanks Gowri M Bhatt for reviewing the content.

Thanks Fabius S Thottappilly and Grigary C Antony for the support in creating a custom chatbot using ChatGPT.

If you enjoyed this article, please click on the heart button ♥ and share to help others find it!

The article is also available on Medium.

If you are interested in developing a chatbot using Rasa, then check out the following article:

Create your chatbot using Rasa and deploy it on AWS | medium.com

The full source code for this tutorial can be found here,

GitHub - codemaker2015/VBot-custom-gpt3-chatbot: Custom GPT3 powered chatbot using python and react | github.com


Top comments (2)

Simple stock guy:

Hi, I don't see a step for training the model with the article data?

Vishnu Sivan:

I have used prompting instead of training on the data. GPT3 doesn't have a training mechanism as such; it uses fine-tuning instead.