Tiago Souto
AI Series Part III: Creating a chatbot with OpenAI GPT Model (NextJS)

In the previous posts, I covered some shallow AI concepts and mentioned a few tips to help you use AI tools to improve your performance as a developer. Now it’s time to take a look at some code and build a simple chatbot.

In this post, I’ll assume you have some previous basic knowledge of React and NextJS, so my goal is to focus on the OpenAI integration.

Getting Started

First of all, you need an OpenAI account so you can generate an API key, which is required for the code we'll write to work properly. If you don't have an account already, please visit the OpenAI webpage (https://openai.com/) and create one.

After you log in, go to the Apps page (https://platform.openai.com/apps) and select the API option.

In the left menu, select API Keys.

Click on Create new secret key, add a name, and click the Create button. You'll be prompted with your secret key. Copy it and save it in a file; we'll use this key soon.

Now go to Settings, then Billing.

If it's your first time on the platform, you might see $5 in your credit balance, which OpenAI grants for free so people can test the API. If your balance is $0, you'll have to add credits using a payment method. For now, $5 is more than enough for the tests we're about to do, but for the next posts you might need to add more funds. We'll see.

Okay, now that we have the API Key and funds, we can start with the code.

NextJS App Setup

First, let’s create our NextJS app by running

npx create-next-app@latest nextjs-chat-openai

Select the following settings:

  • TypeScript
  • ESLint
  • Tailwind CSS
  • src/ directory
  • App Router
  • Import Alias (@/*)

Now open the project directory and install the following dependencies. For this post, I’ll use pnpm, but you can use npm or yarn if you prefer.

pnpm add ai class-variance-authority clsx date-fns highlight.js lucide-react openai rehype-highlight react-markdown tailwind-merge tailwindcss-animate zustand

  • ai: a helper library for working with AI chat, streaming, and more
  • class-variance-authority, clsx, tailwind-merge, and tailwindcss-animate: Tailwind helpers we'll use to prevent class conflicts, add conditional styles, and more
  • date-fns: used to parse and format dates
  • highlight.js, rehype-highlight, and react-markdown: used to render markdown and code blocks in chat messages
  • lucide-react: icon library
  • openai: the client we need to call the OpenAI API
  • zustand: global state management

Now let’s install some dev dependencies:

pnpm add -D @tailwindcss/typography css-loader style-loader prettier prettier-plugin-tailwindcss zod

  • @tailwindcss/typography, css-loader, style-loader, prettier, and prettier-plugin-tailwindcss: used for Tailwind configuration and code formatting
  • zod: used for API input validation

You can add more dependencies as you like, such as lint-staged, husky, and others. But for this post's purposes, those are enough.

Now let's set up the .prettierrc.cjs file. You can add your own preferences or skip this if you don't use Prettier:

module.exports = {
  bracketSpacing: false,
  jsxBracketSameLine: true,
  singleQuote: true,
  trailingComma: 'es5',
  semi: false,
  printWidth: 100,
  tabWidth: 2,
  useTabs: false,
  importOrder: ['^\\u0000', '^@?\\w', '^[^.]', '^\\.'],
  importOrderSeparation: true,
  plugins: ["prettier-plugin-tailwindcss"]
};

We'll use shadcn-ui to install some UI components. Follow steps 2 and 3 from their guide here: https://ui.shadcn.com/docs/installation/next
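
For reference, at the time of writing those steps boil down to running the shadcn-ui CLI init and answering its prompts (check the linked guide in case the command has changed):

npx shadcn-ui@latest init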

Then, install the shadcn-ui components the app uses. It'd take quite a while to walk through creating every custom component, so I recommend going to my GitHub repo and copying the following files:

  • /components/Avatar
  • /components/Chat
  • /components/Root
  • /components/Message
  • page.tsx
  • layout.tsx

Also the /lib folder.

A brief explanation of every component:

  • Avatar.tsx: a wrapper component that receives children and sets the common styles for the avatar
  • BotAvatar.tsx: uses the base Avatar.tsx component and renders a robot icon
  • UserAvatar.tsx: same as BotAvatar, but renders a person icon
  • index.ts: each component folder has its own index file that re-exports its components, so they can all be imported from a single path
  • Chat.tsx: the main chat wrapper; it receives a list of messages and renders each message item containing the message balloon and avatars
  • ChatInput.tsx: the text input and send button
  • Message.tsx: a wrapper that sets the message item row and avatar positioning
  • MessageBalloon.tsx: renders the message text and also implements markdown handling, code highlighting, and copy and download buttons
  • Root (index.tsx): contains the html and body tags and global state handlers. This is needed because layout.tsx can't be a 'use client' component
  • lib/store.ts: the global state manager; currently it persists the messages (see the sketch after this list)
  • lib/utils.ts: a helper function for dealing with Tailwind classes
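
To give you an idea of what the lib files contain, here's a minimal sketch of both (illustrative; the exact types and names in the repo may differ):

// lib/utils.ts: merge conditional class names without Tailwind conflicts
import {clsx, type ClassValue} from 'clsx'
import {twMerge} from 'tailwind-merge'

export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs))
}

// lib/store.ts: persist chat messages with zustand's persist middleware
import {create} from 'zustand'
import {persist} from 'zustand/middleware'

export type Message = {role: 'user' | 'assistant'; content: string}

type ChatStore = {
  messages: Message[]
  addMessage: (message: Message) => void
  clearMessages: () => void
}

export const useChatStore = create<ChatStore>()(
  persist(
    (set) => ({
      messages: [],
      addMessage: (message) => set((state) => ({messages: [...state.messages, message]})),
      clearMessages: () => set({messages: []}),
    }),
    {name: 'chat-messages'} // localStorage key used by the persist middleware
  )
)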

The page.tsx file renders the chat and the input, and handles the logic and API calls.
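
As a rough sketch of that logic (illustrative names; assuming the API route we'll create below is served at /api), page.tsx posts the prompt and reads the streamed reply chunk by chunk:

async function sendPrompt(prompt: string, onToken: (token: string) => void) {
  const response = await fetch('/api', {
    method: 'POST',
    headers: {'content-type': 'application/json'},
    body: JSON.stringify({prompt}),
  })
  if (!response.ok || !response.body) throw new Error('Request failed')

  // read the streamed body and hand each decoded chunk to the caller
  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  while (true) {
    const {done, value} = await reader.read()
    if (done) break
    onToken(decoder.decode(value)) // append each chunk to the current bot message
  }
}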

OpenAI Integration

Okay, we have the basics to start working. Now it's time to use the OpenAI key we created before. In the root of the project, create a .env file and add OPENAI_API_KEY=your_key to it.

Next, create a file in the lib folder and name it openai.ts. In this file, we'll initialize the OpenAI API object as follows:

import {OpenAI} from 'openai'

export const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
})

Now, let's create a new folder called api inside src/app (route handlers must live inside the app directory), and then create a route.ts file inside of it.

We’ll start importing the modules we’ll need:

import {OpenAIStream, StreamingTextResponse} from 'ai' // helpers to deal with ai chat streaming
import {NextResponse} from 'next/server' // NextJS response helper
import {ChatCompletionMessageParam} from 'openai/resources/index.mjs' // type definition
import {z} from 'zod' // used for API scheme validation
import {openai} from '@/lib/openai' // our openai initializer

Then, we’ll create the system prompt that’ll be sent to the OpenAI GPT model. As we’ve mentioned previously in the basic concepts post, the system prompt is the instruction that we’ll send to the LLM in order to define its behavior. Here’s how we set it:

const generateSystemPrompt = (): ChatCompletionMessageParam => {
  const content = `You are a chat bot and will interact with a user. Be cordial and reply their messages using markdown syntax if needed. If markdown is a code block, specify the programming language accordingly.`
  return {role: 'system', content}
}

And, finally, we’ll start writing our POST method that will be called by the frontend.

First, we’ll start with the basic definition and get the prompt argument sent in the HTTP request:

export async function POST(request: Request) {
  const body = await request.json()
  const bodySchema = z.object({
    prompt: z.string(),
  })
  const {prompt} = bodySchema.parse(body)

We use zod to validate that the prompt argument sent in the request body is a string.
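
Note that bodySchema.parse throws if the body is invalid, and since this happens outside the try block below, the client would get an unhandled error. A stricter variant (an optional refinement, not part of the original code) uses safeParse to return a proper 400 instead:

// safeParse doesn't throw; it returns a result object we can inspect
const parsed = bodySchema.safeParse(body)
if (!parsed.success) {
  return new NextResponse(JSON.stringify({error: parsed.error.flatten()}), {
    status: 400,
    headers: {'content-type': 'application/json'},
  })
}
const {prompt} = parsed.data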

Now we can call our system prompt generator function and store the result in a variable so we can use it later:

const systemPrompt = generateSystemPrompt()

We're almost done. Now it's time to make the request to OpenAI, passing some arguments in order to get the GPT response:

try {
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-16k',
    temperature: 0.5,
    messages: [systemPrompt, {role: 'user', content: prompt}],
    stream: true,
  })

We're using the chat.completions.create method to create a chat completion request.

We’re pre-defining the LLM we want to use in the model property. You can replace that with other available models like GPT-4. But keep in mind that different models have different costs.

The temperature defines how creative we want the LLM to be. For chat completions it accepts values from 0 to 2, though most apps stay between 0 and 1: 0 means we don't want it to be creative (it'll follow the instructions and respond with exactly what was asked in the prompt), while higher values make it more creative (it might include additional information and details that are related to the prompt but weren't asked for). Each temperature value has a purpose depending on the app we're building. For this example, we'll use 0.5.

The messages attribute is the list of messages from the chat. We could also add the chat history here to make the LLM aware of the whole conversation context. But for now, we're just passing the system instructions and the user prompt.
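For illustration, if the frontend also sent the previous messages (a hypothetical history array, not part of this post's code), including them would look like this:

const messages: ChatCompletionMessageParam[] = [
  systemPrompt,
  ...history, // e.g. [{role: 'user', content: 'Hi'}, {role: 'assistant', content: 'Hello!'}]
  {role: 'user', content: prompt},
]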

stream is a boolean that defines whether we want to receive the response as a stream of tokens, or wait for the whole response to be ready and sent all at once.
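
For comparison, with stream: false the whole reply would arrive in a single object instead of a stream:

const completion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-16k',
  temperature: 0.5,
  messages: [systemPrompt, {role: 'user', content: prompt}],
  stream: false,
})
const text = completion.choices[0].message.content // the full answer at once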

And, finally, we return the streamed response to the frontend:

  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
} catch (error) {
  console.log('error', error)
  return new NextResponse(JSON.stringify({error}), {
    status: 500,
    headers: {'content-type': 'application/json'},
  })
}
} // closes the POST handler

Running the App

We're done! If everything's good, you should be able to test the app by running pnpm run build && pnpm start

Access http://localhost:3000 and you can start chatting with OpenAI GPT-3.5.


This is a very basic example just to give you a starting point. Many other improvements can be implemented like selecting different models, limiting token usage, chat history, and much more.
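
For instance, limiting token usage is a one-line change to the completion call, using OpenAI's max_tokens parameter:

const response = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-16k',
  temperature: 0.5,
  max_tokens: 256, // cap the length of the generated reply
  messages: [systemPrompt, {role: 'user', content: prompt}],
  stream: true,
})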

Extra: Running on Docker

Another way of running the app is using Docker. For now, it's not needed, which is why I didn't include it in the main scope. But it will be helpful for future posts as we start integrating new features, so feel free to add it now and use this first project as a base for what's coming next.

First, create a Dockerfile in the root of the project and add the following:

FROM node:20.6.1

ARG PNPM_VERSION=8.7.1

COPY . ./chat-app
WORKDIR /chat-app
RUN npm install -g pnpm@${PNPM_VERSION}
ENTRYPOINT pnpm install && pnpm run build && pnpm start

Then, create a docker-compose.yaml file in the root of the project and add the following:

services:
  chat-app:
    container_name: chat-app
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
    ports:
      - 3000:3000
    entrypoint: sh -c "pnpm install && pnpm run build && pnpm run dev"
    working_dir: /chat-app
    volumes:
      - .:/chat-app

Now, if you run docker compose up, you'll be able to see the app up and running. Make sure you have Docker installed and running on your machine before running this command.
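
For example, assuming your key is exported in your shell (Compose also reads a .env file next to docker-compose.yaml for variable substitution):

OPENAI_API_KEY=sk-your-key docker compose up --build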

We’re going to explore more topics in the coming posts.

See you there!

GitHub code repository: https://github.com/soutot/ai-series/tree/main/nextjs-chat-openai
