Alfredo Baldoceda

ChatGPT Part I: AI-Powered Text Generation with Next.js and OpenAI

In today's digital age, automation and artificial intelligence (AI) have revolutionized various industries, including content creation and customer support. One such powerful tool is AI-powered text generation, which can assist in generating content, answering queries, and even enhancing creative writing. In this article, we will explore how Next.js, a popular React framework for building web applications, can be combined with the OpenAI API to leverage AI-powered text generation capabilities.


Introduction to Next.js and OpenAI API

Next.js is a JavaScript framework that lets developers build server-side rendered (SSR) React applications. With its robust ecosystem, it provides a solid foundation for efficient, scalable web applications.

On the other hand, the OpenAI API exposes a powerful language model capable of generating text that closely resembles human-written content. By integrating Next.js with the OpenAI API, we can build dynamic, interactive web applications that serve AI-generated content in real time.


Setting up the Next.js Application

To get started, we need to set up a Next.js application and configure the OpenAI API. Here are the steps:

1. Create a new Next.js project by running:

$ npx create-next-app chatbot-app

2. Configure the OpenAI API by creating an account on the OpenAI website and obtaining an API key.

3. Install the OpenAI package by running:

$ cd chatbot-app
$ npm install openai

4. Set up the OpenAI API configuration in your Next.js application. Create a new file, openai.js, and add the following code:

import { Configuration, OpenAIApi } from 'openai';

// Read the API key from the environment rather than hard-coding it
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

// A single shared client instance for the whole app
const openai = new OpenAIApi(configuration);

export default openai;

Make sure the OPENAI_API_KEY environment variable is set to your actual API key.

Alternatively, you can create a .env.local file containing your secret API key; Next.js loads it automatically.
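
For example, a minimal .env.local at the project root could look like this (the value is a placeholder, not a real key):

OPENAI_API_KEY=your-secret-api-key

Keep this file out of version control, for example via .gitignore.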

Creating the AI-Powered Text Generation Endpoint

Now that we have our Next.js application and OpenAI API configured, let's create an endpoint that performs the AI-powered text generation. We will create a serverless function that handles HTTP POST requests and returns AI-generated text for the provided prompt.

To handle the HTTP requests, we use the Route Handler functions from the new Next.js 13.4 App Router. For more details, see https://nextjs.org/docs/app/building-your-application/routing/route-handlers#supported-http-methods.


The Davinci Model

import { NextResponse } from 'next/server';
import openai from '../../openai'; // import path is an assumption; point it at your openai.js

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Reject requests that arrive without a prompt
  if (!prompt || prompt === '') {
    return new Response('Please send your prompt', { status: 400 });
  }

  const aiResult = await openai.createCompletion({
    model: 'text-davinci-003',
    prompt: `${prompt}`,
    temperature: 0.9,
    max_tokens: 2048,
    frequency_penalty: 0.5,
    presence_penalty: 0
  });

  // Fall back to a friendly message if no text comes back
  const response = aiResult.data.choices[0].text?.trim() || 'Sorry, there was a problem!';

  return NextResponse.json({ text: response });
}

The POST function is called when an HTTP POST request is made to the serverless function. It expects the request body to contain a JSON object with a prompt property.

If the prompt is missing or empty, a response with the text 'Please send your prompt' and a status code of 400 (Bad Request) is returned.

If the prompt is provided, an AI completion is generated using the OpenAI API. The openai.createCompletion function is called with various parameters, including the model to use ('text-davinci-003'), the provided prompt, and additional settings like temperature, max tokens, frequency penalty, and presence penalty.

The response from the API call is stored in the aiResult variable. The generated text is extracted from aiResult.data.choices[0].text?.trim(). If no text is generated, a fallback message of 'Sorry, there was a problem!' is used.

Finally, the response is sent using NextResponse.json(), wrapping the generated text in a JSON object with the property text.
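
To sanity-check the endpoint, you can call it from any HTTP client. Here is a minimal sketch using fetch; the /api/davinci path is an assumption and depends on where you place the route file:

// Assumes the handler above lives at app/api/davinci/route.ts
const res = await fetch("/api/davinci", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ prompt: "Write a haiku about Next.js" }),
});
const { text } = await res.json();
console.log(text);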


Here's an explanation of the parameters provided to the OpenAI API's createCompletion method:

1. model: 'text-davinci-003'

  • This parameter specifies the model to be used for generating the completion. In this case, 'text-davinci-003' refers to the specific version of the GPT-3.5 model developed by OpenAI. Different models may have varying capabilities and performance.

2. prompt: ${prompt}

  • This parameter represents the text that serves as the starting point or input for the completion. You can replace ${prompt} with the actual text you want to use as the prompt.

3. temperature: 0.9

  • The temperature parameter controls the randomness of the generated output. A higher temperature, such as 1.0, makes the output more diverse and creative, while a lower temperature, like 0.1, makes it more focused and deterministic. A value of 0.9 yields relatively varied responses; the sketch after this list shows how to compare settings.

4. max_tokens: 2048

  • This parameter determines the maximum number of tokens in the generated completion. Tokens are chunks of text, which can be as short as one character or as long as one word. Setting max_tokens to 2048 ensures that the completion will not exceed that length. Be aware that longer completions may incur higher costs and take more time to generate.

5. frequency_penalty: 0.5

  • The frequency penalty is used to discourage the model from repeating the same words or phrases excessively in its output. A higher value, such as 1.0, will strongly penalize repeated text, while a lower value, like 0.0, will not penalize repetition. A value of 0.5 indicates a moderate frequency penalty, encouraging the model to produce varied and diverse responses.

6. presence_penalty: 0

  • The presence penalty discourages the model from repeating tokens that have already appeared in the text so far, nudging it toward new topics. A higher value, such as 1.0, strongly penalizes tokens that are already present, while a lower value, like 0.0, applies no penalty. A value of 0 means no presence penalty, so the model is free to revisit earlier terms.
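
The easiest way to build intuition for these settings is to run the same prompt with different values and compare the outputs. Here is a minimal sketch that reuses the client from openai.js (the import path is an assumption):

import openai from "./openai"; // adjust to wherever openai.js lives

// Run one prompt at a low and a high temperature and print both completions
async function compareTemperatures(prompt: string) {
  for (const temperature of [0.1, 0.9]) {
    const result = await openai.createCompletion({
      model: "text-davinci-003",
      prompt,
      temperature,            // low = focused, high = varied
      max_tokens: 256,
      frequency_penalty: 0.5, // discourages verbatim repetition
      presence_penalty: 0,    // no extra push toward new topics
    });
    console.log(`temperature=${temperature}:`, result.data.choices[0].text?.trim());
  }
}

compareTemperatures("Describe Next.js in one sentence.");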

The GPT-3.5 Turbo Model

This model differs slightly from the Davinci model in the request data it requires and the response it produces.

import { NextResponse } from "next/server";
import openai from "../../openai"; // import path is an assumption; point it at your openai.js

export async function POST(req: Request) {
  const { prompt } = await req.json();

  if (!prompt || prompt === "") {
    return new Response("Please send your prompt", { status: 400 });
  }

  // Chat models take a list of messages instead of a single prompt string
  const aiResult = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: `${prompt}` }],
    temperature: 0.9,
    max_tokens: 2048,
    frequency_penalty: 0.5,
    presence_penalty: 0,
  });

  // Chat completions return text under message.content rather than text
  const response =
    aiResult.data.choices[0].message?.content?.trim() ||
    "Sorry, there was a problem!";

  return NextResponse.json({ text: response });
}

Here is an explanation of the parameters that differ for the "gpt-3.5-turbo" model:

1. model: "gpt-3.5-turbo"

  • This parameter specifies the model to be used for generating the completion. In this case, "gpt-3.5-turbo" refers to a specific version of the GPT-3.5 model known for its speed and cost-effectiveness compared to the base GPT-3 models.

2. messages: [{ role: "user", content: ${prompt} }]

  • Instead of using a traditional prompt, the "messages" parameter allows for more interactive conversations. Each message object represents a role ("user" or "assistant") and the content of the message. In this case, there is a single message from the "user" role with the content of ${prompt}. You can replace ${prompt} with the actual text of the user's message.
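
Although this handler sends a single user message, the messages array can carry an entire conversation, including an optional "system" message that steers the assistant's tone. A brief sketch with illustrative content:

// A multi-turn conversation: the model sees all prior messages as context
const aiResult = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [
    { role: "system", content: "You are a concise, friendly assistant." },
    { role: "user", content: "What is Next.js?" },
    { role: "assistant", content: "A React framework for building web applications." },
    { role: "user", content: "Does it support server-side rendering?" },
  ],
});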

Setting up the Chat interface

Using the Next.js App Router, we set up our interface that will make calls to our existing API created in the previous step.


ChatMessage Component

The ChatMessage component is responsible for rendering individual chat messages:

import Image from 'next/image';
// The avatar import paths are assumptions; point them at your own images
import mePic from '../public/me.png';
import botPic from '../public/bot.png';

const ChatMessage = ({ text, from }: MessageProps) => {
  return (
    <>
      {from === Creator.Me && (
        <div className="bg-white p-4 rounded-lg flex gap-4 items-center whitespace-pre-wrap">
          <Image src={mePic} alt="Me" width={40} />
          <p className="text-gray-700">{text}</p>
        </div>
      )}
      {from === Creator.Bot && (
        <div className="bg-gray-100 p-4 rounded-lg flex gap-4 items-center whitespace-pre-wrap">
          <Image src={botPic} alt="Bot" width={40} />
          <p className="text-gray-700">{text}</p>
        </div>
      )}
    </>
  );
};
  • The ChatMessage component receives the text and from props, destructured from MessageProps.
  • Conditional rendering is used to differentiate between the user's messages (Creator.Me) and the bot's responses (Creator.Bot).
  • When the message is from the user, a white-colored message bubble with the user's image is rendered.
  • When the message is from the bot, a gray-colored message bubble with the bot's image is rendered.
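
The MessageProps type and Creator enum used above are not shown in the article; a minimal sketch consistent with how the components use them could be:

// Identifies the author of a chat message
enum Creator {
  Me = "me",
  Bot = "bot",
}

// Props for a single rendered message
interface MessageProps {
  text: string;
  from: Creator;
  key: number; // timestamp, used as the React key in ChatGPTPage
}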

ChatInput Component

The ChatInput component handles the user input and sending messages:

const ChatInput = ({ onSend, disabled }: InputProps) => {
  const [input, setInput] = useState("");

  const sendInput = () => {
    onSend(input);
    setInput("");
  };

  const handleKeyDown = (event: any) => {
    // event.key replaces the deprecated event.keyCode
    if (event.key === "Enter") {
      sendInput();
    }
  };

  return (
    <div className="bg-white border-2 p-2 rounded-lg flex justify-center">
      <input
        value={input}
        onChange={(event: any) => setInput(event.target.value)}
        className="w-full py-2 px-3 text-gray-800 rounded-lg focus:outline-none"
        type="text"
        placeholder="Ask me anything"
        disabled={disabled}
        onKeyDown={(ev) => handleKeyDown(ev)}
      />
      {disabled && (
        <svg
          aria-hidden="true"
          className="mt-2 inline w-6 h-6 mx-2 text-gray-100 animate-spin dark:text-gray-300 fill-gray-500"
          viewBox="0 0 100 101"
          fill="none"
          xmlns="http://www.w3.org/2000/svg"
        >
          {/* SVG path data */}
        </svg>
      )}
      {!disabled && (
        <button
          className="p-2 rounded-md text-gray-500 bottom-1.5 right-1"
          onClick={() => sendInput()}
        >
          <AiOutlineSend size={20} />
        </button>
      )}
    </div>
  );
};
  • The ChatInput component receives the onSend and disabled props, destructured from InputProps.
  • The component manages the user input using the useState hook. The input state variable stores the current value of the input field.
  • When the user clicks the send button or presses Enter, the sendInput function is called, which invokes the onSend callback prop with the input value and clears the input field.
  • The handleKeyDown function listens for the Enter key press and triggers the sendInput function.
  • The component renders a text input field, a loading spinner when disabled is true, and a send button when disabled is false.
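
Likewise, InputProps is not shown in the article; a sketch matching how ChatInput consumes it:

// Props for the chat input bar
interface InputProps {
  onSend: (input: string) => void; // invoked with the user's message
  disabled: boolean;               // true while a request is in flight
}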

ChatGPTPage Component

The main component that brings everything together is ChatGPTPage:

import useState from "react-usestateref"; // drop-in useState that also returns a ref (see the note below)

export default function ChatGPTPage({ params }: { params: { model: string } }) {
  const [messages, setMessages, messagesRef] = useState<MessageProps[]>([]);
  const [loading, setLoading] = useState(false);

  const { model } = params;

  const callApi = async (input: string) => {
    setLoading(true);

    const myMessage: MessageProps = {
      text: input,
      from: Creator.Me,
      key: new Date().getTime(),
    };

    setMessages([...messagesRef.current, myMessage]);

    const response = await fetch(`/api/${model}`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        prompt: input,
      }),
    }).then((response) => response.json());
    setLoading(false);

    if (response.text) {
      const botMessage: MessageProps = {
        text: response.text,
        from: Creator.Bot,
        key: new Date().getTime(),
      };
      setMessages([...messagesRef.current, botMessage]);
    } else {
      // Error message here
    }
  };

  return (
    <ViewAuth>
      <main className="relative max-w-2xl mx-auto">
        <div className="sticky top-0 w-full pt-10 px-4">
          <ChatInput onSend={(input) => callApi(input)} disabled={loading} />
        </div>

        <div className="mt-10 px-4">
          {messages.map((msg: MessageProps) => (
            <ChatMessage key={msg.key} text={msg.text} from={msg.from} />
          ))}
          {messages.length === 0 && (
            <p className="text-center text-gray-400">
              Hello there, this is the albac.dev ChatGPT bot
            </p>
          )}
        </div>
      </main>
    </ViewAuth>
  );
}
  • The ChatGPTPage component receives a params prop, which is an object containing the model property.
  • The component initializes state variables using the useState hook: messages stores the chat messages, setMessages is the function to update the messages, and messagesRef is a reference to the messages array.
  • The loading state variable is used to indicate if a request is currently being made to the ChatGPT API.
  • The callApi function is responsible for sending user input to the API and handling the bot's response. It updates the messages state to include the user's message and makes a POST request to the /api/${model} endpoint with the user's input as the request payload. The response is then added to the messages state as the bot's message.
  • The component wraps everything in the ViewAuth component, which handles authentication (explained below).
  • Inside the main element, the ChatInput component is rendered at the top of the page, allowing the user to enter messages. The onSend prop is set to the callApi function, and the disabled prop is set to the loading state variable.
  • The messages state is mapped over to render the ChatMessage components, passing the necessary props.
  • If there are no messages in the messages state, a default greeting message is rendered.
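
Note that callApi leaves its error branch as a placeholder. One possible approach, sketched here as a hypothetical helper you could call from that branch, is to surface the failure as a bot message:

// Hypothetical helper: report a failed request as a bot message
const showError = () => {
  const errorMessage: MessageProps = {
    text: "Sorry, something went wrong. Please try again.",
    from: Creator.Bot,
    key: new Date().getTime(),
  };
  setMessages([...messagesRef.current, errorMessage]);
};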

Note: Notice that we are using a replacement useState hook from the react-usestateref package. This custom hook extends React's useState by also returning a mutable reference to the current value, so you can read the latest value directly without waiting for the component to re-render.

In the standard useState hook, when you update the state using the setter function returned by useState, React will re-render the component to reflect the new state value. However, there are cases where you might want to update a value without triggering a re-render, or you need to access the current value of the state outside the scope of the component's render function.
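
A quick illustration of the difference, assuming the package's drop-in API:

import useState from "react-usestateref";

function Counter() {
  const [count, setCount, countRef] = useState(0);

  const handleClick = () => {
    setCount(count + 1);
    // `count` still holds the value captured at render time,
    // but the ref already reflects the update:
    console.log(count, countRef.current); // e.g. 0 1
  };

  return <button onClick={handleClick}>Increment</button>;
}

This is exactly why callApi reads messagesRef.current when appending the bot's reply: the earlier setMessages call for the user's message has not re-rendered the component yet, and the ref guarantees the latest array is used.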

The ViewAuth component is used to validate our authentication with AWS Amplify. It checks if the user is authenticated to use this page.


generateStaticParams Function

Lastly, we have a helper function called generateStaticParams:

export function generateStaticParams() {
  return [{ model: "turbo" }, { model: "davinci" }];
}
  • The generateStaticParams function returns an array of objects representing different models that can be used for the ChatGPT interface. In this case, the models are "turbo" and "davinci".
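
For context, generateStaticParams pairs with a dynamic route segment, so params.model resolves to one of these values at build time. A plausible file layout (the exact paths are assumptions):

app/
  api/
    davinci/route.ts    # handler using text-davinci-003
    turbo/route.ts      # handler using gpt-3.5-turbo
  chatgpt/
    [model]/page.tsx    # ChatGPTPage, where params.model is "davinci" or "turbo"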

The Final Result!

Finally, you can see the different responses we get when we switch from the Davinci model to the GPT-3.5 Turbo model.

[Image: ChatGPT Models]


This code snippet provides a foundation for creating a chat interface using ChatGPT in a Next.js application. It includes components for rendering chat messages and handling user input, as well as functions for sending and receiving messages from the ChatGPT API. With this code as a starting point, you can further customize and enhance the chat interface to suit your specific requirements.

See the full code for the Davinci model Next.js Route Handler API here.

See the full code for the Turbo model Next.js Route Handler API here.

See the full code of the app page interface that makes the call to our API here.


Testing it out

Feel free to try out this ChatGPT chatbot on my portfolio at https://albac.dev.

[Image: OpenAI ChatGPT]


Conclusion

AI-powered text generation opens up exciting possibilities for automating content creation, enhancing customer support, and aiding creative writing. By combining the power of Next.js and the OpenAI API, developers can leverage AI-generated text in real-time, creating dynamic and interactive web applications.

However, it is crucial to use AI responsibly and ensure human oversight. While AI can assist in generating content, human intervention is essential to verify accuracy, maintain ethical standards, and provide a personalized touch. AI should be treated as a tool that complements human efforts rather than replacing them entirely.

With the Next.js framework and the OpenAI API, developers can unlock the potential of AI-powered text generation and build innovative applications that cater to various industries and user needs.



