Integrating OpenAI's GPT-3 into a Next.js and Go Fiber App

In this article, we will learn how to integrate OpenAI's GPT-3 into a Go backend app and interact with it through a Next.js frontend.


Introduction

AI is the current buzzword in the tech industry. One of the best use cases I have seen is the use of AI to summarize complex text into simple text. This is very useful for students and researchers who have to read a lot of text to get the information they need.

So in this tutorial, we will build a simple web app that will summarize text using OpenAI's GPT-3. We will build the backend using Go and the front end using Next.js. We will also use Tailwind CSS for styling.

Here is the GitHub repo for the project:

Prerequisites

This is a beginner-level tutorial, but you should have at least a basic understanding of the following, as I will not be explaining them in much detail:

  • TypeScript, Next.js, and Tailwind
  • Go and Fiber

You don't need to be an expert in any of these, as I will break down anything I consider complex. You will also need the following to follow along with this tutorial:

  • Node.js installed on your computer
  • Go installed on your computer
  • A text editor (preferably VS Code)
  • A premium OpenAI account

What is OpenAI's GPT-3?

GPT-3 is a language model developed by OpenAI. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a San Francisco-based artificial intelligence research laboratory. It is the successor to GPT-2 and was unveiled in May 2020.

I'm sure you have at least heard of ChatGPT or GitHub Copilot, which are built on models from the same GPT family. You can learn more about GPT-3 here.

Project Setup

First, let's create a new project folder and open it in your editor. Here's how to do that in your terminal:

mkdir ai-summarizer && cd ai-summarizer && code .

Now initialize a new Next.js app inside a new folder named "client" as shown below:

yarn create next-app client

Accept the defaults when prompted.


Now create a new folder named "server" and initialize a new Go module inside the server folder as shown below:

mkdir server && cd server
go mod init github.com/your-username/ai-summarizer/server

Replace "your-username" with your GitHub username. When that is done, run the following command to install the dependencies we will need in the server:

go get -u github.com/gofiber/fiber/v2 github.com/joho/godotenv github.com/sashabaranov/go-openai

The command above will install the Fiber web framework, the godotenv package for loading environment variables from a .env file, and the community-maintained Go OpenAI SDK for interacting with the OpenAI API.
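If the install went smoothly, your server/go.mod should look roughly like this (only the direct requires are shown; the exact versions and any // indirect entries will differ on your machine):

module github.com/your-username/ai-summarizer/server

go 1.21

require (
    github.com/gofiber/fiber/v2 v2.52.0
    github.com/joho/godotenv v1.5.1
    github.com/sashabaranov/go-openai v1.20.0
)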

Next, create a new file named ".env" in the server folder and add the following code:

OPENAI_API_KEY="your-openai-api-key"

NOTE: As of the time of writing, OpenAI no longer supports trial accounts. You will need to upgrade to a paid account to make requests to the API.

Finally, create a new file named "main.go" in the server folder and add the following code:

package main

import (
  "fmt"
)

func main() {
  fmt.Println("Hello World!")
}

Your project structure should look like this:

Project Structure
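In case the screenshot isn't visible, here is the same layout as a trimmed text tree (generated files like node_modules and most of the Next.js boilerplate are omitted):

ai-summarizer/
├── client/          # Next.js frontend (created with create next-app)
│   ├── app/
│   │   ├── globals.css
│   │   └── page.tsx
│   └── package.json
└── server/          # Go Fiber backend
    ├── .env
    ├── go.mod
    ├── go.sum
    └── main.go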

We'll come back to the server later to modify the main.go file. For now, let's set up the frontend.

Building the Frontend

We won't be doing anything overly complex in the frontend. All we need is a simple form that collects the text to be summarized, a button that submits it to our backend, and a section to display the summarized text. Copy the following code into your app/page.tsx file:

"use client";

import { FormEvent, useState } from "react";

export default function Home() {
  const [summary, setSummary] = useState("");

  const handleFormSubmit = async (event: FormEvent) => {
    event.preventDefault();
    const form = event.target as HTMLFormElement;

    // Grab the value of the textarea (named "text" below)
    const formData = {
      content: form.text.value as string,
    };

    // Send the text to the Go backend for summarization
    const response = await fetch("http://localhost:8000/summary", {
      body: JSON.stringify(formData),
      headers: {
        "Content-Type": "application/json",
      },
      method: "POST",
    });

    // Display whatever the backend sends back
    const data = await response.json();
    setSummary(data.summary);
  };

  return (
    <div className="flex min-h-screen min-w-fit items-center justify-center">
      <div className="flex h-auto w-auto flex-col rounded-xl bg-white p-4 dark:bg-black">
        <header className="m-4 flex justify-center">
          <h1 className="mx-8 text-4xl font-bold">AI Summarizer</h1>
        </header>
        <main className="flex flex-col items-center justify-between p-12">
          <form
            onSubmit={handleFormSubmit}
            className="flex flex-col items-center justify-between"
          >
            <textarea
              name="text"
              className="w-96 h-96 p-4 border-2 border-gray-300 rounded-xl dark:text-black"
              placeholder="Enter text here..."
              required
            ></textarea>
            <button className="mt-4 w-[100%] h-12 bg-yellow-500 text-white font-bold rounded-xl dark:bg-blue-500">
              Simplify
            </button>
          </form>
          {summary !== "" ? (
            <p className="mt-4 text-center font-bold ">{summary}</p>
          ) : null}
        </main>
        <footer>
          <p className="text-center text-gray-500">
            Powered by <a href="https://platform.openai.com/overview">OpenAI</a>
          </p>
        </footer>
      </div>
    </div>
  );
}

All the code above does is create a form with a textarea and a button. When the form is submitted, the text is sent to the backend, which summarizes it using the OpenAI API and sends back a response that is then displayed below the form.
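For reference, this is the contract between the two sides: the form sends a JSON body with a single content field, and the component expects a JSON object with a summary field back (matching the backend we'll build next):

// Request body sent to POST http://localhost:8000/summary
{ "content": "The long text you want simplified..." }

// Response the frontend reads
{ "statusCode": 200, "summary": "The simplified version..." }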

You can check out the Tailwind CSS documentation to learn more about the utility classes used in the code above.

Next, modify the app/globals.css file to add some global styles to the app:

@tailwind base;
@tailwind components;
@tailwind utilities;

:root {
  --foreground-rgb: 0, 0, 0;
  --background-start-rgb: 230, 194, 41;
  --background-end-rgb: 241, 113, 5;
}

@media (prefers-color-scheme: dark) {
  :root {
    --foreground-rgb: 255, 255, 255;
    --background-start-rgb: 102, 16, 242;
    --background-end-rgb: 26, 143, 227;
  }
}

body {
  color: rgb(var(--foreground-rgb));
  background: linear-gradient(
      to bottom right,
      transparent,
      rgb(var(--background-end-rgb))
    ) rgb(var(--background-start-rgb));
}

You can now start the frontend by running the following command from the client folder:

yarn dev

Although the frontend is up and running, it's not very useful because we haven't implemented the backend yet. Let's do that next.

Building the Backend

Navigate to your server folder and open the main.go file. Replace the contents of the file with the following code:

package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "os"

    "github.com/joho/godotenv"
    openai "github.com/sashabaranov/go-openai"

    "github.com/gofiber/fiber/v2"
    "github.com/gofiber/fiber/v2/middleware/cors"
    "github.com/gofiber/fiber/v2/middleware/logger"
)

func init() {
    if err := godotenv.Load(); err != nil {
        log.Fatalln(err)
    }
}

func main() {
    app := fiber.New()

    app.Use(cors.New(cors.Config{
        AllowOrigins: "http://localhost:3000",
        AllowHeaders: "Origin, Content-Type, Accept",
    }))

    app.Use(logger.New())

    app.Post("/summary", getSummary)

    log.Fatal(app.Listen(":8000"))
}

During the initialization of the app, we load the environment variables from the .env file using the godotenv package.
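If you like, you can also make the app fail fast when the API key itself is missing from the environment, instead of finding out on the first request to OpenAI. A small, optional tweak to the init function:

func init() {
    if err := godotenv.Load(); err != nil {
        log.Fatalln(err)
    }

    // Optional: stop early if the key is missing so we don't
    // only discover it as an authentication error later
    if os.Getenv("OPENAI_API_KEY") == "" {
        log.Fatalln("OPENAI_API_KEY is not set")
    }
}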

Then the main function creates a new Fiber app and adds the CORS and logger middleware to the app. It also creates a POST route at the /summary endpoint and calls the getSummary function when the route is hit.

Next, add the following code to the main.go file to create the getSummary function:

// Underneath the init function
func getSummary(c *fiber.Ctx) error {
    // Create a new OpenAI client
    client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

    var jsonData map[string]interface{}

    // Parse the JSON request body into a map
    if err := c.BodyParser(&jsonData); err != nil {
        return err
    }

    // Make sure the "content" field is present and is a string so an
    // unchecked type assertion can't panic on a bad request
    content, ok := jsonData["content"].(string)
    if !ok {
        return c.Status(http.StatusBadRequest).JSON(fiber.Map{
            "statusCode": http.StatusBadRequest,
            "summary":    "The request body must contain a \"content\" string",
        })
    }

    // Call the CreateChatCompletion method with the client
    resp, err := client.CreateChatCompletion(
        context.Background(),

        // Create a new ChatCompletionRequest with the
        // GPT-3.5 model and a single message
        openai.ChatCompletionRequest{
            Model: openai.GPT3Dot5Turbo,
            Messages: []openai.ChatCompletionMessage{
                {
                    Role:    openai.ChatMessageRoleUser,
                    Content: content,
                },
            },
        },
    )

    // Log the error and return a 500 status code if the request failed
    if err != nil {
        fmt.Printf("ChatCompletion error: %v\n", err)

        return c.Status(http.StatusInternalServerError).JSON(
            fiber.Map{
                "statusCode": http.StatusInternalServerError,
                "summary":    "Something went wrong...",
            },
        )
    }

    // Return the first choice from the response
    return c.JSON(
        fiber.Map{
            "statusCode": http.StatusOK,
            "summary":    resp.Choices[0].Message.Content,
        },
    )
}

First, we create a new OpenAI client using the OPENAI_API_KEY environment variable. Then we parse the request body into a map, pull out the content string, and call the CreateChatCompletion method with the client. This method takes a ChatCompletionRequest object as its second argument, which contains the model to use and the messages to send to the model.

Finally, we return the first choice from the response as a JSON object with the status code 200.
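One thing to be aware of: as written, the request forwards the text as a single user message with no explicit instruction, so the model may respond to the text rather than summarize it. If you notice that, one option is to prepend a system message that tells the model what to do. Here is a minimal sketch of what the CreateChatCompletion call could look like with that change (the instruction wording is just an example, so tweak it to taste):

    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: openai.GPT3Dot5Turbo,
            Messages: []openai.ChatCompletionMessage{
                {
                    // Example instruction, not part of the original code
                    Role:    openai.ChatMessageRoleSystem,
                    Content: "Summarize the following text in simple, plain language.",
                },
                {
                    Role:    openai.ChatMessageRoleUser,
                    Content: content,
                },
            },
        },
    )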

Now that we have created the backend, we can start it by running the following command from the server folder:

go run main.go

You should now be able to type in a message in the frontend and get a summary of the message back from the backend.
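If you'd rather test the backend directly without going through the UI, a quick curl request against the running server works too:

curl -X POST http://localhost:8000/summary \
  -H "Content-Type: application/json" \
  -d '{"content": "Paste the text you want summarized here."}'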

Completed App

Conclusion

We merely scratched the surface of what you can do with the OpenAI API in this tutorial. There are several other methods you can use to interact with the OpenAI API. You can check out the Go client documentation to learn more about them.

I might also write a tutorial on how to create a customer support chatbot using the OpenAI API with Rust in the future. Let me know in the comments if you would be interested in that.

Resources and References

You can check out some of the resources listed below to learn more about OpenAI and the technologies used in this tutorial.
