Moeki Kawakami

Achieving streaming output like the official ChatGPT

In this example we use LangChain, though the OpenAI API alone could likely achieve the same thing.

Welcome to LangChain | 🦜️🔗 Langchain

First, the server side.

import { ChatOpenAI } from "langchain/chat_models"
import { HumanChatMessage } from "langchain/schema"
import { CallbackManager } from "langchain/callbacks"

const api = async (req, res) => {
  if (req.method !== "POST") return
  const input = req.body.input

  const chat = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
    streaming: true,
    callbackManager: CallbackManager.fromHandlers({
      async handleLLMNewToken(token: string) {
        // Write each token to the response as soon as it arrives
        res.write(token)
      },
    }),
  })

  await chat.call([new HumanChatMessage(input)])

  res.end()
}

The key is to enable streaming with streaming: true and push each token to the client with write(). In Node.js this works because the response object is a Writable stream, the same interface used for files.

When all the LangChain processing is done, we close the stream with end().
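The write()/end() pattern can be seen in isolation with any Node.js Writable stream. The sketch below (the token list and collecting sink are illustrative, not part of LangChain) simulates handleLLMNewToken firing once per token, the same way the handler above writes to the HTTP response:

```typescript
import { Writable } from "node:stream"

// A Writable that collects whatever is written to it, standing in
// for the HTTP response object in the handler above.
const chunks: string[] = []
const sink = new Writable({
  write(chunk, _encoding, callback) {
    chunks.push(chunk.toString()) // each token arrives as its own chunk
    callback()
  },
})

// Simulate the token callback firing three times (hypothetical tokens).
for (const token of ["Hello", ", ", "world"]) {
  sink.write(token)
}
sink.end() // equivalent to res.end() once the chain finishes

console.log(chunks.join("")) // → Hello, world
```

The consumer sees the data chunk by chunk rather than as one final payload, which is exactly what makes the typewriter-style output possible.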

Next is the client side.

const reply = async (input) => {
  const decoder = new TextDecoder()

  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      input,
    }),
  })
  const reader = res.body?.getReader()
  if (!reader) return ""

  let output = ""

  while (true) {
    const { value, done } = await reader.read()
    if (done) break
    // stream: true buffers any multi-byte character split across chunks
    output += decoder.decode(value, { stream: true })
  }

  return output
}

The key is reading the response body incrementally with getReader(). The chunks arrive as raw bytes, so we decode them with TextDecoder as they come in.
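One detail worth noting: passing { stream: true } to decode() matters because a multi-byte UTF-8 character can be split across two chunks. A small sketch with hand-built byte chunks (the example bytes are illustrative):

```typescript
const decoder = new TextDecoder()

// "café" in UTF-8; "é" is two bytes (0xC3 0xA9), split across chunks here.
const chunk1 = new Uint8Array([0x63, 0x61, 0x66, 0xc3]) // "caf" + first byte of "é"
const chunk2 = new Uint8Array([0xa9]) // second byte of "é"

let output = ""
// With stream: true, the decoder holds the incomplete byte until the
// next chunk completes the character instead of emitting a replacement char.
output += decoder.decode(chunk1, { stream: true })
output += decoder.decode(chunk2, { stream: true })

console.log(output) // → café
```

Without the option, each decode() call would treat its input as complete and the split character would be mangled.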
