Introduction
LLM applications are becoming increasingly popular. However, there are many LLM models, each with its own differences, and handling streaming output can be complex, especially for front-end developers who are new to it.
Thanks to the AI SDK developed by Vercel, implementing LLM chat with streaming output in Next.js has become incredibly easy. In this step-by-step tutorial, I'll show you how to integrate Google Gemini into your front-end project.
Create a Google AI Studio Account
Head to Google AI Studio and sign up. After you log in, you'll find the "Get API Key" button on the left; click it and create an API key. This key will be used later.
Create a New Next.js Project
To create a new Next.js project, run `npx create-next-app@latest your-new-project`. Make sure you choose the App Router. After that, run `npm run dev` and open `localhost:3000` in your preferred browser to verify that the new project is set up correctly.
Next, you need to install the AI SDK:
```shell
pnpm install ai
```
The AI SDK uses a pluggable provider design, which even lets you implement your own LLM provider. For now, we only need to install the official Google provider.
```shell
pnpm install @ai-sdk/google
```
Set Your API Key in Your Local Environment
Next.js integrates well with environment variables. Simply create a file named `.env.local` in the root folder of your project:

```
GOOGLE_GENERATIVE_AI_API_KEY={your API Key}
```
Afterwards, the AI SDK will automatically load your key whenever you use the Google provider to generate text.
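If you prefer to fail fast when the key is missing (rather than discover it on the first request to the model), a small startup guard can check the variable. This is just a sketch; the helper name `requireApiKey` is my own, not part of the AI SDK:

```typescript
// Fail fast if the Gemini key was not loaded from .env.local.
// Pass process.env (or any env-shaped object) when calling it.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.GOOGLE_GENERATIVE_AI_API_KEY;
  if (!key) {
    throw new Error(
      "GOOGLE_GENERATIVE_AI_API_KEY is not set; add it to .env.local"
    );
  }
  return key;
}
```

Calling `requireApiKey(process.env)` once when the server starts turns a confusing mid-request failure into an immediate, clearly labeled error.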
Server-Side Code
Now that you've gathered all the prerequisites for your LLM application, create a new file named `actions.ts` in the `app` folder:
```typescript
"use server";

import { google } from "@ai-sdk/google";
import { streamText } from "ai";
import { createStreamableValue } from "ai/rsc";

export interface Message {
  role: "user" | "assistant";
  content: string;
}

export async function continueConversation(history: Message[]) {
  const stream = createStreamableValue();
  const model = google("models/gemini-1.5-pro-latest");

  // Fire and forget: keep pushing tokens into the streamable value
  // after this function has already returned to the client.
  (async () => {
    const { textStream } = await streamText({
      model: model,
      messages: history,
    });
    for await (const text of textStream) {
      stream.update(text);
    }
    stream.done();
  })();

  return {
    messages: history,
    newMessage: stream.value,
  };
}
```
Let me explain a few parts of this code:

- `interface Message` is a shared interface that establishes the structure of a message. It includes two properties: `role` (which can be either `"user"` or `"assistant"`) and `content` (the actual text of the message).
- The `continueConversation` function is a server action that uses the history of the conversation to generate the assistant's response. It calls Google's Gemini model to produce a streaming text output.
- The `streamText` function is part of the AI SDK; it creates a text stream that is updated with the assistant's response as it is generated.
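The `for await` consumption pattern used above is easiest to see in isolation. Here is a minimal sketch in which a plain async generator stands in for the SDK's `textStream` (the chunk values are made up for illustration):

```typescript
// A plain async generator standing in for the SDK's textStream.
// A real stream yields model-generated deltas instead of these chunks.
async function* fakeTextStream(): AsyncGenerator<string> {
  for (const chunk of ["Hello", ", ", "world", "!"]) {
    yield chunk;
  }
}

// Consume the stream the same way continueConversation does:
// append each delta to the accumulated text as it arrives.
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const delta of stream) {
    text += delta;
  }
  return text;
}
```

In `continueConversation`, each iteration calls `stream.update(text)` instead of merely accumulating, which is what pushes the partial output to the client.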
Client-Side Code
Next, replace the contents of `page.tsx` with the following code:
```tsx
"use client";

import { useState } from "react";
import { continueConversation, Message } from "./actions";
import { readStreamableValue } from "ai/rsc";

export default function Home() {
  const [conversation, setConversation] = useState<Message[]>([]);
  const [input, setInput] = useState<string>("");

  return (
    <div>
      <div>
        {conversation.map((message, index) => (
          <div key={index}>
            {message.role}: {message.content}
          </div>
        ))}
      </div>
      <div>
        <input
          type="text"
          value={input}
          onChange={(event) => {
            setInput(event.target.value);
          }}
        />
        <button
          onClick={async () => {
            const { messages, newMessage } = await continueConversation([
              ...conversation,
              { role: "user", content: input },
            ]);

            let textContent = "";
            for await (const delta of readStreamableValue(newMessage)) {
              textContent = `${textContent}${delta}`;
              setConversation([
                ...messages,
                { role: "assistant", content: textContent },
              ]);
            }
          }}
        >
          Send Message
        </button>
      </div>
    </div>
  );
}
```
This is a very simple UI, and you can now chat with the LLM. A few important pieces:

- The `input` field captures the user's input. It is controlled by a React state variable that is updated every time the input changes.
- The `button` has an `onClick` handler that triggers the `continueConversation` function. This function takes the current conversation history, appends the user's new message, and waits for the assistant's response.
- The `conversation` array holds the history of the conversation. Each message is displayed on screen, and new messages are appended at the end. By using `readStreamableValue` from the AI SDK, we can read the streaming value from the server action and update the conversation in real time.
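To make the update loop concrete, here is the same accumulation logic isolated from React: on every delta, the conversation is rebuilt from the server's message list plus one assistant message containing the text accumulated so far. The helper name `applyDeltas` and the sample deltas are mine, for illustration only:

```typescript
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Mirror the onClick loop: each delta extends the accumulated text,
// and the conversation is rebuilt with a single, growing assistant reply.
function applyDeltas(history: Message[], deltas: string[]): Message[] {
  let textContent = "";
  let conversation = history;
  for (const delta of deltas) {
    textContent += delta;
    conversation = [...history, { role: "assistant", content: textContent }];
  }
  return conversation;
}
```

Because each pass replaces the previous assistant message rather than appending a new one, the UI shows one reply that grows as tokens arrive.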
Let's Test Now
I type "who are you" into the input field.
Here is the output of Google Gemini. You'll notice that the output is printed in a streaming manner.
References
- Documentation for the AI SDK: https://sdk.vercel.ai/docs/introduction
- Google AI Studio: https://ai.google.dev/aistudio
Conclusion
In this post, I've shown how to integrate Google Gemini with streaming output into a Next.js front end using the AI SDK.
If you're interested in seeing Google Gemini in action, check out these products that have successfully implemented it:
- AI Math Solver - A web app that helps users solve math problems. Learn more: AIMathSolver
Have you used Google Gemini in your projects? Share your experiences in the comments below!