Introduction
Once a fantasy, Generative AI has become a genuinely useful tool for many of us. It has boosted our overall productivity, automated repetitive tasks, and, in my case, helped create informative and educational content. That said, GenAI (Generative AI) still has a long way to go and shouldn't be fully relied upon for any given task.
As a developer, you don't need to be an expert in AI or ML to build cool stuff. There are plenty of tools you can use to leverage the power of AI and integrate it into your projects. In this article, I will walk you through LangChain.js, a framework for developing applications powered by Large Language Models (LLMs), and guide you through building a simple Node.js API that takes user input and sends it to the Gemini model to generate a response.
This is not a Node.js API tutorial but rather an introduction to incorporating GenAI into the API. If you're new to Node.js, you can check out this article I've written on the topic.
To get started:
- Initialize a Node.js project and install the dependencies:
npm i @langchain/google-genai dotenv express
- Create a .env file at the root of your project and add the following line:
GOOGLE_API_KEY= _paste your gemini api key here_
_You can get your own API key for free here._
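Under the hood, dotenv reads that file at startup and copies each KEY=value pair onto process.env. A simplified sketch of what `import 'dotenv/config'` does (parseDotenv is my own name for illustration, not dotenv's actual API):

```javascript
// Simplified version of dotenv's parsing step: turn the file's text
// into an object of key/value pairs, skipping comments and blank lines.
function parseDotenv(text) {
  const env = {};
  for (const line of text.split(/\r?\n/)) {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (match) env[match[1]] = match[2];
  }
  return env;
}
```

The real dotenv then assigns each pair onto process.env, which is why `process.env.GOOGLE_API_KEY` is available to the model constructor later on.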
Here's a simple Node.js API that receives a user's prompt and returns a response; the full code is shown further down.
This code sets up an Express server that uses the Google Generative AI model to handle POST requests at the /api/prompts endpoint. It receives a prompt from the request body, generates a response using the AI model, and sends the response back to the client. If an error occurs, it sends a 500 status with the error message. The API key and model are configured via environment variables.
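The control flow of that handler can be exercised in isolation, without Express or Gemini, by stubbing the moving parts. Everything below (the echo-style generateResponse and the minimal res object) is a stand-in of my own, not the real implementation:

```javascript
// Stand-in for the real generateResponse; the real one calls Gemini.
async function generateResponse(prompt) {
  if (!prompt) throw new Error("prompt is required");
  return `echo: ${prompt}`;
}

// Same try/catch shape as the /api/prompts handler in the full listing.
async function handlePrompt(req, res) {
  const { prompt } = req.body;
  try {
    const response = await generateResponse(prompt);
    res.status(200).json({ response });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
}

// Just enough of Express's res object to observe what the handler sends.
function makeRes() {
  const sent = {};
  return {
    status(code) { sent.code = code; return this; },
    json(body) { sent.body = body; },
    sent,
  };
}
```

A successful request ends with status(200) and a { response } body; a thrown error is caught and turned into status(500) with an { error } body, which is exactly the contract the client sees.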
Now let's implement the generateResponse function.
The generateResponse function asynchronously invokes the AI model with a given prompt, then logs and returns the generated response content. If an error occurs, it logs the error and returns the error message.
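Its behavior can be dry-run without calling Gemini at all by passing in a stub model; the stubs below only mimic the shape of model.invoke()'s resolved value and are not part of LangChain.js:

```javascript
// generateResponse as in the listing, but with the model passed in as a
// parameter so a stub can stand in for ChatGoogleGenerativeAI.
async function generateResponse(model, prompt) {
  try {
    const response = await model.invoke(prompt);
    return response.content;
  } catch (error) {
    console.error(error.message);
    return error.message;
  }
}

// Stubs mimicking the shape of model.invoke()'s resolved value.
const okModel = { invoke: async (prompt) => ({ content: `answer to: ${prompt}` }) };
const failingModel = { invoke: async () => { throw new Error("quota exceeded"); } };
```

One design note: because generateResponse returns error.message instead of rethrowing, the route's catch block never fires on model failures, so the client receives a 200 whose body is the error text. Rethrowing here would surface a proper 500 instead.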
Here's the entire code:
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import "dotenv/config";
import express from "express";

// The model reads GOOGLE_API_KEY from the environment (loaded by dotenv).
const model = new ChatGoogleGenerativeAI({
  model: "gemini-pro",
  maxOutputTokens: 2048,
});

const app = express();
app.use(express.json());

// Receives { prompt } in the JSON body and returns { response }.
app.post("/api/prompts", async (req, res) => {
  const { prompt } = req.body;
  try {
    const response = await generateResponse(prompt);
    res.status(200).json({ response });
  } catch (error) {
    console.error(error);
    res.status(500).json({ error: error.message });
  }
});

// Sends the prompt to Gemini and returns the text of the reply.
async function generateResponse(prompt) {
  try {
    const response = await model.invoke(prompt);
    console.log(response.content);
    return response.content;
  } catch (error) {
    console.error(error);
    return error.message;
  }
}

app.listen(4000, () => {
  console.log("SERVER RUNNING ON PORT:4000");
});
Let's test it out in Postman. My server is running on port 4000, so if I send a POST request with a prompt, I should get back an answer from the Gemini model.
Here's the response:
Conclusion
Generative AI has transitioned from fantasy to a valuable tool, enhancing productivity, automating repetitive tasks, and generating informative content. However, while GenAI is powerful, it shouldn't be solely relied upon for any task. As developers, we can leverage frameworks like langchainjs to integrate AI into our projects without being AI or ML experts. This tutorial demonstrated how to build a Node.js API that interacts with the Gemini model to generate responses based on user input.
Now that you have seen how to integrate Generative AI into a Node.js application, why not try it yourself? Start by setting up your project, and experiment with different prompts to see the varied responses the AI can generate. Don’t forget to share your experiences and projects with the community!
Subscribe to my newsletter to receive detailed tutorials every week and stay updated.