The world of large language models is evolving at a pace that's difficult to keep up with. Just about every day something new and innovative comes along and gradually closes the gap to an eventual Skynet takeover. What better thing to do in the meantime than work to understand how to create applications around these models?
In this post we'll touch on Langchain, a new library for harnessing the power of large language models (LLMs) and extending their abilities beyond asking them what to make for dinner.
I'll pause here a moment to say the following... This is by no means an advanced tutorial. I've tried to be as prescriptive as possible so that someone with some pretty basic programming experience can get started. There are some pretty amazing applications out there that people are building, but this stuff is the first piece of the puzzle.
So, let's crack those knuckles, pour an unnecessarily large cup of coffee, and prepare to dive into the thrillingly dry world of coding, where we'll start by summoning a script that connects us to the OpenAI API. Trust me, it's more exciting than it sounds!
What is Langchain?
Described in its documentation here, Langchain is an emerging library available in both Python and JavaScript that extends the functionality of LLMs. Specifically, it aims to allow models to connect to large datasets and also to be agentic (that is, interact with other systems). It does this, as the name suggests, by chaining together modules, each with different functionality.
Why is this powerful? Well, a lot of LLMs don't allow us to customise or focus the knowledge base they are trained on, and there are also limits on the size of the context that can be fed in to inform responses. Langchain provides a series of modules that enables us to tackle these challenges. But we're getting ahead of ourselves, let's start off really simple...
What we'll cover
The inspiration for this post (as a fairly new coder myself and new to the world of LLMs) is to describe a fairly simple use case: we'll set up a Next JS project and start off by running a basic script via the CLI that sends a prompt to the OpenAI API. We'll cover:
- Setting up a Next JS project.
- TS-Node for running scripts at the command line.
- The Langchain prompt module.
- Setting up and running a simple script.
Getting Started with Next JS
Next.js is an open-source, React-based framework for building server-side rendered and static web applications. Developed by Vercel, it's known for its hybrid static and server rendering, pre-rendering, and automatic code splitting capabilities, which enable fast, optimized, and scalable applications. With out-of-the-box support for TypeScript, easy data-fetching methods, built-in CSS and Sass, and a robust routing system, Next.js provides developers with a comprehensive toolkit for crafting modern web applications. Whether you're building a simple website or a complex, data-rich web application, Next.js provides a streamlined, developer-friendly environment that simplifies the process and enhances performance.
We'll use the ubiquitous Integrated Development Environment (IDE) Visual Studio (VS) Code to get started. I'll assume you have the latest version of Node JS working, but if not, get started here.
Navigate to the folder you want to work in and enter the following per the Next JS docs. Just accept all the defaults for the moment. They'll do for now (you'll note that it now includes the non-experimental app directory for routing and also Tailwind CSS, which is pretty cool!!).
C:\Apps> npx create-next-app@latest
Next we'll navigate into our app folder (I've called mine langchain-starter) and install both the langchain and ts-node libraries.
C:\Apps>cd langchain-starter
C:\Apps\langchain-starter> npm install --save langchain
C:\Apps\langchain-starter> npm install --save-dev ts-node
We've talked about Langchain already, but the ts-node package provides TypeScript execution and a REPL for Node.js.
Now that you've got your Next app set up, if you enter
C:\Apps\langchain-starter>npm run dev
You should get the Next JS template available locally. If you want to edit anything on this page you can access page.tsx in the app directory (which lives inside the src folder in the root of your project). We aren't going to worry about this just yet though!!
The tsconfig.json file
In your tsconfig.json file (in the root directory), make sure you change the module setting under compilerOptions to "es6". It should look like this:
{
  "compilerOptions": {
    ...,
    "module": "es6",
    ...
  }
}
OpenAI API Key
In order to interact with the API you'll need to go to the OpenAI site and make an account. If you navigate to your account settings you should see an option to create an API key. Keep this key hidden! There's an excellent write-up here from FreeCodeCamp on API key management. A must-know for new developers.
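As a practical aside (this is just a minimal sketch and my own suggestion, rather than anything the rest of this post depends on): instead of pasting the key straight into your code, you can keep it in a .env.local file in the project root (the default Next JS .gitignore should already exclude it, but double-check) and load it in CLI scripts with the dotenv package (npm install --save-dev dotenv):

// .env.local (project root, never committed)
// OPENAI_API_KEY=sk-your-key-here

// At the top of any script you run with ts-node:
const dotenv = require("dotenv");
dotenv.config({ path: ".env.local" });

// Then read it and pass it in as openAIApiKey instead of hard-coding it
const apiKey = process.env.OPENAI_API_KEY;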
Creating our Script
Navigate into the src directory from the root, then make a new directory and a file to put our code in.
C:\Apps\langchain-starter>cd src
C:\Apps\langchain-starter\src>mkdir utils
C:\Apps\langchain-starter\src>cd utils
C:\Apps\langchain-starter\src\utils>type nul > basic_call.ts
(The type nul > trick just creates an empty basic_call.ts file from the command prompt.)
You can also do all of the above from the VS Code explorer side bar but the CLI is king so we'll do our best to use it where possible!
Ok, so now let's open our basic_call.ts file in the VS Code editor and enter the code below:
const { OpenAI } = require("langchain/llms/openai");

const run = async () => {
  // Instantiate the model wrapper with our API key and a temperature of 0
  const model = new OpenAI({
    openAIApiKey: "<YOUR_API_KEY>", // the key you created earlier
    temperature: 0,
  });
  try {
    // Send our prompt to the OpenAI API and wait for the completion
    const res = await model.call(
      "How long would it take to drive from Sydney to Perth?"
    );
    console.log({ res });
  } catch (e) {
    console.log(e);
  }
};

// execute run() via an anonymous async function
(async () => {
  await run();
})();
Ok, for some this may seem pretty straightforward but if you're new like me it can seem overwhelming so we'll walk through it step by step.
The first line pulls in the OpenAI class from the Langchain library.
Next, we define an asynchronous function run() (asynchronous because it needs to wait on the response from our external API call). Within this function, we instantiate a new OpenAI object with an API key and a "temperature" parameter. You will need to enter the key we grabbed earlier here.
I like to think of the temperature parameter as how imaginative OpenAI will be. A value of 0 means it's deterministic (as in, we'll get the same answer for the same question every time, which is boring in creative circumstances). A value close to 1 means OpenAI will choose more varied completions, which tend to be more creative.
Next, we're using a try-catch statement to handle any errors that may occur during the API call. We make a request to the OpenAI API with a prompt ("How long would it take to drive from Sydney to Perth?") using the call method of our OpenAI object. This returns a promise, so we use the await keyword to wait for it to resolve. If the API call is successful, we log the response; if it fails, we catch the error and log that instead.
Finally, we immediately invoke the run() function inside an anonymous asynchronous function. This is necessary because the await keyword can only be used inside an async function, and we want to ensure that the run() function has finished executing before anything else happens.
That's all a bit much isn't it? So...
TL;DR
We instantiate a pre-built class that lets us pass in some parameters, then call it with our question, making sure we catch any errors.
Ok!! So how do we actually run this? Well that's where our CLI and ts-node come in.
Navigate back to your root directory and run the following (we go through npx because ts-node was installed locally as a dev dependency rather than globally):
C:\Apps\langchain-starter> npx ts-node ./src/utils/basic_call.ts
It should take a few seconds but you'll get a response at the command line something like this...
{
  res: '\n' +
    '\n' +
    'The driving time from Sydney to Perth is approximately 4 days and 3 nights.'
}
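As a small optional convenience (a sketch only, the script name "basic" is made up here), you could add an entry to the scripts section of your package.json so you don't have to type the path each time. npm puts the locally installed ts-node on the path for scripts, so no global install is needed:

"scripts": {
  ...,
  "basic": "ts-node ./src/utils/basic_call.ts"
}

Then it's simply:

C:\Apps\langchain-starter> npm run basic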
You can change your prompt and the temperature to see the different responses. Check out the Langchain documentation for other parameters that can be passed in.
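For example, here's a rough sketch of a more "creative" set-up. The modelName and maxTokens options are my assumptions about what the constructor accepts at the time of writing, so do check the docs if the option names have changed in your version:

const { OpenAI } = require("langchain/llms/openai");

(async () => {
  const model = new OpenAI({
    openAIApiKey: "<YOUR_API_KEY>",
    temperature: 0.9, // closer to 1 = more varied, more "creative" completions
    modelName: "text-davinci-003", // assumed option for choosing the model
    maxTokens: 256, // assumed option capping the length of the completion
  });
  const res = await model.call("Write a haiku about driving from Sydney to Perth.");
  console.log({ res });
})();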
Is That It?
This might seem pretty straightforward, and for many out there who are advanced at this stuff, it is. It might seem silly to run something like this at the command line when you can simply go over to ChatGPT and ask it stuff. But if you're just getting started, what I'm hoping to do is continue and show you how we can put the building blocks that Langchain provides together, so you can see how they can be utilised in your own applications!
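To give you a taste of one of those building blocks (the prompt module from the list at the top of this post), here's a minimal sketch using Langchain's PromptTemplate to build a reusable prompt with placeholders instead of a hard-coded question. Treat the exact import path and option names as assumptions to verify against the docs for your version:

const { OpenAI } = require("langchain/llms/openai");
const { PromptTemplate } = require("langchain/prompts");

(async () => {
  // A reusable prompt with two placeholders
  const prompt = new PromptTemplate({
    template: "How long would it take to drive from {from} to {to}?",
    inputVariables: ["from", "to"],
  });

  // Fill in the placeholders to get the final prompt string
  const question = await prompt.format({ from: "Sydney", to: "Melbourne" });

  const model = new OpenAI({ openAIApiKey: "<YOUR_API_KEY>", temperature: 0 });
  const res = await model.call(question);
  console.log({ res });
})();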
Finally
So we've covered a little bit about getting set up with Next JS, TypeScript and Langchain. There are many more ways to extend this functionality and customise our bots to build some cool applications, like Chat PDF, where we can use embeddings to give our bots better context on the things we want them to help us with!
Let me know your ideas and how you're integrating this stuff into your own development journeys.
If you have any questions feel free to get in touch and if you read this far thank you!