Liam Stone

Posted on August 24, 2023 • Originally published at boldercloud.com.au

GPT-3.5 Fine Tuning: Unlock the True Potential with This Comprehensive Guide

In today’s digital age, the art of conversational AI and Large Language Models (LLMs) is reaching new heights, and the GPT-4 model (and its big brother ChatGPT) stands as a testament to our progress. This remarkable iteration from OpenAI, part of the illustrious ChatGPT lineage, showcases the cumulative learnings from its predecessors and pushes the envelope even further.

As with all groundbreaking technologies (particularly in AI), there’s always room for improvement. This brings us to the thrilling anticipation of fine-tuning for GPT-3.5. This update promises to amplify the already formidable prowess of GPT-3.5 Turbo, ushering in a new era where customization meets raw computational power. With fine-tuning, GPT-3.5 is expected to achieve a heightened sense of adaptability, catering to more specific tasks while retaining the broad knowledge base it is renowned for.

As we delve further into the intricacies of fine-tuning, remember that our ultimate goal is to make AI more accessible, efficient, and aligned with your unique needs. Stay tuned for an enlightening journey ahead!

Why Fine Tuning?

Fine-tuning represents an answer to some of the most pressing demands of the modern developer community. Let’s delve deeper into the whys of this advancement.

The Quest for Customization: Developers’ Call for Model Adaptability

In the dynamic landscape of AI, one size doesn’t fit all. Developers have been on a relentless pursuit for models that not only understand human language at its core but also adapt to unique requirements and contexts. To date this has mostly centred around Retrieval Augmented Generation (RAG). This involves storing and retrieving a custom dataset to inform the context of interactions. Imagine, instead, a language model that could adjust its tone to the playful whimsy of a toy brand or the stern professionalism of a legal firm. That’s the power of customization. Fine-tuning answers this call, offering developers the tools they need to mold AI in alignment with their distinct visions and goals.

Early Results: How Fine-Tuning GPT-3.5 Turbo Matches or Surpasses Base GPT-4 Capabilities in Specific Tasks

As impressive as the base GPT-4 model is, early tests have unveiled a tantalizing possibility. When fine-tuned, the GPT-3.5 Turbo can rise to meet, and occasionally even exceed, the base GPT-4’s capabilities in certain narrow tasks. These results are a testament to the potential of fine-tuning. It’s not merely about refining the model; it’s about honing it to such an extent that it achieves performance metrics previously considered unattainable for its tier.

Data Privacy Assurance: Emphasizing OpenAI’s Stance on Data Ownership and Usage

In an era where data privacy concerns are paramount, OpenAI’s stance on the matter is clear and reassuring. With fine-tuning, every piece of data sent in and out of the API remains the sole property of the customer. OpenAI does not utilize this data to train other models, ensuring a firm commitment to user privacy. This data-centric integrity ensures that while developers harness the power of fine-tuning, they can do so with the peace of mind that their data remains uncompromised.

The Benefits and Use Cases of Fine Tuning

Delving into the world of AI often feels like navigating a vast ocean of possibilities. And while the immense power and potential of models like GPT-3.5 Turbo are evident, fine-tuning emerges as a beacon that illuminates specific paths within this expansive realm. Let’s dive into the tangible benefits and real-world use cases of fine-tuning, revealing how it’s not just a boon for developers, but a game-changer for businesses and users alike.

Improved Steerability: Achieving Better Control and Response Consistency

  • Why it matters: Picture steering a massive ship. The greater control you have over its direction, the smoother and safer the journey. Similarly, in the realm of AI, steerability refers to the ability to guide the model’s responses more precisely, ensuring it behaves as intended.
  • Real-world application — German Language Prompt: Consider an e-commerce platform catering to a German audience. Every query, from product details to customer support, needs a prompt and accurate response in German. With fine-tuning, developers can ensure that whenever the model is prompted in German, it consistently replies in the same language, enhancing user experience and maintaining linguistic coherence (see the sample training line below).
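
To make this concrete, here is the kind of JSONL training line that would teach the model to stay in German (an illustrative example, not taken from any real dataset):

{ "messages": [ { "role": "system", "content": "You are a helpful assistant for a German e-commerce store. Always answer in German." }, { "role": "user", "content": "Wie lange dauert der Versand?" }, { "role": "assistant", "content": "Der Standardversand dauert in der Regel 2 bis 4 Werktage." } ] }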

Reliable Output Formatting: Importance in Applications and Practical Examples

  • Why it matters: Consistency is key, especially when it comes to tasks that demand specific formatting. Without it, integrating AI outputs into existing systems can be challenging, leading to inefficiencies and potential errors.
  • Real-world application — JSON Snippet Creation: Imagine a developer tool that assists in crafting API calls. Users provide a rough idea or request, and the tool generates a well-structured JSON snippet. Fine-tuning ensures that the model’s outputs are formatted consistently, making the conversion from user prompts to usable JSON snippets seamless. This reliable output reduces the overhead for developers and accelerates integration processes (a sample training line is sketched below).
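
As a sketch, a training example that nudges the model towards strict JSON output might look like this (the field names and values here are hypothetical):

{ "messages": [ { "role": "system", "content": "You convert plain-English requests into JSON API call snippets. Respond with valid JSON only." }, { "role": "user", "content": "Get the 10 most recent orders." }, { "role": "assistant", "content": "{ \"method\": \"GET\", \"path\": \"/orders\", \"params\": { \"limit\": 10, \"sort\": \"created_at:desc\" } }" } ] }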

Crafting a Custom Tone: Aligning the Model’s Output with Brand Voices

  • Why it matters: A brand’s voice is its identity. It’s how businesses communicate their values, ethos, and personality. In an age where AI is becoming an integral part of customer interactions, ensuring that the AI’s tone aligns with the brand’s voice is crucial for maintaining brand integrity and resonance.
  • Real-world application: Take a luxury watch brand known for its legacy and sophisticated elegance. Their communication is often imbued with a sense of timeless grace. Through fine-tuning, the brand can ensure that any AI-driven interaction, be it chat support, email responses, or product descriptions, echoes this refined tone, offering customers a consistent and immersive brand experience.

In essence, fine-tuning is akin to sculpting. It begins with a robust, powerful model and, through meticulous adjustments, shapes it to fit specific needs and visions. The above benefits and use cases underscore the transformative potential of fine-tuning, bridging the gap between generalized AI capabilities and specialized, high-impact applications.

Fine Tuning Your Own Model

So now that we’ve looked at all the benefits, let’s get stuck into fine-tuning our own model. This tutorial will be conducted in NodeJS, so you’ll need to have that installed; you can download and install the package here. The code for everything that we work through is available at the following Github repository.

I’ll be working in VS Code but have provided the code snippets below, which should work no matter what IDE you are working in.

Configuration

Let’s start off by making a directory to work in, changing into that directory and initialising a new Node JS project. We’ll then open VS Code in this directory (you should just see a package.json file) and create a new index.js, .env and our training file (we’ll come back to the detail on these shortly). Note that the New-Item commands below are PowerShell; on macOS or Linux you can use touch instead:

mkdir gpt-3.5-fine-tuning 
cd gpt-3.5-fine-tuning 
npm init -y 
code . 
New-Item -Path .\index.js -ItemType File 
New-Item -Path .\.env -ItemType File 
New-Item -Path .\style-and-tone.jsonl -ItemType File

The .env file is where we store the API key needed to access the API and train our custom model. You’ll need to retrieve the key from the API keys section in your OpenAI account settings.

Copy the key into .env as follows:

OPENAI_API_KEY=<YOUR KEY GOES HERE>

Now we are going to install some dependencies:

npm i openai dotenv

And update our package.json file to include the type “module”, which lets us use ES module imports and top-level await. Your final package.json file should look like this:

{
  "name": "64-gpt-3.5-fine-tuning",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "dotenv": "^16.3.1",
    "openai": "^4.2.0"
  },
  "type": "module"
}

Now let’s look into our style-and-tone.jsonl file. What is happening with this? We want to train our model to respond in a certain way. This might be official, casual, helpful, annoying, sarcastic… the possibilities truly are endless. To train the model in this way, we provide some sample interactions with the ChatBot in JSONL format, one JSON object per line.

There are examples of this in the documentation, but what I did was just use GPT-4 with Code Interpreter to generate a sample file for me. This is the prompt I used:

"I am creating some training data for fine tuning an ai model. 
This is the sample of what one instance of training looks like. 
I would like 10 instances like this. It needs to be in JSONL format. 
Please make it so that my chatbot always answers in the style of Snoop Dogg. 
{ "messages": [ { "role": "system", "content": 
"You are an assistant that occasionally misspells words" }, 
{ "role": "user", "content": "Tell me a story." }, 
{ "role": "assistant", "content": "One day a student went to schoool." } ] } 
Provide the file for download.

And I was given the data available at this Github Gist! You can see that what has been created is a set of sample interactions where the response is always in the style of Snoop Dogg (and they are quite amusing to read as well). If you’re creating this data yourself, there are some great JSON validators/formatters that you can use to ensure that your training JSON is correctly formatted.

NOTE: You need AT LEAST 10 examples in your training file to fine-tune; with fewer, you’ll get an error when you submit your fine-tuning job.
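
For reference, each line of the generated file is one self-contained training example along these lines (an illustrative sample in the same spirit, not the exact Gist contents):

{ "messages": [ { "role": "system", "content": "You are an assistant that always answers in the style of Snoop Dogg." }, { "role": "user", "content": "What's the weather like today?" }, { "role": "assistant", "content": "Ayo, it's lookin' real sunny out there, nephew. Perfect day to kick back and chill, ya dig?" } ] }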

Uploading GPT-3.5 Fine Tuning Training Data

Ok so now we are ready to start putting some code into our index.js file. First things first let’s do some imports for the packages that we will need to use:

import OpenAI from "openai";
import fs from "fs";
import dotenv from "dotenv";
dotenv.config();
// With dotenv loaded, the SDK reads OPENAI_API_KEY from the environment
// automatically, so no arguments are needed here.
const openai = new OpenAI();

We run this first bit of code at the command line with "node index.js":

const upload = await openai.files.create({ 
  file: fs.createReadStream("style-and-tone.jsonl"), 
  purpose: "fine-tune", 
});
console.log(upload); // the response includes the new file's ID

Now we will comment out that section of code and check that the file has been uploaded. Run "node index.js" again; your file should now look like this:

import OpenAI from "openai";
import fs from "fs";
import dotenv from "dotenv";
dotenv.config();
const openai = new OpenAI();

// await openai.files.create({
//   file: fs.createReadStream("style-and-tone.jsonl"),
//   purpose: "fine-tune",
// });

const files = await openai.files.list();
console.log(files);

Perfect. Now at the console you should see output indicating that your training data has been uploaded. You’ll need to retrieve the file ID (it starts with "file-") from the terminal output, highlighted below.

GPT-3.5 Fine Tuning File Confirmation
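
For reference, the list response has roughly this shape (abridged; the ID shown is the one we reuse in the next step):

{
  object: 'list',
  data: [
    {
      object: 'file',
      id: 'file-hJXe81Sn2V7X4EA14K7srsXm',
      purpose: 'fine-tune',
      filename: 'style-and-tone.jsonl',
      status: 'uploaded'
    }
  ]
}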

Time to Fine-Tune!

Ok! So now let’s comment out that piece of code too, and we are going to run the following snippet to fine-tune our GPT-3.5 model. We will run "node index.js" on the following:

import OpenAI from "openai";
import fs from "fs";
import dotenv from "dotenv";
dotenv.config();

const openai = new OpenAI();
// NOTE: fineTunes targets the legacy /v1/fine-tunes endpoint, which is why
// the call below fails for gpt-3.5-turbo (see the error that follows).
const fineTune = await openai.fineTunes
  .create({
    training_file: "file-hJXe81Sn2V7X4EA14K7srsXm", // the file ID from the previous step
    model: "gpt-3.5-turbo-0613",
  })
  .catch((err) => {
    if (err instanceof OpenAI.APIError) {
      console.error(err); // log API errors without crashing
    } else {
      throw err; // rethrow anything unexpected
    }
  });

Unfortunately, it seems as though the SDK is still being updated, so although this code SHOULD work, you’ll end up with an error on the command line with the following message:

'Invalid base model: gpt-3.5-turbo-0613 (model must be one of ada, babbage, curie, davinci) or a fine-tuned model created by your organization: org-ebRiZ9NCNAPrrTfR1jKdmsZh'

The OpenAI SDK has simply not caught up yet, but that’s OK! We can just make a fetch call directly to the API using the following code. I know this is annoying, but I want to highlight the SDK error as it stood at the time of writing. Run "node index.js" on the code below:

import dotenv from "dotenv";
dotenv.config();

// The body of the fine-tuning request: our uploaded training file and the base model.
const requestData = {
  training_file: "file-hJXe81Sn2V7X4EA14K7srsXm",
  model: "gpt-3.5-turbo-0613",
};

const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
};

// Call the new fine-tuning endpoint directly, bypassing the SDK.
fetch("https://api.openai.com/v1/fine_tuning/jobs", {
  method: "POST",
  headers: headers,
  body: JSON.stringify(requestData),
})
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((err) => console.log(err));

And boom, that is it. We have now submitted our job for fine-tuning, and your command line output should be a JSON object describing the newly created job.
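
Roughly, that response had this shape at the time of writing (the job ID and timestamp below are illustrative, not real values):

{
  "object": "fine_tuning.job",
  "id": "ftjob-abc123",
  "model": "gpt-3.5-turbo-0613",
  "created_at": 1692835536,
  "status": "queued",
  "training_file": "file-hJXe81Sn2V7X4EA14K7srsXm"
}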

Now we have to wait for an email confirmation stating that our fine tuning job is complete. You will be given a model identifier that you can then use for your API calls in the future.

GPT-3.5-fine-tuning email confirmation
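
Once the job completes, calling your fine-tuned model works just like calling any other chat model; you simply pass the new identifier as the model name. A minimal sketch (the ft:... identifier below is a placeholder for the one in your confirmation email):

import OpenAI from "openai";
import dotenv from "dotenv";
dotenv.config();

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: "ft:gpt-3.5-turbo-0613:my-org::abc123", // placeholder: use your own model ID
  messages: [{ role: "user", content: "What's the weather like today?" }],
});
console.log(completion.choices[0].message.content);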

Testing the GPT-3.5 Fine-Tuned Model

So the cool thing here is that we can actually test out how our model is working by going into the OpenAI Playground. Like the other models, we can select it and then interact with it. Here is an example of interacting with my fine-tuned SnoopGPT model. You can see that I haven’t had to prompt it to respond like Snoop Dogg; it just does it!

GPT-3.5-fine-tuning-test

This is definitely a silly example, but you can see how our model is starting to emulate some of the Snoop lingo. This is with a mere ten training examples; more high-quality examples will generate even better responses for us.

If you’re interested in more detail on fine-tuning, check out the original post from OpenAI here. There is more detail in the docs about GPT-3.5 Fine Tuning from a technical perspective, as well as details on the costs.

All-Up GPT-3.5 Fine Tuning

The digital realm is teeming with tools, technologies, and potentialities. Among them, AI has always been a beacon of transformative power. However, the real beauty of advanced tools like LLMs and the GPT models doesn’t merely lie in their vast capabilities, but in the promise of molding them to our unique visions and needs. This is the essence of fine-tuning: taking the robustness of generalized AI and tailoring it with precision to resonate with specific voices, tones, and tasks.

The journey we embarked on in this tutorial, crafting a model to echo the unmistakable vibes of Snoop Dogg, is emblematic of this promise. It’s not just about technical achievement; it’s a testament to the creative agency we wield in the age of AI. Although a simple example, it shows that with the power of fine-tuning, developers aren’t just passive users but active sculptors, shaping AI’s vast potential to resonate with specific cultural icons, brand voices, or even individual quirks.

If you’re looking for assistance with customising models, BolderCloud can help. We love working with web and AI solutions. Check out our services here and enjoy!

Originally published at https://boldercloud.com.au on August 24, 2023.
