Luis Beltran

Fine-tuning an Open AI model with Azure and C#

This publication is part of the C# Advent Calendar 2023, an initiative led by Matthew Groves. Check this link for more interesting articles about C# created by the community.

In preparation for my upcoming participation at the Global AI Conference 2023 with the topic Fine-tuning an Azure Open AI model: Lessons learned, let's see how to actually customize a model with your own data using Azure Open AI and C#.

First of all, a definition

I like the definition presented here. Fine-tuning is:

the process that takes a model that has already been trained for one given task and then tunes or tweaks the model to make it perform a second similar task.

It is a way of applying transfer learning, a technique that uses knowledge which was gained from solving one problem and applies it to a new but related problem.

Azure Open AI and Fine-tuning

Azure Open AI is a cloud-based platform that enables everyone to build and deploy AI models quickly and easily. One of the capabilities of this service is fine-tuning pre-trained models with your own datasets. Some advantages include:

  • Better results than prompt engineering alone.
  • Shorter prompts, so fewer tokens are processed on each API call.
  • Lower costs and improved request latency.

What do you need?

  • An Azure subscription with access to Azure Open AI services.
  • An Azure Open AI resource created in one of the supported regions for fine-tuning, with a supported deployed model.
  • The Cognitive Services OpenAI Contributor role.
  • The most important element to consider: Do you really need to fine-tune a model? I'll discuss it during my talk next week; for the moment, you can read about it here.

Let's fine-tune a model using C#.


  1. Create an Azure Open AI resource.
  2. Prepare and upload your data.
  3. Train the model.
  4. Wait until the model is fine-tuned.
  5. Deploy your custom model for use.
  6. Use it.

Let's do it!

Step 1. Create an Azure Open AI resource

Use the wizard to create an Azure Open AI resource. You only need to be careful about the region: currently, only North Central US and Sweden Central support the fine-tuning capability, so choose one of them.

Azure Open AI resource in North Central US region

Once the resource is created, get the key, region, and endpoint information that will be included in the requests:

Key, region, and endpoint of Azure Open AI model

In your code, set the BaseAddress of an HttpClient instance to the Azure Open AI resource's endpoint and add an api-key Header to the client. For example:

HttpClient client = new();
client.BaseAddress = new ("your-endpoint");
client.DefaultRequestHeaders.Add("api-key", "your-key");

Step 2. Prepare and upload your data.

You must prepare two datasets: one for training and a second one for validation. Each contains sample inputs and their expected outputs in JSONL (JSON Lines) format. However, depending on the base model that you deployed, each element requires specific properties:

  • If you are fine-tuning recent models, such as GPT 3.5 Turbo, here's an example of the file format.
{"messages": [{"role": "system", "content": "You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided."}, {"role": "user", "content": "Title: No-Bake Nut Cookies\n\nIngredients: [\"1 c. firmly packed brown sugar\", \"1/2 c. evaporated milk\", \"1/2 tsp. vanilla\", \"1/2 c. broken nuts (pecans)\", \"2 Tbsp. butter or margarine\", \"3 1/2 c. bite size shredded rice biscuits\"]\n\nGeneric ingredients: "}, {"role": "assistant", "content": "[\"brown sugar\", \"milk\", \"vanilla\", \"nuts\", \"butter\", \"bite size shredded rice biscuits\"]"}]}
{"messages": [{"role": "system", "content": "You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided."}, {"role": "user", "content": "Title: Jewell Ball'S Chicken\n\nIngredients: [\"1 small jar chipped beef, cut up\", \"4 boned chicken breasts\", \"1 can cream of mushroom soup\", \"1 carton sour cream\"]\n\nGeneric ingredients: "}, {"role": "assistant", "content": "[\"beef\", \"chicken breasts\", \"cream of mushroom soup\", \"sour cream\"]"}]}

Please notice that for each item (line) you provide a messages element containing an array of role-content pairs for the system (the behavior), the user (the input), and the assistant (the output).

  • On the other hand, if you are fine-tuning older models (such as Babbage or Davinci), here's a sample file format that works with both of them:
{"prompt": "You guys are some of the best fans in the NHL", "completion": "hockey"}
{"prompt": "The Red Sox and the Yankees play tonight!", "completion": "baseball"}
{"prompt": "Pelé was one of the greatest", "completion": "soccer"}

Notice that each element contains a prompt-completion pair, representing the input and the desired output that the fine-tuned model should generate.

More information about JSON Lines can be found here.

In order to generate a JSONL file, there are several approaches:

  • Manual approach: Write an application that creates a text file (with the .jsonl extension), loops over your data collection, and serializes each item into a JSON string (remember that specific properties are required). Write each JSON string to a new line of the file.

  • Library approach: Depending on the programming language you are using, there probably exist libraries that can export your data in JSONL format, for example jsonlines for Python.

  • Website approach: There are some websites which can convert your Excel, SQL, CSV (and others) data into JSON Lines format, for example Table Convert or Code Beautify.
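As an illustration of the manual approach, here is a minimal C# sketch (the file name and recipe data are placeholders) that serializes samples in the chat format shown above, one JSON document per line, encoded as UTF-8 with a byte-order mark:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.Json;

// Illustrative sample following the chat format required by recent models.
var samples = new[]
{
    new
    {
        messages = new[]
        {
            new { role = "system", content = "You are a helpful recipe assistant." },
            new { role = "user", content = "Title: Pancakes\n\nIngredients: [\"1 c. flour\"]\n\nGeneric ingredients: " },
            new { role = "assistant", content = "[\"flour\"]" }
        }
    }
};

// One JSON document per line (JSONL); UTF-8 with a byte-order mark.
var lines = samples.Select(s => JsonSerializer.Serialize(s));
File.WriteAllLines("recipe_training.jsonl", lines, new UTF8Encoding(encoderShouldEmitUTF8Identifier: true));
```

In a real project you would loop over your own data collection instead of the in-line array.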

Now, you need to provide a JSONL file, which serves as the training dataset. You can either add a local file in your project or use the URL of a public online resource (such as an Azure blob or a web location).

For this example, I have chosen two local JSONL files which contain examples of a helpful virtual assistant that extracts generic ingredients from a provided recipe:

JSONL local files

This is the code of a function that you can use to upload a file into Azure Open AI:

async Task<string> UploadFile(HttpClient client, string folder, string dataset, string purpose)
{
    var file = Path.Combine(folder, dataset);
    using var fs = File.OpenRead(file);
    StreamContent fileContent = new(fs);
    fileContent.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
    {
        Name = "file",
        FileName = dataset
    };

    using MultipartFormDataContent formData = new();
    formData.Add(new StringContent(purpose), "purpose");
    formData.Add(fileContent, "file", dataset);

    var response = await client.PostAsync("openai/files?api-version=2023-10-01-preview", formData);
    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadFromJsonAsync<FileUploadResponse>();
        return data?.id ?? string.Empty;
    }

    return string.Empty;
}

Then, you can call the above method twice to upload both the training and the validation datasets:

var filesFolder = "Files";
var trainingDataset = "recipe_training.jsonl";
var validationDataset = "recipe_validation.jsonl";
var purpose = "fine-tune";

var line = new String('-', 20);
Console.WriteLine("***** UPLOADING FILES *****");
var trainingDsId = await UploadFile(client, filesFolder, trainingDataset, purpose);
Console.WriteLine("Training dataset: " + trainingDsId);

var validationDsId = await UploadFile(client, filesFolder, validationDataset, purpose);
Console.WriteLine("Validation dataset: " + validationDsId);

await Task.Delay(10000);

This is the corresponding output:

Uploading datasets to Azure Open AI

By the way, here are some characteristics of JSONL:

  • Each line is a valid JSON item
  • Each line is separated by a \n character
  • The file is encoded using UTF-8

Moreover, for Open AI usage, the file must include a byte-order mark (BOM).
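Before uploading, it can be worth sanity-checking a dataset against these rules. A quick sketch (the sample file name and contents are illustrative) that verifies every line parses as standalone JSON:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Text.Json;

// A small sample file in the prompt-completion format shown earlier.
File.WriteAllLines("sample.jsonl", new[]
{
    "{\"prompt\": \"Pelé was one of the greatest\", \"completion\": \"soccer\"}",
    "{\"prompt\": \"The Red Sox and the Yankees play tonight!\", \"completion\": \"baseball\"}"
});

// Every line must be a valid standalone JSON document.
var allValid = File.ReadLines("sample.jsonl").All(line =>
{
    try { JsonDocument.Parse(line); return true; }
    catch (JsonException) { return false; }
});
Console.WriteLine(allValid ? "Valid JSONL" : "Invalid line found");
```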

Step 3. Train the model

In order to train a custom model, you need to submit a fine-tuning job. The following code sends a request to the Azure Open AI service:

async Task<string> SubmitTrainingJob(HttpClient client, string trainingFileId, string validationFileId)
{
    TrainingRequestModel trainingRequestModel = new()
    {
        model = "gpt-35-turbo-0613",
        training_file = trainingFileId,
        validation_file = validationFileId
    };

    var requestBody = JsonSerializer.Serialize(trainingRequestModel);
    StringContent content = new(requestBody, Encoding.UTF8, "application/json");

    var response = await client.PostAsync("openai/fine_tuning/jobs?api-version=2023-10-01-preview", content);

    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadFromJsonAsync<TrainingResponseModel>();
        return data?.id ?? string.Empty;
    }

    return string.Empty;
}

However, this task will take some time. You can check the status of the job with the following code:

async Task<TrainingResponseModel> CheckTrainingJobStatus(HttpClient client, string trainingJobId)
{
    var response = await client.GetAsync($"openai/fine_tuning/jobs/{trainingJobId}?api-version=2023-10-01-preview");

    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadFromJsonAsync<TrainingResponseModel>();
        return data;
    }

    return null;
}

Then, you can call both methods to submit a fine-tuning training job and poll the training job status every 5 minutes until it is complete:

Console.WriteLine("***** TRAINING CUSTOM MODEL *****");
var trainingJobId = await SubmitTrainingJob(client, trainingDsId, validationDsId);
Console.WriteLine("Training Job Id: " + trainingJobId);

string? fineTunedModelName;
var status = string.Empty;

do
{
    var trainingStatus = await CheckTrainingJobStatus(client, trainingJobId);
    Console.WriteLine(DateTime.Now.ToShortTimeString() + ". Training Job Status: " + trainingStatus.status);
    fineTunedModelName = trainingStatus.fine_tuned_model;
    status = trainingStatus.status;
    await Task.Delay(5 * 60 * 1000);
} while (status != "succeeded");

Console.WriteLine("Fine-tuned model name: " + fineTunedModelName);

Here is a sample output:

Fine-tuning training job

Step 4. Wait until the model is fine-tuned.

Training the model will take some time, depending on the amount of data provided, the number of epochs, the base model, and other parameters selected for the task. Furthermore, since your job enters a queue, the server might be handling other training tasks, which can delay the process.

Once you see that the Status is succeeded, it means that your custom, fine-tuned model has been created! Well done!

Training Job complete

However, an extra step is needed before you can use it. Notice, by the way, that we read the fine_tuned_model property each time we check the training job status. Why? Because once the job is complete, it contains the custom model name, a unique value that identifies it among the other elements in our resource. We will need it in the next step.

Step 5. Deploy your custom model for use.

The fine-tuned model must be deployed before it can be used. This task involves a separate authorization, a different API path, and a different API version. Moreover, you need some data from your Azure resource:

  • Subscription ID
  • Resource Group
  • Resource Name

You can get the above information from the Overview panel of the Azure Open AI resource created at the beginning:

Azure resource information

Additionally, you need an authorization token from Azure. For testing purposes, we can launch the Cloud Shell from the Azure portal and run az account get-access-token.

Getting an authorization token from Azure

Recommendation: Get the token later, because it expires after one hour. Fine-tuning the model might take more than one hour to complete. It is better to get the token once you actually need it: when the model has completed its training.
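For reference, the `--query` and `--output` flags of the az CLI let you print only the token string, ready to paste (this requires an authenticated Azure session, e.g. the Cloud Shell):

```shell
# Print only the access token value, without the surrounding JSON
az account get-access-token --query accessToken --output tsv
```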

Let's create a function that sends a deployment model request to Azure. Please notice that here we send a PUT request even though the documentation mentions POST. I went to the source to solve this:

async Task<string> DeployModel(HttpClient client, string modelName, string deploymentName, string token, string subscriptionId, string resourceGroup, string resourceName)
{
    var requestUrl = $"subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}/deployments/{deploymentName}?api-version=2023-10-01-preview";
    var deploymentRequestModel = new DeploymentRequestModel()
    {
        sku = new(),
        properties = new() { model = new() { name = modelName } }
    };

    var requestBody = JsonSerializer.Serialize(deploymentRequestModel);
    StringContent content = new(requestBody, Encoding.UTF8, "application/json");

    var response = await client.PutAsync(requestUrl, content);

    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadFromJsonAsync<DeploymentResponseModel>();
        return data?.id ?? string.Empty;
    }

    return string.Empty;
}

The task takes some time to complete, so you can track the status with this code:

async Task<string> CheckDeploymentJobStatus(HttpClient client, string id)
{
    var response = await client.GetAsync($"{id}?api-version=2023-10-01-preview");

    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadFromJsonAsync<DeploymentJobResponseModel>();
        // The deployment status (e.g. "Succeeded") is reported in properties.provisioningState
        return data?.properties?.provisioningState ?? string.Empty;
    }

    return string.Empty;
}

Now, let's ask the user for a token before calling both methods. Once all parameters are set, the deployment job can be submitted and tracked.

var deploymentName = "ingredients_extractor";
string subscriptionId = "your-azure-subscription";
string resourceGroup = "your-resource-group";
string resourceName = "your-resource-name";
Console.WriteLine("***** ENTER THE TOKEN *****");
string token = Console.ReadLine();

HttpClient clientManagement = new();
clientManagement.BaseAddress = new("https://management.azure.com/");
clientManagement.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

Console.WriteLine("***** DEPLOYING CUSTOM MODEL *****");
var deploymentJobId = await DeployModel(clientManagement, fineTunedModelName, deploymentName, token, subscriptionId, resourceGroup, resourceName);
Console.WriteLine("Deployment ID: " + deploymentJobId);

var deploymentStatus = string.Empty;

do
{
    deploymentStatus = await CheckDeploymentJobStatus(clientManagement, deploymentJobId);
    Console.WriteLine(DateTime.Now.ToShortTimeString() + ". Deployment Job Status: " + deploymentStatus);
    await Task.Delay(5 * 60 * 1000);
} while (deploymentStatus != "Succeeded");

The generated output is displayed below. When you test the application, the moment it asks you for a token is the best time to go to the Azure CLI to grab an auth token.

Entering the token from Azure

Deploying a fine-tuned model

When the job finishes (Status = Succeeded), you are ready to use your custom model.

Step 6. Use it.

You can use the deployed fine-tuned model for inference anywhere: In an application that you develop, in the Playground, as part of an API request, etc. For example, create the following method:

async Task<string> GetChatCompletion(HttpClient client, string deploymentName, string systemMessage, string userInput)
{
    ChatCompletionRequest chatCompletion = new()
    {
        messages = new()
        {
            new() { role = "system", content = systemMessage },
            new() { role = "user", content = userInput }
        }
    };

    var requestBody = JsonSerializer.Serialize(chatCompletion);
    StringContent content = new(requestBody, Encoding.UTF8, "application/json");

    var response = await client.PostAsync($"openai/deployments/{deploymentName}/chat/completions?api-version=2023-10-01-preview", content);

    if (response.IsSuccessStatusCode)
    {
        var data = await response.Content.ReadFromJsonAsync<ChatCompletionResponse>();
        return data.choices.First().message.content;
    }

    return string.Empty;
}

Then, call it with the following arguments:

Console.WriteLine("***** USING CUSTOM MODEL *****");
var systemMessage = "You are a helpful recipe assistant. You are to extract the generic ingredients from each of the recipes provided";
var userMessage = "Title: Pancakes\n\nIngredients: [\"1 c. flour\", \"1 tsp. soda\", \"1 tsp. salt\", \"1 Tbsp. sugar\", \"1 egg\", \"3 Tbsp. margarine, melted\", \"1 c. buttermilk\"]\n\nGeneric ingredients: ";
Console.WriteLine("User Message: " + userMessage);

var inference = await GetChatCompletion(client, deploymentName, systemMessage, userMessage);
Console.WriteLine("AI Message: " + inference);

Here is the result:

Using a fine-tuned model

The source code is available at my GitHub repository. You might have noticed that in the code I used some models that I did not define in this post, such as FileUploadResponse, ChatCompletionRequest, or Messages, among others. You can find their definitions in the Models folder of the source code.

Application models
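To give a rough idea of what those models look like, here is a hypothetical sketch of a few of them, with property names inferred from the JSON fields used throughout this post; the authoritative definitions live in the repository's Models folder:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Demo: a request built from these shapes serializes to the JSON the API expects.
var request = new ChatCompletionRequest
{
    messages = new List<Message>
    {
        new Message { role = "system", content = "You are a helpful recipe assistant." },
        new Message { role = "user", content = "Title: Pancakes..." }
    }
};
Console.WriteLine(JsonSerializer.Serialize(request));

// Hypothetical shapes inferred from the JSON used in this post.
public class Message
{
    public string role { get; set; }
    public string content { get; set; }
}

public class ChatCompletionRequest
{
    public List<Message> messages { get; set; }
}

public class FileUploadResponse
{
    public string id { get; set; }
}

public class TrainingResponseModel
{
    public string id { get; set; }
    public string status { get; set; }
    public string fine_tuned_model { get; set; }
}
```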

As you can see, the process for fine-tuning an Open AI model using C# is quite straightforward (although it requires a lot of code :) ) and it offers several benefits. However, you should also consider whether this is the best solution for your needs. Join my session at the Global AI Conference later this month to learn more about it!

Fine-tuning an Azure Open AI model, lessons learned

Well, this was a long post but hopefully, it was also useful for you. Remember to follow the rest of the interesting publications of the C# Advent Calendar 2023. You can also follow the conversation on Twitter with the hashtag #csadvent.

Thank you for reading. Until next time!


Top comments (2)

Junilo Pagobo

I appreciate that this article is about training a base model, e.g. "gpt-35-turbo". What if on top of the base model, we have uploaded our own data, how is the trained model going to affect the whole thing? Perhaps, the more specific question is, when we send a request, what does it use, own data? own data + fine-tuned model?

Gopi Krishna

Is there a way to incrementally train the model, or remove training data?