feiyun0112
How to Build Your GPT Chatbot with .NET

Creating a chatbot in .NET is an exciting process that combines a powerful backend framework with a flexible frontend interface. Gradio.Net provides an ideal platform for building and deploying interactive chatbots that hold real-time conversations with users. This article will guide you through creating a basic chatbot in .NET using Gradio.Net.

Gradio.Net

Gradio.Net is an open-source .NET library that gives developers a convenient way to share their models. It provides a simple, user-friendly web interface for sharing LLMs with others anytime, anywhere. Its unique selling point is that it does not require you to write any JavaScript, HTML, or CSS to build the web interface, while still leaving some flexibility to add frontend code for customization. This makes it ideal for developers with little frontend knowledge who want to share their models with team members or an audience.

To build a web application around an LLM, you need to be familiar with the basic layout components of Gradio.Net.

Blocks layout component

Gradio.Net provides Blocks layout components that give you the flexibility to place components anywhere on the screen, as well as event handlers for interactive user experiences.

Here is a simple example of Blocks:

using (var blocks = gr.Blocks())
{
    // Components created inside the using block are added to the page.
    var name = gr.Textbox(label: "Name");
    var output = gr.Textbox(label: "Output Box");
    var btn = gr.Button("Greet");

    // Wire the button's Click event: read the Name textbox and write the greeting to the Output Box.
    btn.Click(fn: async (input) => gr.Output($"Hello {Textbox.Payload(input.Data[0])}!"), inputs: [name], outputs: [output]);

    App.Launch(blocks);
}

Gradio.Net Blocks

  • A “using” statement is required to define a Blocks instance.
  • The components created inside the using block are added to the web page.
  • Components are rendered vertically, in the order they are defined.
  • Use App.Launch to launch the app.
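
By default components stack vertically, but Gradio.Net also lets you group them into rows and columns. The snippet below is a minimal sketch of a horizontal layout; it assumes Gradio.Net exposes a gr.Row() container analogous to the Row component in Gradio's Python API.

using (var blocks = gr.Blocks())
{
    // Assumption: gr.Row() opens a horizontal container, mirroring Gradio's Python API.
    using (gr.Row())
    {
        // These two textboxes are rendered side by side instead of stacked.
        var name = gr.Textbox(label: "Name");
        var output = gr.Textbox(label: "Output Box");
    }

    App.Launch(blocks);
}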

Azure OpenAI API

Before building the chat interface, we need to deploy an LLM; here we use the OpenAI service provided by Azure.

The Azure OpenAI service provides developers with a set of REST APIs through which you can easily access OpenAI’s cutting-edge language models, such as GPT-4, GPT-3.5-Turbo, and the embedding models. These APIs are developed jointly by Azure and OpenAI, ensuring a high degree of compatibility with OpenAI’s own services.

First, you need to register and log in to your account on the Azure official website.

Azure OpenAI

Then, search for “Azure OpenAI” in the Azure portal, open its service page, and click the “Create” button to start creating the service.

Follow the on-screen instructions to fill in the necessary information and click “Create” when you are finished; your Azure OpenAI service will then be created. Once the resource is ready, create a deployment for the model you want to use (for example, GPT-3.5-Turbo or GPT-4) and take note of the endpoint, deployment name, and API key.

For security and convenience, it is recommended that you store the endpoint, deployment name, and API key in environment variables, so that you do not have to enter them repeatedly in the code.

Environment variables
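
Once the variables are set, the application can read them back at startup. Below is a minimal sketch that also fails fast if any value is missing; the variable names match the ones used later in this article.

// Read the Azure OpenAI settings from environment variables and fail fast if any is missing.
string? endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string? deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_GPT_NAME");
string? apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");

if (string.IsNullOrEmpty(endpoint) || string.IsNullOrEmpty(deploymentName) || string.IsNullOrEmpty(apiKey))
{
    throw new InvalidOperationException(
        "Please set AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_GPT_NAME and AZURE_OPENAI_API_KEY.");
}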

Build a chatbot

First, you need to set up your .NET development environment. This usually involves installing Visual Studio or another IDE, as well as the .NET SDK. Once your development environment is ready, you can create a new ASP.NET Core project to lay the foundation for your chatbot.

Next, you need to install the Gradio.Net.AspNetCore library. This can be done through the NuGet package manager, or by running dotnet add package Gradio.Net.AspNetCore in the project directory. After installing Gradio.Net in your project, you can start building the core functionality of the chatbot.

Application front end

Gradio.Net has a pre-built chatbot component that can render a chat interface. We will also add a text box component that takes text input from the end user. This is all our front-end code:

using (var blocks = gr.Blocks())
{
    var chatbot = gr.Chatbot();
    var txt = gr.Textbox(placeholder:"Enter text and press enter");

    App.Launch(blocks);
}

Application backend

We have successfully built the front-end of the web application. Now, the remaining part is to make it operational. We need to define a function that returns a response:

static async IAsyncEnumerable<Output> GenerateResponse(Input input, Kernel kernel)
{
    // Extract the user's text and the existing conversation history from the inputs.
    var userPrompt = Textbox.Payload(input.Data[0]);
    var history = Chatbot.Payload(input.Data[1]);

    // Add a new message pair: the user's prompt and an (initially empty) AI reply.
    history.Add(new ChatbotMessagePair(new ChatMessage { TextMessage = userPrompt }, new ChatMessage { TextMessage = "" }));

    // Stream the model's answer and append each chunk to the AI reply as it arrives.
    await foreach (var responseMessage in kernel.InvokePromptStreamingAsync<string>(userPrompt))
    {
        if (!string.IsNullOrEmpty(responseMessage))
        {
            history.Last().AiMessage.TextMessage += responseMessage;
            yield return gr.Output("", history);
        }
        await Task.Delay(50);
    }
}

Here, userPrompt represents the user’s input, and history is the conversation history saved inside the Chatbot component.

Next, we need to handle the text box’s Submit event (fired when the user presses Enter) so that it triggers this function:

txt.Submit(streamingFn: (input) => GenerateResponse(input, kernel),
    inputs: new Gradio.Net.Component[] { txt, chatbot },
    outputs: new Gradio.Net.Component[] { txt, chatbot });

It is worth noting that we use the streamingFn parameter to handle the Submit event. This enables the component’s streaming output mode, producing the continuous typing effect familiar from ChatGPT.

Chat with LLM

In the GenerateResponse function, we used Semantic Kernel (SK for short). SK is an open-source software development kit that enables developers to quickly and easily incorporate state-of-the-art LLM technology into their applications.

We need to add a NuGet package reference to Microsoft.SemanticKernel in the project (for example, dotnet add package Microsoft.SemanticKernel) and create a Kernel object instance.

string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_GPT_NAME");
string apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: deploymentName,
        endpoint: endpoint,
        apiKey: apiKey)
    .Build();

SK provides the InvokePromptStreamingAsync method, a dedicated function that lets you pass in the input text (the “prompt” in LLM terms) and easily stream results back from the AI model:

kernel.InvokePromptStreamingAsync<string>(userPrompt)

When the user submits text, GenerateResponse receives the prompt and the chatbot’s history as input, and the loop appends each streamed chunk to the chatbot as it arrives, so the reply appears incrementally and improves the user experience.
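
One thing worth pointing out: in GenerateResponse, only the latest userPrompt is passed to the model, so the model does not actually see earlier turns even though they are displayed in the Chatbot component. If you want the model to take the conversation history into account, Semantic Kernel’s chat completion service can be used instead of InvokePromptStreamingAsync. The following is only a sketch; it assumes the same Kernel configured above and assumes ChatbotMessagePair exposes the user message through a HumanMessage property (mirroring the AiMessage property used earlier).

using Gradio.Net;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Sketch: a variant of GenerateResponse that feeds the whole conversation to the model.
static async IAsyncEnumerable<Output> GenerateResponseWithHistory(Input input, Kernel kernel)
{
    var userPrompt = Textbox.Payload(input.Data[0]);
    var history = Chatbot.Payload(input.Data[1]);

    // Rebuild the conversation as a Semantic Kernel ChatHistory.
    var chatHistory = new ChatHistory();
    foreach (var pair in history)
    {
        // Assumption: ChatbotMessagePair exposes the user side as HumanMessage.
        chatHistory.AddUserMessage(pair.HumanMessage.TextMessage);
        if (!string.IsNullOrEmpty(pair.AiMessage.TextMessage))
        {
            chatHistory.AddAssistantMessage(pair.AiMessage.TextMessage);
        }
    }
    chatHistory.AddUserMessage(userPrompt);

    history.Add(new ChatbotMessagePair(new ChatMessage { TextMessage = userPrompt }, new ChatMessage { TextMessage = "" }));

    // Stream the answer from the chat completion service registered on the kernel.
    var chatService = kernel.GetRequiredService<IChatCompletionService>();
    await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(chatHistory))
    {
        if (!string.IsNullOrEmpty(chunk.Content))
        {
            history.Last().AiMessage.TextMessage += chunk.Content;
            yield return gr.Output("", history);
        }
    }
}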

Chatbot

Conclusion

In this article, we have explored in detail how to build a fully functional chatbot using .NET technology.

Through the components of Gradio.Net (https://github.com/feiyun0112/Gradio.Net), we can quickly build a chat interface, and through the streaming output mode, we can achieve a dynamic interactive effect similar to ChatGPT.

All source code

using Gradio.Net;
using Microsoft.SemanticKernel;

string endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT");
string deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_GPT_NAME");
string apiKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY");

var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: deploymentName,
        endpoint: endpoint,
        apiKey: apiKey)
    .Build();

App.Launch(CreateBlocks(kernel));

static Blocks CreateBlocks(Kernel kernel)
{
    using (var blocks = gr.Blocks())
    {
        var chatbot = gr.Chatbot();

        var txt = gr.Textbox(showLabel: false,
                placeholder: "Enter text and press enter"
            );

        txt.Submit(streamingFn: (input) => GenerateResponse(input, kernel),
            inputs: new Gradio.Net.Component[] { txt, chatbot },
            outputs: new Gradio.Net.Component[] { txt, chatbot });
        return blocks;
    }
}

static async IAsyncEnumerable<Output> GenerateResponse(Input input, Kernel kernel)
{
    var userPrompt = Textbox.Payload(input.Data[0]);
    var history = Chatbot.Payload(input.Data[1]);

    history.Add(new ChatbotMessagePair(new ChatMessage { TextMessage = userPrompt }, new ChatMessage { TextMessage = "" }));

    await foreach (var responseMessage in kernel.InvokePromptStreamingAsync<string>(userPrompt))
    {
        if (!string.IsNullOrEmpty(responseMessage))
        {
            history.Last().AiMessage.TextMessage += responseMessage;

            yield return gr.Output("", history);
        }
        await Task.Delay(50);
    }
}
