
🦙 Harnessing Local AI: Unleashing the Power of .NET Smart Components and Llama2

Hi!

.NET Smart Components are an amazing example of how to use AI to enhance the user experience in something as popular as a combobox.

.NET Smart Components also support the use of local LLMs, so in this post I’ll show how to configure these components to use a local Llama 2 inference server. The following image shows the Smart TextArea doing completions with a local server; on the right, the local server journal shows the HTTP requests reaching the server.

Live sample of Smart Components with Ollama

Introduction to .NET Smart Components

.NET Smart Components are a groundbreaking addition to the .NET ecosystem, offering AI-powered UI controls that seamlessly integrate into your applications. These components are designed to enhance user productivity by providing intelligent features such as Smart Paste, Smart TextArea, and Smart ComboBox.

Smart Paste simplifies data entry by automatically filling out forms using data from the user’s clipboard. Smart TextArea enhances the traditional textarea by providing autocomplete capabilities for sentences, URLs, and more. Lastly, Smart ComboBox improves the traditional combo box by offering suggestions based on semantic matching.

These components are currently available for Blazor, MVC, and Razor Pages with .NET 6 and later, and they represent an experiment in integrating AI directly into user interfaces.
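To make that concrete, here is a rough sketch of what a Smart TextArea could look like in a Blazor page. It is loosely based on the samples in the Smart Components repository, and the namespace and parameter names (UserRole, UserPhrases) are my approximation of that API rather than a definitive reference, so check the official samples before copying it.

@* Hypothetical Blazor page sketch, approximating the Smart Components samples *@
@page "/support-reply"
@using SmartComponents

<h3>Reply to a customer</h3>

@* UserRole and UserPhrases give the model context so completions match your domain *@
<SmartTextArea @bind-Value="replyText"
               UserRole="Customer support agent replying to customer enquiries"
               UserPhrases="@(new[] { "Thank you for contacting us.", "We are sorry for the inconvenience." })" />

@code {
    private string? replyText;
}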

The Importance of Local LLMs like Llama2

Local Large Language Models (LLMs) like Llama2 offer significant advantages, particularly in terms of data privacy and security. For example, running LLMs locally allows organizations to process sensitive data without exposing it to external servers, ensuring compliance with data protection regulations.

Llama2 is an open-source model that provides robust performance across various tasks, including common-sense reasoning, mathematical abilities, and general knowledge. It supports a context length of 4096 tokens, which is double that of its predecessor, Llama1. This makes Llama2 an ideal choice for organizations looking to leverage AI while maintaining control over their data and infrastructure.

How to run .NET Smart Components with a Local Ollama Inference Server

In previous posts, I shared how to run a local Ollama Inference Server in Ubuntu (blog). Luckily, you can now do this on Windows as well.
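If you want to confirm the local server is responding before plugging it into the Smart Components, you can call Ollama’s /api/generate endpoint directly. Here is a minimal C# console sketch, assuming Ollama is listening on its default port 11434 and the llama2 model has already been pulled with "ollama pull llama2":

// Quick check that a local Ollama server responds before wiring up the Smart Components.
// Assumes Ollama is on its default port 11434 and "ollama pull llama2" was already run.
using System;
using System.Net.Http;
using System.Text;

var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

// Non-streaming request against Ollama's /api/generate endpoint
var payload = "{\"model\":\"llama2\",\"prompt\":\"Say hello in five words.\",\"stream\":false}";
var response = await http.PostAsync("/api/generate",
    new StringContent(payload, Encoding.UTF8, "application/json"));

response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());

A successful call returns a JSON body with the generated text, and the request will also show up in the local server journal shown in the image above.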

And once you clone the main Smart Components repository, you only need a small change to run the samples locally.

  • Open the RepoSharedConfig.json file

Sample of editing the JSON config file in Visual Studio Code

  • Add the following configuration to use the local Ollama model:

{
  "SmartComponents": {
    // local demo with Ollama self-hosted
    "SelfHosted": true,
    "DeploymentName": "llama2",
    "Endpoint": "http://localhost:11434"
  }
}


And that’s it! Now you can run either the Blazor or the MVC demo, and it will use the local Ollama server to run the completions!

And hey, let’s keep an eye on the Smart Components; they are going to provide an amazing new user experience powered by AI!

Happy coding!

Greetings

El Bruno

More posts in my blog ElBruno.com.

More info in https://beacons.ai/elbruno
