
Bradston Henry

Unlock the Power of Meta Llama LLM: Easy Guide to Hosting in Your Local Dev Environment

In my previous post, I described how AI tools have revolutionized my development workflow. Toward the end of that blog, I shared step-by-step instructions for hosting the Meta Llama LLM on your local machine.

Because it was alllll the way at the end of the blog, I figured I'd publish those steps separately, since many people have been finding value in them.

Check it out!

How do I Host my OWN Local AI Chat Agent?

So let me walk you through the few simple steps I used to start using Llama (and other available Open-Source AI Models):

Step 1: Download and Install Ollama on Your Local Machine

Navigate to the official Ollama site and download Ollama for your Windows, Mac, or Linux machine.

Ollama is a lightweight tool that allows you to run Large Language Models on your local machine (e.g., Llama 3.2, Mistral, and Gemma 2). Once it's installed, you can run and customize models locally.


Step 2: Run the Ollama Setup Wizard

Once downloaded, open the Ollama Setup executable and navigate through the wizard to install the Ollama tools on your machine.

As you might expect, Ollama's Setup Wizard will quickly add all the files necessary to run the Ollama tool.


Step 3: Verify Ollama Installation

If Ollama was installed correctly, you should now be able to access it from your standard command prompt.

Sometimes the Ollama installer will automatically open a command prompt window; if not, you will need to open one yourself.

On a Windows machine, search for "cmd" and you should be able to open Command Prompt.

Once open, verify the installation by typing the line below and pressing Enter/Return:

ollama

You should see an output like this:

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  stop        Stop a running model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information

Use "ollama [command] --help" for more information about a command.

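One more optional check: Ollama also runs a local server in the background, listening on port 11434 by default. If you'd like to confirm from code that the server is reachable, here's a minimal Python sketch (it assumes the default address; if you've changed Ollama's port, adjust accordingly):

# check_ollama.py - sanity-check that the local Ollama server is reachable.
# Assumes Ollama's default address (http://localhost:11434).
from urllib.request import urlopen

try:
    with urlopen("http://localhost:11434", timeout=5) as resp:
        # The root endpoint replies with a short status string,
        # typically "Ollama is running".
        print(resp.read().decode())
except OSError as err:
    print(f"Could not reach Ollama: {err}. Is 'ollama serve' running?")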

Step 4: Install and Run a Large Language Model

With Ollama installed, we are only one step away from having our own local LLM on our machine.

To get started, let's install Meta's Llama3.2 model.

In our command prompt window, type:

ollama run llama3.2

This will do two things:

  1. Install Llama3.2 if it is not currently on our machine
  2. Run the Llama3.2 Model

Press return/enter and you should see something like this (if Llama3.2 is not installed):

pulling manifest
pulling dde5aa3fc5ff...  26% ▕████████████                                            ▏ 532 MB/2.0 GB   32 MB/s     48s


Ollama will now download and install the Llama3.2 model on your machine and, once complete, run it for you to interact with.

In your command line window, you should see something like this:

pulling manifest
pulling dde5aa3fc5ff... 100% ▕████████████████████████████████████████████████████████▏ 2.0 GB
pulling 966de95ca8a6... 100% ▕████████████████████████████████████████████████████████▏ 1.4 KB
pulling fcc5a6bec9da... 100% ▕████████████████████████████████████████████████████████▏ 7.7 KB
pulling a70ff7e570d9... 100% ▕████████████████████████████████████████████████████████▏ 6.0 KB
pulling 56bb8bd477a5... 100% ▕████████████████████████████████████████████████████████▏   96 B
pulling 34bb5ab01051... 100% ▕████████████████████████████████████████████████████████▏  561 B
verifying sha256 digest
writing manifest
success
>>> Send a message (/? for help)


And just like that, you now have an LLM on your local machine!!

Also, if you would like to install any other model available through Ollama, you can use the same command:

ollama run MODEL_NAME
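For example, ollama run mistral would download and run the Mistral model. You can browse everything that's available in the model library on the Ollama site.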

Step 5: Test out your Large Language Model

Now that Llama3.2 is on our machine, let's test it out.

Let's ask it a simple question:

Hey Llama! Could you tell me a little bit about yourself?

You should then get a response similar to this:

Hey Llama! Could you tell me a little bit about yourself?
I'm happy to chat with you. I'm an artificial intelligence model known as Llama, which stands for "Large Language
Model Meta AI." My primary function is to process and generate human-like text based on the input I receive.

I was trained on a massive dataset of text from various sources, including books, articles, and online
conversations. This training allows me to understand and respond to a wide range of questions, topics, and styles.

Some key features of my abilities include:

.....

See! Wasn't that pretty easy?
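Bonus: if you'd rather call the model from your own code than from the terminal, Ollama also publishes an official Python client (pip install ollama). Here's a minimal sketch, assuming the llama3.2 model is already pulled and the Ollama server is running locally:

# ask_llama.py - query the local Llama3.2 model via the official
# Ollama Python client (pip install ollama).
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[
        {"role": "user", "content": "Could you tell me a little bit about yourself?"},
    ],
)

# The reply text lives in the response's message content.
print(response["message"]["content"])

This talks to the same local server the command-line chat uses, so nothing ever leaves your machine.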

Wrap Up

And just like that, your Llama LLM should be up and running on your local machine.

If you are interested in checking out Ollama's docs, you can find them here.

If you are interested in a GUI that makes Ollama even easier to use, check out Open Web UI. I hope to have a blog up in the near future with a step-by-step on how to set that up as well.

Until then, enjoy your own personal Llama assistant!


Photo Credits (Order of Appearance):

Cover photo generated with Google Gemini
Prompt:
A llama with a confident smirk on its face, wearing a blue cape emblazoned with an infinity symbol, flies in against a sunset, ready to save the day in this cartoon-style illustration.


Follow me on my Socials:
https://linktr.ee/bradstondev

Top comments (10)

Anton Maryukhnenko

You forgot to mention hardware requirements for different models.

leob

That's also what I'm curious about - maybe if I want to do this I first need a big hardware upgrade ... I think until then I'll have to pass on this.

Bradston Henry

To @anton_maryukhnenko_1ef094's point, I need to update this blog to mention the general hardware requirements for Llama, at least. I think that would be helpful to others.

@leob I took a chance on my old dying comp with Ollama and Llama3.2, and it ended up working. You should have heard the fan though, haha. It just so happened I NEEDED an upgrade, so my new/current comp is more capable and has been faring pretty well. If I do end up running into any hiccups, I will definitely try and share.

leob

Thanks ... I'd probably end up wanting a SEPARATE "box" (hardware) dedicated to it and optimized for it (with a GPU and all that), so as not to "burden" my main workstation - then do the "queries" over a fast local network!
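A quick note on that setup: the Ollama Python client can point at another machine, so the "queries over the network" part is straightforward. A minimal sketch, assuming a hypothetical dedicated box at 192.168.1.50 whose Ollama server has been configured to listen on the network (e.g., via the OLLAMA_HOST environment variable on that machine):

# remote_llama.py - query an Ollama server running on another machine
# on the local network. The IP address below is hypothetical.
from ollama import Client

client = Client(host="http://192.168.1.50:11434")

response = client.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello from across the LAN!"}],
)
print(response["message"]["content"])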

cesar hernan ruscica

Hi, excellent post! I wonder why you don't try another, friendlier interface like LLM Studio.
Thanks!

Bradston Henry

I personally have upgraded to using Open Web UI. I'm actually in the process of writing up a blog on the steps to get that working on your local machine. :-)

It is SOOOO much better than using the command-line interface, but the cmd interface was a good start for me when I was first experimenting with local LLMs.

Haven't tried LLM Studio, but I'm going to look into it. How do you like LLM Studio?

leob

Great post ... just curious: why would I want this, instead of using ChatGPT or other cloud-hosted/online AI tools?

cesar hernan ruscica

No censorship, no limits on questions, and the most important thing: privacy!

leob

Makes sense - it's just that the hardware requirements might be a bit of a concern ...

Bradston Henry

Def privacy, but I also use it when developing applications: the Ollama Python library lets my local apps directly access the LLMs I want. Knowing I have no limits on how many requests I can make is very nice.