adriens

πŸ†“ Local & Open Source AI: a kind ollama & LlamaIndex intro

❔ About

Sometimes you may need a convenient yet powerful way to run many LLMs locally with:

  • Only a CPU (i5-class, for example)
  • Little RAM (e.g. <= 8 GB)
  • The ability to plug in third-party frameworks (LangChain, LlamaIndex) so you can build complex projects
  • Ease of use (a few lines of code, powerful results)

πŸ‘‰ That's exactly what this post is about.

🎯 What you'll learn

In this short demo, you'll see how to:

  • Run on Kaggle (CPU)
  • Use ollama to run open source models
  • Play with a first LlamaIndex example (see the sketch right below)
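
To give a concrete taste of the "few lines of code" promise, here is a minimal sketch of that first example. It assumes ollama is installed and serving locally (`ollama serve`), that a small model such as `mistral` has been pulled (`ollama pull mistral`), and that the `llama-index-llms-ollama` package is installed; the exact import path may differ depending on your llama-index version.

```python
# Minimal sketch: query a local, CPU-only model through LlamaIndex.
# Prerequisites (shell): `ollama serve` running, `ollama pull mistral` done.
from llama_index.llms.ollama import Ollama

# A generous timeout, because CPU-only inference is slow
llm = Ollama(model="mistral", request_timeout=300.0)

response = llm.complete("Explain what ollama is, in one sentence.")
print(response)
```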

πŸ’‘ Benefits & opportunities

Get rid of the weekly GPU usage limits of the free plan:

(screenshot: Kaggle's weekly GPU quota)

With this CPU approach, you can schedule AI-based workflows for free (as long as each run stays within the 12-hour session limit).

🍿 Demo

Enough teasing, let's jump into the demo:

πŸ“œ Notebook

(embedded Kaggle notebook)
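
If you can't open the notebook right away, here is a sketch of what a first LlamaIndex example on this setup might look like: indexing a local `data/` folder and querying it, with generation served by ollama and a small local embedding model so everything stays on CPU. The package names (`llama-index-embeddings-huggingface`) and the `BAAI/bge-small-en-v1.5` embedding model are illustrative choices, not necessarily the ones used in the notebook.

```python
# Sketch of a tiny local RAG pipeline: ollama for generation,
# a small HuggingFace model for embeddings, everything on CPU.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="mistral", request_timeout=300.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Load whatever documents sit in ./data and build an in-memory index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
print(query_engine.query("What are these documents about?"))
```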

πŸ”­ Further, stronger

To go further (48 GB of RAM and a GPU required), see a full example around Mixtral: Running Mixtral 8x7 locally with LlamaIndex.
