DEV Community

Roshan Sanjeewa Wijesena

How to run an LLM model locally with Hugging Face 🤗

Welcome back! In this post I'd like to show how to download an LLM model and run it on your local machine/environment.

We again use Hugging Face here 🤗. You will need a Hugging Face API key first.
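One common way to make the key available is to export it as an environment variable before running the notebook. A minimal sketch, assuming you have created a token under your Hugging Face account settings (the token value below is a placeholder, not a real key):

```python
import os

# Placeholder token -- replace with your own key from your
# Hugging Face account settings (Access Tokens page)
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"
```

Libraries such as `huggingface_hub` and LangChain's Hugging Face integrations pick this variable up automatically.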

Run the code below to download google/flan-t5-large to your local machine. It will take a while, and you will see the download progress in your Jupyter notebook.

from langchain.llms import HuggingFacePipeline
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM

# Download the tokenizer and model weights (cached locally after the first run)
model_id = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # FLAN-T5 is a seq2seq model

# Wrap the model in a transformers pipeline and expose it to LangChain
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, max_length=128)
local_llm = HuggingFacePipeline(pipeline=pipe)

prompt = PromptTemplate(
    input_variables=["name"],
    template="Can you tell me about footballer {name}",
)
chain = LLMChain(prompt=prompt, llm=local_llm)
chain.run("messi")
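Under the hood, the PromptTemplate step is just string substitution: the chain fills `{name}` into the template and sends the resulting text to the local model. A stdlib-only sketch of that step (no model download needed; `build_prompt` is an illustrative helper, not part of LangChain):

```python
template = "Can you tell me about footballer {name}"

def build_prompt(name: str) -> str:
    # Mirror what PromptTemplate does for a single input variable
    return template.format(name=name)

print(build_prompt("messi"))  # -> Can you tell me about footballer messi
```

This is the exact string the flan-t5-large pipeline receives when you call `chain.run("messi")`.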


Top comments (2)

Jayanath Liyanage

Great work πŸ‘

Voyagergle

let me try