By now, you've probably seen the onslaught of large language models (LLMs), whether closed source (e.g. OpenAI), open source (e.g. LLaMA, HuggingChat), or maybe you're hosting one yourself (💰💰), and the language skills they possess.
In this post I want to talk about giving your LLMs "skills". By skills, I mean abilities a human might have. In particular, I'm going to focus on three: the ability to focus on a very specific domain, the ability to reason and take an action, and memory.
Why?
Well, these "skills" are going to become the basis of any LLM application you want to build. Purely prompting an LLM via an API or UI can only take you so far: a general LLM isn't trained or fine-tuned on your specific data, it doesn't have access to the most recent data/events/news, and it doesn't have a memory by default.
Okay so, the application I'm going to focus on is a chatbot for Stoicism for the modern age. While Stoicism is great, it's ancient, and I want actionable, modern-day advice BASED on Stoicism. Like so: https://www.linkedin.com/feed/update/urn:li:activity:7056228587254755328/
While models like GPT-4 already have knowledge of Stoicism, I want my model to narrow in on particular practitioners. I don't want it basing its Stoic advice on all of the internet/freely available data. So, enter: Weaviate DB, and two of my favourite books: Meditations by Marcus Aurelius and Letters from a Stoic by Seneca.
And the next thing I mentioned was "actionable, modern day"... so where would I, as a human being, get that kind of information? Google. And that's exactly the ability I'm going to give my LLM.
Now, as a chatbot, it also needs to have memory: it needs to be able to remember what we were talking about. There are a few ways to do that, and in this case we're going to use the simplest form: just via the prompt.
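This "memory via the prompt" approach can be sketched in a few lines of plain Python (the names here are illustrative, not from the actual code we'll get to later):

```python
# A minimal sketch of prompt-based memory: we simply splice the running
# chat history into every new prompt we send to the model.
chat_history = []  # list of (speaker, text) tuples

def build_prompt(question):
    # Render the history as plain text so the model can "remember" it
    history = "\n".join(f"{speaker}: {text}" for speaker, text in chat_history)
    return f"Chat History:\n{history}\nLatest Question: {question}"

chat_history.append(("Human", "How do I stay calm?"))
chat_history.append(("AI", "Focus on what you can control."))

print(build_prompt("And at work?"))
```

The model never actually "remembers" anything; every request just carries the whole conversation along with it.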
And finally, my favourite: the ability to reason. As a chatbot that gives modern-day advice based on Stoicism, it needs to not just answer a question but logically think about how to get to the answer. For example: the user wants Stoic advice in a modern context... so I should search the DB. Then, how do I make it modern? I should Google it. What do I need to Google?
So, with all of that in mind, this is roughly what we're going to build:
We're using two tools: Google search (via Serper API) and a Vector DB (Weaviate), which contains the two books mentioned earlier.
On top of that, we're using the concept of an "Agent", which is a wrapper around a model: you input the query here, and get back an action to take.
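To make that contract concrete, here's a rough sketch of what the Agent wrapper does with the model's reply (this mirrors the idea, not LangChain's actual internals; the format and function name are illustrative):

```python
import re

# The agent prompt asks the LLM to reply in a fixed text format, e.g.:
#   Thought: I need current info
#   Action: Search
#   Action Input: gym habits 2023
# The wrapper then parses out which action to take, and with what input.
def parse_action(llm_output):
    action = re.search(r"Action: (.*)", llm_output).group(1).strip()
    action_input = re.search(r"Action Input: (.*)", llm_output).group(1).strip()
    return action, action_input

reply = "Thought: I need current info\nAction: Search\nAction Input: gym habits 2023"
print(parse_action(reply))  # → ('Search', 'gym habits 2023')
```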
The AgentExecutor, written in Python, is responsible for actually executing that action (i.e. a Google search or a DB search).
And with this kind of setup, here's an example of its thoughts and actions:
Here I ask it how to get motivation for the gym. First it searches for motivation via Stoicism, then it understands and reasons, and then it decides to Google for building habits that align with your values.
Show me the code!!
You can find the full code here: https://gist.github.com/aarushik93/2a9c9c050e78b34ff2a701bf5c6faf31. Below I'll walk you through the most relevant bits!
Note: We're not going through ingesting data into a vector DB in this post. I'll show you in another post, or you can check out the Weaviate documentation: https://weaviate.io/developers/weaviate
# Imports needed for this snippet (LangChain-era APIs as used in the gist)
import weaviate
from langchain.vectorstores import Weaviate
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.agents import Tool

client = weaviate.Client(
    url=WEAVIATE_URL,
    additional_headers={
        'X-OpenAI-Api-Key': OPENAI_API_KEY
    }
)
vectorstore = Weaviate(client, "Paragraph", "content")
ret = vectorstore.as_retriever()

AI = OpenAI(temperature=0.2, openai_api_key=OPENAI_API_KEY)

# Set up the question-answering system
qa = RetrievalQA.from_chain_type(
    llm=AI,
    chain_type="stuff",
    retriever=ret,
)

search = GoogleSerperAPIWrapper(serper_api_key=SERPER_API_KEY)

# Set up the conversational agent
tools = [
    Tool(
        name="Stoic System",
        func=qa.run,
        description="Useful for getting information rooted in Stoicism. Ask questions based on themes, life issues and feelings.",
    ),
    Tool(
        name="Search",
        func=search.run,
        description="Useful for when you need to get current, up to date answers.",
    ),
]
In this block we're doing a few things:
- Setting up the Weaviate and SerperAPI clients
- Setting up the RetrievalQA chain, which is going to allow us to query the DB using the question we/the AgentExecutor provide it AND stop the LLM from hallucinating or making up answers.
The prompt template for that is:
"""Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:"""
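Mechanically, the "stuff" chain type just fills that template: every retrieved paragraph gets stuffed into {context} alongside the question. A minimal sketch (the function name is my own, not LangChain's):

```python
TEMPLATE = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:"""

def stuff_prompt(docs, question):
    # "stuff" = concatenate every retrieved document into the context slot
    return TEMPLATE.format(context="\n\n".join(docs), question=question)

docs = [
    "You have power over your mind - not outside events.",
    "Waste no more time arguing about what a good man should be. Be one.",
]
print(stuff_prompt(docs, "How do I stay calm?"))
```

Because the instruction to say "I don't know" rides along with the stuffed context, the LLM is nudged to answer only from the retrieved passages.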
- And finally, we're setting up the Tools for the AgentExecutor to actually use. Take note of the descriptions: these are what help the LLM determine which tool it tells the Agent it wants to use.
Okay, next up:
prefix = """You are a Stoic giving people advice using Stoicism, based on the context and memory available.
Your answers should be directed at the human, say "you".
Add specific examples, relevant in 2023, to illustrate the meaning of the answer.
You can use these two tools:"""
suffix = """Begin!
Chat History:
{chat_history}
Latest Question: {input}
{agent_scratchpad}"""
## agent
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
This is the actual prompt we're using to get the LLM to act like a Stoic.
First, take note: we tell it that it has access to two tools. We haven't specified the tools yet, but that's okay; this is a template, and the Agent wrapper will actually craft the full prompt, tools and all.
Next, notice we're also templating in the chat history along with the latest question; that way the LLM has a so-called memory.
Keep in mind that with this approach, the memory is limited by the number of tokens we can actually send as context, and that depends on the model you're using.
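A crude sketch of how you might keep that history under a budget, approximating one token per word (real counts come from the model's tokenizer, e.g. tiktoken for OpenAI models; the function name is illustrative):

```python
# Keep only as many recent turns as fit in a rough "token" budget.
def trim_history(turns, max_tokens=50):
    kept, total = [], 0
    # Walk backwards so the most recent turns survive
    for turn in reversed(turns):
        cost = len(turn.split())  # crude proxy: one token per word
        if total + cost > max_tokens:
            break
        kept.append(turn)
        total += cost
    return list(reversed(kept))

turns = [f"turn {i}: " + "word " * 20 for i in range(10)]
print(len(trim_history(turns, max_tokens=50)))  # only the last few turns fit
```

Older turns simply fall off the front, which is exactly the trade-off of prompt-based memory.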
Also take note of this so-called "agent scratchpad": this is where the model does its "thinking".
The next part to take note of:
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=st.session_state.memory
)
This is the actual executor: when the LLM outputs a decision, this is the thing that executes it.
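Under the hood, that execution is essentially a loop: call the model, parse out an action, run the matching tool, append the observation to the scratchpad, and repeat until the model gives a final answer. A stripped-down sketch with a stubbed model (a toy format of my own, not LangChain's real internals):

```python
def run_agent(llm, tools, question, max_steps=5):
    scratchpad = ""
    for _ in range(max_steps):
        reply = llm(question, scratchpad)
        if reply.startswith("Final Answer:"):
            return reply[len("Final Answer:"):].strip()
        # In this toy format, replies look like "Action: <tool>|<input>"
        tool_name, tool_input = reply[len("Action:"):].strip().split("|")
        observation = tools[tool_name](tool_input)
        # Feed the result back so the next call can reason over it
        scratchpad += f"\n{reply}\nObservation: {observation}"
    return "Gave up"

# Stub LLM: first asks to search, then answers using the observation
def stub_llm(question, scratchpad):
    if "Observation:" not in scratchpad:
        return "Action: Search|gym motivation"
    return "Final Answer: Align the gym with your values."

tools = {"Search": lambda q: f"results for {q}"}
print(run_agent(stub_llm, tools, "How do I get gym motivation?"))
# → Align the gym with your values.
```

The max_steps cap matters in practice: without it, a model that never emits a final answer would loop (and bill you) forever.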
And there you have it. Your Stoic Bot.