Chaincrafter is a small alternative to LangChain, MiniChain and other libraries for creating prompts and chains of prompts.
Install it with `pip install chaincrafter`.
Currently it supports:
- the OpenAI API
- async calls to the OpenAI API
- experiments for testing combinations of model parameters and comparing the resulting outputs
See the documentation and the Chaincrafter GitHub repo for more details.
It's a work in progress; I'm planning to add support for local LLMs such as llama.cpp and other APIs such as CerebriumAI, along with support for function calling and agents.
How to create a chain of prompts
It's easy! You create multiple `Prompt` objects, each with a set of input variables, and then pass them to a `Chain`. For each prompt in the chain, you name an output variable, which can then be used as an input variable by later prompts.
Here's an example:
```python
from chaincrafter import Chain, Prompt
from chaincrafter.models import OpenAiChat

chat_model = OpenAiChat(
    temperature=0.9,
    model_name="gpt-3.5-turbo",
    presence_penalty=0.1,
    frequency_penalty=0.2,
)

system_prompt = Prompt("You are a helpful assistant who responds to questions about the world")
hello_prompt = Prompt("Hello, what is the capital of France? Answer only with the city name.")
followup_prompt = Prompt("{city} sounds like a nice place to visit. What is the population of {city}?")

chain = Chain(
    system_prompt,
    (hello_prompt, "city"),
    (followup_prompt, "followup_response"),
)

messages = chain.run(chat_model)
for message in messages:
    print(f"{message['role']}: {message['content']}")
```
In the example, we create three prompts: `system_prompt`, `hello_prompt`, and `followup_prompt`. Then we build a chain that feeds the output of `hello_prompt`, stored as `city`, into `followup_prompt`. Notice how the variable names are used in the prompt template string.
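The substitution itself works like an ordinary Python format string; here is a minimal illustration of that idea using `str.format` (a sketch of the concept, not Chaincrafter's actual internals):

```python
# The {city} placeholder in the prompt template is filled with the
# previous prompt's output, much like str.format substitution.
template = "{city} sounds like a nice place to visit. What is the population of {city}?"
filled = template.format(city="Paris")
print(filled)
# Paris sounds like a nice place to visit. What is the population of Paris?
```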
Afterwards we print out the whole conversation.
From this point you can extract all of the assistant's responses and then parse them further.
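Since `chain.run` returns a list of message dicts with `role` and `content` keys (as printed above), pulling out the assistant's responses is a simple filter. The sample data here is copied from the transcript above for illustration:

```python
# Sample conversation in the same shape that chain.run() returns.
messages = [
    {"role": "system", "content": "You are a helpful assistant who responds to questions about the world"},
    {"role": "user", "content": "Hello, what is the capital of France? Answer only with the city name."},
    {"role": "assistant", "content": "Paris"},
    {"role": "user", "content": "Paris sounds like a nice place to visit. What is the population of Paris?"},
    {"role": "assistant", "content": "As of 2021, the population of Paris is approximately 2.16 million people."},
]

# Keep only the assistant's replies for further parsing.
assistant_replies = [m["content"] for m in messages if m["role"] == "assistant"]
print(assistant_replies)
```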
Comparing GPT-3.5 and GPT-4 at various temperatures using experiments
Usually, tutorials for LLMs and the OpenAI GPT models show how to call the API and create prompts. Let's do something more interesting! Let's see how the output differs when the `temperature` is 0.7 or 1.5, and compare and contrast the output from GPT-3.5-turbo and GPT-4.
Here's the code:
```python
from chaincrafter import Chain, Prompt
from chaincrafter.experiments import OpenAiChatExperiment

system_prompt = Prompt("You are a helpful assistant who responds to questions about the world")
hello_prompt = Prompt("Hello, what is the capital of France? Answer only with the city name.")
followup_prompt = Prompt("{city} sounds like a nice place to visit. What is the population of {city}?")

chain = Chain(
    system_prompt,
    (hello_prompt, "city"),
    (followup_prompt, "followup_response"),
)

experiment = OpenAiChatExperiment(
    chain,
    model_name=["gpt-4", "gpt-3.5-turbo"],
    temperature=[0.7, 1.5],
    presence_penalty=[0.1],
    frequency_penalty=[0.2],
)

experiment.run()
```
Here's the output:
```
Run 0 at 2023-08-04 09:42:17 with model parameters: {'model_name': 'gpt-4', 'temperature': 0.7, 'presence_penalty': 0.1, 'frequency_penalty': 0.2, 'top_p': 1.0, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': None, 'logit_bias': None}
system: You are a helpful assistant who responds to questions about the world
user: Hello, what is the capital of France? Answer only with the city name.
assistant: Paris
user: Paris sounds like a nice place to visit. What is the population of Paris?
assistant: As of 2021, the population of Paris is approximately 2.16 million people.

Run 0 at 2023-08-04 09:42:24 with model parameters: {'model_name': 'gpt-4', 'temperature': 1.5, 'presence_penalty': 0.1, 'frequency_penalty': 0.2, 'top_p': 1.0, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': None, 'logit_bias': None}
system: You are a helpful assistant who responds to questions about the world
user: Hello, what is the capital of France? Answer only with the city name.
assistant: Paris
user: Paris sounds like a nice place to visit. What is the population of Paris?
assistant: As of the latest data from 2021, the population of Paris is approximately 2.14 million people in the city proper. Note that the population of the larger Paris Metropolitan area is considerably higher.

Run 0 at 2023-08-04 09:42:25 with model parameters: {'model_name': 'gpt-3.5-turbo', 'temperature': 0.7, 'presence_penalty': 0.1, 'frequency_penalty': 0.2, 'top_p': 1.0, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': None, 'logit_bias': None}
system: You are a helpful assistant who responds to questions about the world
user: Hello, what is the capital of France? Answer only with the city name.
assistant: Paris
user: Paris sounds like a nice place to visit. What is the population of Paris?
assistant: The population of Paris is approximately 2.2 million people.

Run 0 at 2023-08-04 09:42:27 with model parameters: {'model_name': 'gpt-3.5-turbo', 'temperature': 1.5, 'presence_penalty': 0.1, 'frequency_penalty': 0.2, 'top_p': 1.0, 'n': 1, 'stream': False, 'stop': None, 'max_tokens': None, 'logit_bias': None}
system: You are a helpful assistant who responds to questions about the world
user: Hello, what is the capital of France? Answer only with the city name.
assistant: Paris
user: Paris sounds like a nice place to visit. What is the population of Paris?
assistant: As of 2021, the estimated population of Paris is over 2.1 million residents in the city proper area.
```
So here's how the last response looks in a table:

| GPT-4, temperature = 0.7 | GPT-4, temperature = 1.5 | GPT-3.5-turbo, temperature = 0.7 | GPT-3.5-turbo, temperature = 1.5 |
|---|---|---|---|
| As of 2021, the population of Paris is approximately 2.16 million people. | As of the latest data from 2021, the population of Paris is approximately 2.14 million people in the city proper. Note that the population of the larger Paris Metropolitan area is considerably higher. | The population of Paris is approximately 2.2 million people. | As of 2021, the estimated population of Paris is over 2.1 million residents in the city proper area. |
Using chains of prompts with experiments is a good way to check and fine-tune model parameters.
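The four runs above come from running the chain over every combination of the listed parameter values; conceptually, that is a Cartesian product over the parameter lists. A small sketch of that idea (not Chaincrafter's actual internals):

```python
from itertools import product

# Parameter lists as passed to OpenAiChatExperiment above.
model_names = ["gpt-4", "gpt-3.5-turbo"]
temperatures = [0.7, 1.5]

# Every combination of model and temperature becomes one run.
runs = [
    {"model_name": m, "temperature": t}
    for m, t in product(model_names, temperatures)
]
print(len(runs))  # 4 runs, matching the four transcripts above
```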