Research-backed techniques to optimize prompts and few-shot learning
AI assistants like chatbots and voice assistants are becoming more capable every day. However, even advanced models like Claude can benefit from careful prompt engineering to get the most out of their skills.
In this post, I’ll share prompt engineering techniques you can use to enhance your AI assistant’s performance on tasks like:
Recalling information from long documents
Answering ambiguous questions
Avoiding biased word choices
Getting more creative and varied responses
With the right prompts, you can level up your AI assistant to be more helpful, knowledgeable, and human-like. I’ll provide examples and code samples for prompt techniques like using a <scratchpad>, providing contextual examples, and structuring instructions clearly.
While these techniques were tested on MCQs (multiple choice questions), the best practices here can serve as guidelines for optimizing many different types of prompts and outputs with an AI assistant. The core principles apply broadly.
Whether you’re an AI developer or a casual user of AI chatbots, you’ll learn tangible strategies to improve your prompts. I will make it simple, I promise.
When asking Claude a question that requires recalling information from a long document, what can you do?
Note before reading: it is not strictly necessary to explicitly label sections like “[Few Shot Examples]”, “[Prompt]”, or “[Passage]” (or any other bracketed labels) in the prompt example templates provided. The formatting is primarily for human readability!
-->Use a <scratchpad> to extract key quotes from the document that are relevant to answering the question. This improves recall performance.
Scratch? What? Okay, you can stop scratching your head. The <scratchpad> acts like a memory aid for Claude, created strategically by the user to call out the most relevant bits of the long context for answering the specific question. This technique improved Claude’s performance in recalling information from long documents in the prompt engineering experiments.
The workflow would be:
User provides a long document context
User identifies key quotes from the document that are relevant to answering the upcoming question
User inserts those quotes into a <scratchpad> block in the prompt
User then asks their question at the end of the prompt
When Claude processes the prompt, it can refer back to the <scratchpad> content extracted by the user to better recall where in the document context the information to answer the question is located.
Prompt Example Template:
Below is a 10,000 word document about the history of machine learning.
[insert long document here]
<scratchpad> Extract key quotes from the document that provide details about when deep learning started gaining popularity:
“In 2012, a deep learning model developed by Geoff Hinton and his students won the annual ImageNet image classification competition by a wide margin.”
“Between 2012 and 2015, deep learning began to be used for many more applications beyond image recognition, including natural language processing, audio recognition, and recommender systems.”
“In 2016, AlphaGo defeated the world champion in the game Go using deep neural networks, representing a major milestone in AI.” </scratchpad>
Question: When did deep learning start to become popular for applications beyond image recognition?
The <scratchpad> extracts three relevant quotes from the document that provide context to answer the question of when deep learning expanded beyond image classification. This focuses Claude’s attention on the key information.
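If you’re scripting these prompts, assembling the scratchpad block is straightforward. Here’s a minimal Python sketch; the helper name build_scratchpad_prompt and the placeholder text are my own illustrations, not from Anthropic:

```python
def build_scratchpad_prompt(document: str, quotes: list[str], question: str) -> str:
    """Assemble a prompt with user-extracted quotes in a <scratchpad> block."""
    scratchpad = "\n".join(f'"{q}"' for q in quotes)
    return (
        f"{document}\n\n"
        "<scratchpad>\n"
        "Key quotes from the document relevant to the question:\n"
        f"{scratchpad}\n"
        "</scratchpad>\n\n"
        f"Question: {question}"
    )

prompt = build_scratchpad_prompt(
    document="[insert long document here]",
    quotes=[
        "In 2012, a deep learning model developed by Geoff Hinton and his "
        "students won the annual ImageNet image classification competition "
        "by a wide margin.",
    ],
    question="When did deep learning start to become popular for applications "
             "beyond image recognition?",
)
```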
-->Provide several examples of other questions and answers from different parts of the document, to give Claude more context on the style of question you are asking.
The workflow:
- Start with a long document or passage as the context.
- Manually identify 3–5 short excerpts from different parts of the document that contain key details.
- Write the question and answer pairs based on each excerpt, demonstrating how to query details from the passage.
- Insert these question/answer examples sequentially in the prompt before the actual question you want Claude to answer.
- Ask the target question at the end that you want Claude to respond to based on the full document.
- Claude processes the entire prompt. The examples focus its attention on looking for similar types of details in the passage.
- Claude applies this priming when reaching the target question, and leverages the full context to locate the relevant detail needed for its answer.
Prompt example template:
Below is a transcript of a speech given by the CEO of Company XYZ:
[Insert long speech transcript here]
[Examples:]
Q: What does the CEO say is the company’s top priority for the year?
A: The CEO states that the top priority is expanding internationally and entering new markets.
Q: How much does the CEO project revenue will increase next quarter?
A: The CEO projects a 15% revenue increase next quarter.
Q: What competitive advantage does the CEO highlight in the speech?
A: The CEO highlights their investment in R&D and proprietary technologies as a key competitive advantage.
Question: How many new products does the CEO announce they will be launching this year?
This provides Claude with three examples extracted from elsewhere in the long document, demonstrating the style of subjective questions that require picking out key details from the passage. The examples prime Claude on what type of information is being sought, improving its ability to recall the relevant detail from the document to answer the actual question.
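In code, that assembly might look like the sketch below. The function name and placeholder strings are illustrative, not part of any official API:

```python
def build_few_shot_prompt(document: str,
                          examples: list[tuple[str, str]],
                          question: str) -> str:
    """Place worked Q/A pairs between the document and the target question."""
    qa_block = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{document}\n\n{qa_block}\n\nQuestion: {question}"

prompt = build_few_shot_prompt(
    document="[Insert long speech transcript here]",
    examples=[
        ("What does the CEO say is the company's top priority for the year?",
         "The CEO states that the top priority is expanding internationally "
         "and entering new markets."),
        ("How much does the CEO project revenue will increase next quarter?",
         "The CEO projects a 15% revenue increase next quarter."),
    ],
    question="How many new products does the CEO announce they will be "
             "launching this year?",
)
```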
-->Put the question and any instructions (like using the <scratchpad>) at the end of the prompt, so they are closest to where Claude needs to generate the answer.
The key is to position the most critical information (question + instructions) last, as close as possible to where Claude will produce its answer. This optimization aims to maximize Claude’s attention and recall when it matters most.
The workflow:
Provide the full context passage that the question will be about.
Include any examples or other directions earlier in the prompt.
Identify a key quote from the passage that provides useful context for answering the question.
Add the key quote within a <scratchpad> block.
Craft a specific question that requires deducing the answer from the full passage, not just the quote.
Place the question and direction together at the very end of the prompt.
Claude will process the full prompt, prime its attention with the <scratchpad>, and then need to recall details from the entire passage to answer the question at the end.
Claude generates its response based on the full context.
The prompt example template:
[Context:] [Long passage describing a historical event]
[Examples:]
Q: When did this event occur?
A: The passage states the event occurred in 1852.
Q: Where did the event take place?
A: According to the passage, the event took place in Boston, Massachusetts.
<scratchpad> “The passage mentions that there were over 200 people in attendance at the historical event.” </scratchpad>
Question: Approximately how many people were present at the event being described?
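Putting all the pieces in that order looks like this with the Anthropic Python SDK (pip install anthropic). This is a minimal sketch: the model id is an assumption, so check the current documentation for valid model names before running it.

```python
# Minimal sketch using the Anthropic Python SDK (pip install anthropic).
import anthropic

long_passage = "[Long passage describing a historical event]"
example_qa_pairs = (
    "Q: When did this event occur?\n"
    "A: The passage states the event occurred in 1852."
)
key_quote = "The passage mentions that there were over 200 people in attendance."
question = "Approximately how many people were present at the event being described?"

# Order matters: context first, examples next, the scratchpad priming quote,
# and the question last, closest to where Claude generates its answer.
prompt = (
    f"{long_passage}\n\n"
    f"{example_qa_pairs}\n\n"
    f'<scratchpad>"{key_quote}"</scratchpad>\n\n'
    f"Question: {question}"
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumption: substitute a current model id
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```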
-->If the document context is very long (95,000+ tokens, which is roughly 70,000 words or about 140 A4 pages), performance tends to dip slightly for information at the very end. So consider positioning key info earlier in the document if possible.
There’s no workflow or prompt example for this one; just organize the information within the document context itself.
-->When generating questions with Claude via Few Shot Learning, use a template that encourages Claude to specify details like which specific passage a question is about.
This avoids ambiguous phrases like “this document” when stitching passages together.
Few shot examples are a technique used in prompt engineering where you provide an AI assistant with a few examples demonstrating the task you want it to complete. This “primes” the model before asking it to generate new original content.
Specifically for question generation:
A few-shot example would show 2–3 sample questions about a passage, with the correct answers provided.
These examples demonstrate to the AI assistant the style and format of questions expected. The prompt then instructs the assistant to generate a new question in that same style about a new passage.
By studying the few shot examples first, the AI can mimic that question/answer pattern for its new output.
In the prompt itself, you would simply provide the examples directly, without labeling them as “few shot examples”.
Prompt example template:
[Few Shot Examples]
Q: What year did the Civil War begin, according to the passage about the origins of the Civil War?
A: The passage about the origins of the Civil War states that the war began in 1861.
Q: How long did the battle last, as described in the passage about the Battle of Gettysburg?
A: The passage about the Battle of Gettysburg indicates the battle lasted 3 days.
[Prompt:] Please generate a multiple choice question about the details contained in the passage below about the Lincoln-Douglas debates. When referencing the passage, specify “the passage about the Lincoln-Douglas debates” rather than using vague phrases like “this passage”. Provide 3 incorrect answers and 1 correct answer to the question.
[Passage:] [text about Lincoln-Douglas debates]
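If you’re generating many of these prompts programmatically, a small template helper keeps the passage name consistent everywhere it’s referenced. A sketch, with an illustrative function name of my own:

```python
def build_question_gen_prompt(passage_name: str, passage: str, examples: str) -> str:
    """Build a question-generation prompt that names the passage explicitly,
    so generated questions never fall back on vague phrases like "this passage"."""
    return (
        f"{examples}\n\n"
        f"Please generate a multiple choice question about the details contained "
        f"in the passage below about {passage_name}. When referencing the passage, "
        f'specify "the passage about {passage_name}" rather than vague phrases '
        f'like "this passage". Provide 3 incorrect answers and 1 correct answer.\n\n'
        f"{passage}"
    )

prompt = build_question_gen_prompt(
    passage_name="the Lincoln-Douglas debates",
    passage="[text about Lincoln-Douglas debates]",
    examples=(
        "Q: What year did the Civil War begin, according to the passage about "
        "the origins of the Civil War?\n"
        "A: The passage about the origins of the Civil War states that the war "
        "began in 1861."
    ),
)
```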
-->Include guidelines in the Few Shot Learning prompt to prevent Claude from making the correct answer obviously different or more detailed than the wrong answers.
When using few-shot learning to have Claude generate multiple-choice questions, we want the answers to be fair and balanced. We don’t want Claude to make the right answer obviously different from the wrong ones.
For example, we don’t want:
The right answer to be 5 sentences long and very detailed, while the wrong choices are only 1 sentence.
The right answer to sound more plausible or contain giveaway phrases like “this is correct”.
Instead, we want:
All answer choices to be similar length, ideally 1–2 sentences.
The right answer to seem equally possible as the wrong ones based on the passage.
No giveaway clues that reveal the answer.
To do this, we can provide examples in the few-shot prompt showing balanced answers. And give Claude instructions like:
Make all answers 1–2 sentences long.
Don’t include clues that give away the right answer.
All answers should seem possible based on the passage.
This way, picking the right answer requires deeper understanding, not just spotting surface clues.
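You can also sanity-check generated questions after the fact. The sketch below flags a correct answer that is much longer than its distractors; the function name and the 2x length threshold are my own choices, not from the article:

```python
def flag_unbalanced_answers(correct: str, distractors: list[str],
                            ratio: float = 2.0) -> bool:
    """Return True if the correct answer is noticeably longer than the
    average distractor, a surface clue that can give the answer away."""
    avg_wrong = sum(len(d.split()) for d in distractors) / len(distractors)
    return len(correct.split()) > ratio * avg_wrong

# This generated question would be flagged and sent back for regeneration:
flag_unbalanced_answers(
    correct=("The treaty was signed in 1848, ending the war and ceding vast "
             "territories in the American southwest to the United States."),
    distractors=["It was signed in 1850.", "It was signed in 1846.",
                 "It was signed in 1852."],
)  # -> True
```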
TLDR:
- Prompts are very important for AI assistants like Claude. Good prompts help Claude be smarter.
- There are special techniques you can use in prompts to make Claude better at remembering and answering questions.
- Some examples of good techniques are:
- Using a <scratchpad> to remind Claude of important details
- Giving Claude examples of other questions and answers
- Putting the question at the end of the prompt
- Moving key info earlier in long passages
- Asking Claude for balanced multiple-choice answers
- Prompt engineering takes practice but can really improve Claude’s skills!
- With the right prompts, Claude can be an even more helpful and knowledgeable assistant.
- In summary, prompt engineering lets us level up AI assistants like Claude. There are proven techniques for optimizing prompts toward different goals, and writing effective prompts is a useful skill for getting the most out of AI.
Notes:
Original article:
FREE READ LINK: https://medium.com/the-abcs-of-ai/use-prompt-engineering-to-level-up-your-ai-assistant-claude-b57cac293dcc?sk=83c4c7d0b00549b963d5b48c623ba4cf
I'm on a quest to make AI less scary, one Medium post at a time. Join me on this totally uplifting journey by following my publication and subscribe to the weekly newsletter.
ABCs of AI
https://medium.com/the-abcs-of-ai
How did you like this post?
You can reach me at contact@abcsofai.info. I welcome your feedback and suggestions. And,
Follow me on the socials:
Linkedin: www.linkedin.com/in/tfmissgorgeoustech
Facebook: facebook/ABCsofAI
Youtube: @ABCs-of-AI
X: @GorgeousTech
Thank you.