Kushal
Prompt Engineering - Part 1

In this article, I will provide a comprehensive tutorial on prompt engineering, highlighting how to get the best results from Large Language Models (LLMs) such as OpenAI's ChatGPT.

Prompt engineering has gained significant popularity and widespread usage since the advent of LLMs, leading to a revolution in the field of Natural Language Processing (NLP). The beauty of prompt engineering lies in its versatility, allowing professionals from diverse backgrounds to effectively utilize it and maximize the potential of LLMs.

Basic working of ChatGPT

ChatGPT works with the concept of "assistant" and "user" roles to facilitate interactive conversations. The model operates in a back-and-forth manner, where the user provides input or messages, and the assistant responds accordingly.

[Figure: internal working of an LLM]

The user role represents the individual engaging in the conversation. As a user, you can provide instructions, queries, or any text-based input to the model - which forms the prompt to the model.

The assistant role refers to the AI language model itself, which is designed to generate responses based on the user's input.
The model processes the conversation history, including both user and assistant messages, to generate a relevant and coherent response. It takes into account the context and information provided in the conversation history to generate more accurate and appropriate replies.

[Figure: user and assistant roles]

The conversation typically starts with a system message that sets the behavior of the assistant, followed by alternating user and assistant messages. By maintaining a conversational context, the model can generate more consistent and context-aware responses.

To maintain the context, it is important to include the relevant conversation history when interacting with the model. This ensures that the model understands the ongoing conversation and can provide appropriate responses based on the given context.

[Figure: maintaining conversation context]
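As a minimal sketch (the conversation itself is hypothetical), the history is passed to the model as a list of role-tagged messages. The final question below is only answerable because the earlier turns are included:

```python
# A minimal sketch with a hypothetical conversation: prior turns are sent
# back to the model as role-tagged messages so it can resolve references
# like "my name" from earlier in the chat.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi, my name is Kushal."},
    {"role": "assistant", "content": "Nice to meet you, Kushal!"},
    {"role": "user", "content": "What is my name?"},  # needs the history above
]

# Each API call would send this full list, e.g.:
# openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print([m["role"] for m in messages])
```

Dropping the earlier messages from the list would leave the model with no way to answer the last question.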

Template for Prompt Usage

In this section, we will write the boilerplate code that will form the basis for all our tasks.
To begin with, we need to generate a secret key from our OpenAI account.

!pip install openai

import openai 

# Set the secret key generated from your OpenAI account
openai.api_key = "<OPEN-AI-SECRET-KEY>"


# Template function 
def get_completion(prompt, model="gpt-3.5-turbo", temperature = 0):
    messages = [{"role": "system", "content": "You are the assistant."},
                {"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature, # this is the degree of randomness of the model's output
    )
    reply =  response.choices[0].message["content"]
    return reply


The function has three inputs:

  • Prompt
  • Model (ChatGPT)
  • Temperature

We have covered the model and the role of the "user." Now, let's move on to the next two inputs: prompt and temperature.

Prompt refers to the text input provided to the model, which serves as a guiding instruction or query for generating a response.

Temperature, on the other hand, is a hyperparameter that plays a crucial role in determining the behavior of the model's output. Not to be confused with its real-world connotation, this metric controls the level of randomness in the generated responses. By adjusting the temperature value, we can influence the model's output.

When the temperature is set to a higher value, the model produces more diverse and creative responses. Conversely, lower temperature values make the model more focused and deterministic, often resulting in more precise but potentially less varied outputs.

Choosing an appropriate temperature depends on the specific task and desired output.
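To build intuition, here is an illustration of the underlying idea (not OpenAI's exact implementation): temperature divides the model's scores (logits) before the softmax, so values below 1 sharpen the distribution toward the most likely token, while values above 1 flatten it:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Scale logits by 1/temperature, then apply a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
sharp = softmax_with_temperature(logits, 0.5)  # low temp: more deterministic
flat = softmax_with_temperature(logits, 2.0)   # high temp: more random
print(sharp[0] > flat[0])  # the top token dominates more at low temperature
```

This is why temperature=0 in our template function makes outputs nearly deterministic, which is convenient for reproducible examples.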

Basics of Prompting

ChatGPT can perform a plethora of tasks, such as Text Summarisation, Information Extraction, Question Answering, Text Classification, Sentiment Analysis, and Code Generation, to name a few.
Prompts can be designed to undertake single or multiple tasks depending on the use case.

In the example below, we will showcase a basic text summarisation task performed by the model.

prod_desc = """
3 MODES & ROTATABLE NOZZLE DESIGN- This portable oral irrigator comes with Normal, Soft and Pulse modes which are best for professional use. The 360° rotatable jet tip design allows easy cleaning helping prevent tooth decay, dental plaque, dental calculus, gingival bleeding and dental hypersensitivity.
DUAL WATERPROOF DESIGN- The IPX7 waterproof design is adopted both internally and externally to provide dual protection. The intelligent ANTI-LEAK design prevents leakage and allows the dental flosser to be used safely under the running water.
UPGRADED 300 ML LARGE CAPACITY WATER TANK- The new water tank is the largest capacity tank available and provides continuous flossing for an entire session. The removable full-opening design allows thorough cleaning thus preventing formation of bacteria and limescale deposits.
CORDLESS & QUALITY ASSURANCE- Cordless and lightweight power irrigator comes with a powerful battery that lasts upto 14 days on a single charge
RECHARGEABLE & QUALITY ASSURANCE- Cordless and lightweight power irrigator comes with a powerful battery that lasts upto 14 days on a single charge
"""


prompt = f"""
Your task is to generate a short summary of a product \
description from an ecommerce site. 

Summarize the description below, delimited by tags, in at most 50 words. 

Review: <tag>{prod_desc}</tag>

Output should be in JSON format with "summary" as key.
"""

response = get_completion(prompt)
print(response)


Output :

{
"summary": "This portable oral irrigator has 3 modes and a rotatable nozzle design for easy cleaning. It has a dual waterproof design and a large 300ml water tank. It is cordless, rechargeable, and comes with a powerful battery that lasts up to 14 days on a single charge."
}

Some key takeaways from the above code:

  • When constructing a prompt, it's important to provide clear and specific instructions to guide the model's behavior. This helps ensure that the generated output aligns with your desired outcome.
  • Delimit input data: To differentiate the prompt's instructions from the actual input data, it's advisable to use delimiters. Delimiters can take various forms, such as quotation marks (" "), angle brackets (< >), HTML-style tags (`<tag></tag>`), colons (:), or triple backticks. By using delimiters, you create a visual distinction that aids in parsing the prompt.
  • Request structured output: If your task requires a specific format or structure for the model's response, make sure to explicitly mention it in the prompt. This makes the output easy to consume programmatically.
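Requesting structured output pays off downstream: the reply can be parsed directly instead of scraped with string matching. A small sketch, using a hypothetical model reply shaped like the JSON our prompt requested:

```python
import json

# Hypothetical reply, shaped like the JSON the prompt asked for.
reply = '{"summary": "Portable oral irrigator with 3 modes, a 300 ml tank, and a 14-day battery."}'

# json.loads fails loudly if the model ignored the requested format,
# which is easier to detect than silently mis-parsing free text.
data = json.loads(reply)
print(data["summary"])
```

From here the summary can be stored in a database or displayed in a UI without any extra text cleanup.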

Here is a more detailed breakdown of the above prompt.

| Element of Prompt | Breakdown |
| --- | --- |
| Instruction | To generate a short summary of a product description. |
| Task | Summarize the description |
| Task Constraints | At most 50 words. |
| Input Data Delimiter | `<tag> </tag>` |
| Output Format | JSON |

The key elements of a prompt are: Instruction, Tasks, Constraints, Output Indicator, and Input Data.

Multi-tasking Prompts

Consider a scenario where you are presented with a text and need to perform sentiment analysis, summarize the content, and extract topics from it.
In the pre-LLM era, accomplishing these tasks would typically involve training separate models for each task or relying on pre-trained models. However, with the advent of LLMs like ChatGPT, all of these tasks can now be efficiently executed using a single prompt. This eliminates the need for multiple specialized models and streamlines the workflow.



review = f""" Writing this review after using it for a couple of months now. It can take some time to get used to since the water jet is quite powerful. It might take you a couple of tries to get comfortable with some modes. Start with the teeth, get comfortable and then move on to the gums. Some folks may experience sensitivity. I experienced it for a day or so and then went away.
It effectively knocks off debris from between the teeth especially the hard to get like the fibrous ones. I haven't seen much difference in the tartar though. Hopefully, with time, it gets rid of it too.
There are 3 modes of usage: normal, soft and pulse. I started with soft then graduated to pulse and now use normal mode. For the ones who are not sure, soft mode is safe as it doesn't hit hard. Once you get used to the technique of holding and using the product, you could start experimenting with the other modes and choose the one that best suits you.
One time usage of the water full of tank should usually be sufficient if your teeth are relatively clean. If, however, you have hard to reach spaces with buildup etc. it might require a refill for a usage.
If you don't refill at all, one time full recharge of the battery in normal mode will last you 4 days with maximum strength of the water jet. If you refill it once, it'll last you 2 days after which the strength of the water jet reduces.
As for folks who are worried about the charging point getting wet, I accidentally used it once without the plug for the charging point and yet it worked fine and had no issues. Ideally keep the charging point covered with the plug provided with the product.
It has 2 jet heads (pink and blue) and hence the product can be used by 2 people as long as it's used hygienically. For charging, it comes with a USB cable without the adapter which shouldn't be an issue as your phone adapter should do the job.
I typically wash the product after every usage as the used water tends to run on the product during usage.
One issue I see is that the clasp for the water tank could break accidentally if not handled properly which will render the tank useless. So ensure to not keep it open unless you are filling the tank.
"""


prompt = f"""
Your task is to provide insights for the product review \
on an e-commerce website, which is delimited by \
triple colons.

Perform the following tasks:
1. Identify the product.
2. Summarize the product review, in up to 50 words.
3. Analyze the sentiment of the review - positive/negative/neutral
4. Extract topics that the user didn't like about the product.
5. Identify the name of the company; if not mentioned, output "not mentioned"

Use the following format:
1. Product - <product>
2. Summary - <summary>
3. Sentiment - <user_sentiment>
4. Topics - <negative_topics>
5. Company - <company>
Use JSON format for the output.

Product review: :::{review}:::
"""


response = get_completion(prompt)
print(response)




Output:

{
"Product": "Water Flosser",
"Summary": "The water flosser is effective in removing debris from between teeth, but may take some time to get used to. It has 3 modes of usage and a full tank can last for one usage. The charging point should be covered with the provided plug. The clasp for the water tank could break if not handled properly.",
"Sentiment": "Neutral",
"Topics": "Difficulty in getting used to the product, sensitivity, no significant difference in tartar removal, clasp for water tank could break",
"Company": "not mentioned"
}

From the aforementioned example, we can infer that by explicitly listing the tasks and providing a structured format, we enable ChatGPT to understand and address each task individually.

Furthermore, we can enhance the prompt by including specific conditions or instructions for each task. This allows for a more tailored and accurate response from ChatGPT, as it can take into account the unique requirements and constraints of each task.

Iterative Prompt Development

As we reach the final section of the article, it's crucial to acknowledge that the process of designing and crafting prompts is similar to optimizing or selecting ML models. It is an iterative process, although typically simpler and less complex.

[Figure: iterative prompt development]

Creating effective prompts requires experimentation, observation, and continuous refinement. It's important to iterate and fine-tune the prompts based on the desired output required by the use-cases.
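In practice, iteration can be as lightweight as keeping versioned prompt templates and comparing their outputs side by side. A sketch (the version names and wording are illustrative, not from a library):

```python
# Illustrative prompt versions: each refinement adds a constraint that was
# found missing when inspecting the previous version's output.
prompt_versions = {
    "v1": "Summarize the product description below.",
    "v2": "Summarize the product description below in at most 50 words.",
    "v3": ("Summarize the product description below in at most 50 words. "
           "Output JSON with 'summary' as the key."),
}

def build_prompt(version, text):
    # Combine the versioned instruction with delimited input data.
    return f"{prompt_versions[version]}\n\n<tag>{text}</tag>"

print(build_prompt("v3", "3 MODES & ROTATABLE NOZZLE DESIGN..."))
```

Each version would then be passed to `get_completion()` and its output checked against the desired format before settling on a final prompt.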

End-notes

In Part 1 of this series, we provided a brief introduction to the foundations of Prompt Engineering that can get you started on building your own applications.
As we move forward, subsequent parts will delve into various techniques and concepts, including LangChain and chatbots.
