
Shan for Arkroot


The Art of Inferring with Large Language Models

Note: This is a documented version of the ChatGPT Prompt Engineering for Developers course. You can find the course here.

Inferring is a powerful tool for extracting meaning from text. It can be used to understand customer feedback and to identify trends in social media.

Here are some examples of how inferring can be used in prompt engineering:

Sentiment analysis: Inferring can be used to identify the sentiment of a piece of text, such as whether it is positive, negative, or neutral. This can be used to understand customer feedback, identify trends in social media, and generate new ideas.

Eg: We can use sentiment analysis to categorize Play Store reviews as positive, negative, or neutral. This helps us prioritise the issues mentioned in the reviews, so the customer support team can quickly identify and resolve them, and the engineers can focus on fixing high-priority bugs.

Topic modeling: Inferring can be used to identify the topics of a piece of text. This can be used to understand the content of a document, identify patterns in data, and generate new ideas.

Named entity recognition: Inferring can be used to identify named entities in a piece of text, such as people, places, and organizations. This can be used to understand the context of a document, identify potential customers, and generate new ideas.

For example, if you are running an e-commerce website and you have received reviews from consumers who have purchased from your website, you can use a large language model (LLM) to analyse the reviews and identify named entities, such as the shop name and product name. This information can then be used to improve the customer experience, such as by recommending similar products or providing customer support.
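
As a rough sketch of that idea (the review text, field names, and prompt wording below are invented for illustration, not taken from the course):

```python
# Hypothetical sketch: extract named entities (product and shop) from an e-commerce review.
# In practice the prompt would be sent to the model with a helper like the
# get_completion function sketched a little further below.
ecommerce_review = """
Ordered the AeroGlow desk lamp from the BrightHome store. It arrived in two days
and the build quality is great for the price.
"""

prompt = f"""
Identify the following named entities in the product review delimited by triple single quotes:
- Product name
- Shop or brand name

Format your response as a JSON object with "product" and "shop" as the keys.
Use "unknown" if a value is not present in the review.

Review: '''{ecommerce_review}'''
"""
print(prompt)  # in practice: response = get_completion(prompt)
```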

Section from the course starts here:

*In this article, we will infer sentiment and topics from product reviews and news articles.*
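
Throughout the course, each prompt is interpolated into a Python f-string and sent to the model through a small helper built on the OpenAI Python SDK. Here is a minimal sketch of that setup (it assumes the legacy pre-1.0 `openai` package and the `gpt-3.5-turbo` model; the API key handling is simplified):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # simplified; in practice load the key from an environment variable

def get_completion(prompt, model="gpt-3.5-turbo"):
    """Send a single user message to the chat model and return its text reply."""
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # keep classification-style answers stable and repeatable
    )
    return response.choices[0].message["content"]
```

The prompts below are shown as plain text; to run them, wrap each one in `prompt = f"""..."""` (so placeholders like `{lamp_review}` get interpolated) and call `get_completion(prompt)`.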

lamp_review = """
Needed a nice lamp for my bedroom, and this one had \
additional storage and not too high of a price point. \
Got it fast.  The string to our lamp broke during the \
transit and the company happily sent over a new one. \
Came within a few days as well. It was easy to put \
together.  I had a missing part, so I contacted their \
support and they very quickly got me the missing piece! \
Lumina seems to me to be a great company that cares \
about their customers and products!!
"""


What is the sentiment of the following product review, 
which is delimited with triple single quotes?

Review text: '''{lamp_review}'''


Output:

*(Screenshot: review sentiment analysis — the model replies with a longer, full-sentence answer.)*

As you can see from the screenshot, it's a long output. Instead, we can ask the LLM to answer with a single keyword, either positive or negative.

"""
What is the sentiment of the following product review, 
which is delimited with triple single quotes?

Give your answer as a single word, either "positive" \
or "negative".

Review text: '''{lamp_review}'''
"""

Output:

*(Screenshot: the model's one-word sentiment answer.)*
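
Constraining the answer to a single word makes the output easy to act on programmatically. Here is a small sketch, reusing the `get_completion` helper from above (the downstream actions are invented for illustration):

```python
# Sketch: a one-word sentiment answer is trivial to branch on.
prompt = f"""
What is the sentiment of the following product review,
which is delimited with triple single quotes?

Give your answer as a single word, either "positive" or "negative".

Review text: '''{lamp_review}'''
"""
sentiment = get_completion(prompt).strip().lower()

if sentiment == "negative":
    print("Route this review to the customer support queue.")  # hypothetical downstream action
else:
    print("Add this review to the positive-highlights list.")  # hypothetical downstream action
```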

We can also ask an LLM to identify the emotions that the writer of a review is expressing, whether the writer is expressing any anger, and the company name and product name mentioned in the review. All of these tasks can be performed in a single prompt.

"""
Identify the following items from the review text: 
- Sentiment (positive or negative)
- List the emotions expressed in the review (no more than 5)
- Is the reviewer expressing anger? (true or false)
- Item purchased by reviewer
- Company that made the item

The review is delimited with triple single quotes. \
Format your response as a JSON object with \
"Sentiment", "Anger", "Emotions", "Item" and "Brand" as the keys.
If the information isn't present, use "unknown" \
as the value.
Make your response as short as possible.
Format the Anger value as a boolean.

Review text: '''{lamp_review}'''
"""

Analysing content from text

The next topic in the course is topic analysis 🙃.

Inferring can also be used to identify the topics of a long piece of text.

story = """
In a recent survey conducted by the government, 
public sector employees were asked to rate their level 
of satisfaction with the department they work at. 
The results revealed that NASA was the most popular 
department with a satisfaction rating of 95%.

One NASA employee, John Smith, commented on the findings, 
stating, "I'm not surprised that NASA came out on top. 
It's a great place to work with amazing people and 
incredible opportunities. I'm proud to be a part of 
such an innovative organization."

The results were also welcomed by NASA's management team, 
with Director Tom Johnson stating, "We are thrilled to 
hear that our employees are satisfied with their work at NASA. 
We have a talented and dedicated team who work tirelessly 
to achieve our goals, and it's fantastic to see that their 
hard work is paying off."

The survey also revealed that the 
Social Security Administration had the lowest satisfaction 
rating, with only 45% of employees indicating they were 
satisfied with their job. The government has pledged to 
address the concerns raised by employees in the survey and 
work towards improving job satisfaction across all departments.
"""

Determine five topics that are being discussed in the \
following text, which is delimited by triple single quotes.

Make each item one or two words long. 

Format your response as a list of items separated by commas.

Text sample: '''{story}'''

As you can see from the output, we got the different topics discussed in the article we gave to the LLM. This information can be used to recommend articles to users who are interested in those topics.

For example, suppose a user is subscribed to the topics "java", "javascript", and "AI", and only wants to receive article suggestions based on that subscription. The LLM can identify the topics of each new article, and we could build a service that sends a notification whenever an article on any of those topics is published; a sketch of this idea follows the output below.

Output:

*(Screenshot: the model's list of five topics from the story.)*
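
Here is a minimal sketch of that subscription service, assuming the comma-separated topic list produced by the prompt above and the `get_completion` helper from earlier (the subscription list and notification logic are invented for illustration):

```python
# Hypothetical sketch: notify a user when a new article matches their subscribed topics.
subscriptions = {"java", "javascript", "ai"}  # topics this user follows (invented example)

# `prompt` is the topic-extraction prompt shown above, built as an f-string over {story}.
response = get_completion(prompt)  # e.g. "government survey, job satisfaction, NASA, ..."
article_topics = [topic.strip().lower() for topic in response.split(",")]

matched = subscriptions.intersection(article_topics)
if matched:
    print(f"Notify user: new article about {', '.join(sorted(matched))}")
else:
    print("No matching topics; no notification sent.")
```

The course also shows an alternative approach: give the model the fixed list of candidate topics up front and ask it to answer 0 or 1 for each one, which avoids matching against free-form topic names.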
