Abhinav Srivastava

ChatGPT vs Satya Nadella over Biryani: The chatbot is learning from its mistakes

At the recently held Future Ready Technology Summit in Bengaluru, a disagreement between Microsoft CEO Satya Nadella and ChatGPT over whether ‘biryani’ is part of the South Indian tiffin offered a glimpse of how the chatbot can now learn from its mistakes and apologise for them. For context, when Nadella asked ChatGPT about the South Indian tiffin, OpenAI’s chatbot included biryani in the list. Nadella disagreed that biryani belonged among the foods that make up the South Indian tiffin. Interestingly, ChatGPT was quick to tender an elaborate apology.

“You are correct that Biryani is not a traditional South Indian tiffin. Biryani is a popular rice dish that is believed to have originated in the Indian subcontinent. It is made with spices, vegetables, and meat or seafood, and is often served with raita or yogurt on the side. Biryani is enjoyed all over India, as well as in other parts of South Asia and beyond. It is not typically classified as a tiffin dish in South India, although it is certainly a popular and a beloved dish in the region. I apologise for the mistake in my previous message. Is there anything else I can help with?” read the response from ChatGPT.

The bot’s thorough response also displayed its grasp of the context around the question. Nadella’s demo of ChatGPT’s ability to course-correct is promising; however, ever since its introduction there have been numerous instances where the chatbot failed to offer accurate responses to queries. Let’s look at some of ChatGPT’s other failures and the course corrections that have followed.

ChatGPT’s basic maths fails
A popular query tested the chatbot’s maths ability, as demonstrated on Twitter recently: “If I am half my sister’s age and my sister is 10, how old will I be when she’s 40?” This simple riddle, prompted by many users, met with responses that were far from accurate.
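
For reference, the riddle has a single correct answer once the fixed age gap is made explicit. Here is a minimal Python sketch of the arithmetic (my own working, not the chatbot’s output):

```python
# The age gap between siblings is constant, so compute it once
# and carry it forward.
sister_now = 10
me_now = sister_now / 2          # "half my sister's age" -> 5
age_gap = sister_now - me_now    # a fixed 5-year gap

sister_later = 40
me_later = sister_later - age_gap
print(me_later)                  # 35.0 -- the answer the bot kept missing
```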

A Reddit user, u/imaginexus, said in a long thread that he asked the riddle some 30 times; the chatbot got it right only once.

While several users had earlier raised alarms about ChatGPT’s lack of accuracy on simple mathematical problems, the chatbot has impressed several educators by explaining and offering examples of results such as Pythagoras’ theorem and the ‘Monty Hall’ problem.
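
The Monty Hall result is famously counterintuitive, so it is worth seeing why it holds. A quick simulation (my own sketch, not anything the chatbot produced) confirms that switching doors wins roughly two-thirds of the time:

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and isn't the player's pick.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.667
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.333
```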

“Pythagoras’ theorem is a way to find the length of a missing side of a right triangle (a triangle with one 90 degree angle). It states that in a right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the other two sides,” the bot responded to a query posed by Paul T. von Hippel, associate professor at University of Texas, as quoted by educationnext.org.
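
The quoted relationship is easy to check in code. A small illustration using the classic 3-4-5 right triangle (the example values are mine, not from the article):

```python
import math

# For legs a and b of a right triangle, the hypotenuse c satisfies
# c**2 == a**2 + b**2.
a, b = 3.0, 4.0
c = math.hypot(a, b)   # same as math.sqrt(a**2 + b**2)
print(c)               # 5.0
```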

Logical Reasoning And ChatGPT
Another claim that has raised eyebrows about ChatGPT’s reliability is its alleged low IQ, with many pointing to its limited grasp of context. This poor understanding of context has led to numerous instances where ChatGPT failed to offer credible answers to queries. Interestingly, accuracy and context largely depend on how the user phrases the questions.

Twitter user @letsrebelagain shared ChatGPT’s responses to a logical reasoning question. “Bob has two sons. John and Jay. Jay has one brother and father. Father has two sons. Jay’s brother has a brother and a father. Who is Jay’s brother?” read the question.

The user was bewildered to receive a detailed explanation from the chatbot of why it was impossible to establish the relationships between the individuals mentioned in the question. In a subsequent prompt, the user asked who Jay’s father was; the chatbot responded with the same answer as before.
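
For the record, the puzzle does have a determinate answer. A minimal sketch that encodes the stated facts (the names come from the puzzle; the set logic is mine) recovers it:

```python
# Encode the puzzle's stated facts directly.
father = "Bob"
sons = {"John", "Jay"}          # "Bob has two sons, John and Jay"

# Jay's brother is the other son of the same father.
jays_brother = (sons - {"Jay"}).pop()
print(f"Jay's brother is {jays_brother}, and his father is {father}")
# -> Jay's brother is John, and his father is Bob
```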

Concerns About Bias
ChatGPT can also assist users with programming and can be an incredibly useful tool for learning and for enhancing one’s productivity. However, its ease of use, speed, and accuracy have also raised several ethical concerns.

Twitter user Steven T Piantadosi, whose bio mentions computational cognitive science, took to his account to show how ChatGPT may be riddled with bias, as simple tricks could easily bypass its content filters. He prompted the chatbot to produce Python functions that discriminated based on race and gender. Content moderation may still have a long way to go, but technology that corrects itself with each error may spell brighter prospects.

ChatGPT, just like any other AI model, is learning and unlearning each day. Numerous AI models fail miserably when asked to perform simple multiplication. The bot developed by OpenAI was created to assist with general knowledge and offer information based on what it has learned. It also lets users downvote responses, a feedback mechanism that helps it learn from user engagement.
