pravintargaryen

DeepMind at Google: Denny Zhou

The buzz around Large Language Models (LLMs) is impossible to ignore. If you’ve been keeping up with tech news since 2022, you’ve likely heard the term. But do you really know what LLMs are all about? Despite being marketed as artificial intelligence, LLMs are essentially advanced prediction models trained to excel at guessing the next word or token in a sequence.

Much like Google’s simple search box unlocks access to the world’s knowledge, LLMs have revolutionized AI since their rise to prominence in late 2022. However, beneath their polished outputs lies a fundamental question: Are LLMs truly intelligent, or are they just exceptionally well-trained parrots?

Denny Zhou’s Take

Denny Zhou, a researcher at Google DeepMind, has a grounded perspective. In his recent lecture (https://www.youtube.com/watch?v=QL-FS_Zcmyo), Zhou addressed the reasoning abilities of LLMs, likening prompting them to teaching a parrot. “It just repeats mostly what we say but with an answer at the end,” he remarked.

Zhou believes AI should mimic human learning, where understanding grows from just a few examples. To illustrate, he shared a charming anecdote about his children:

Kid 1: “What’s 17 times three?”
Kid 2: “I don’t know.”
Kid 1: “What’s 10 times 3?”
Kid 2: “30.”
Kid 1: “What’s 7 times 3?”
Kid 2: “21.”
Kid 1: “Add both of them.”
Kid 2: “51.”

Reflecting on this, his elder child quipped, “Chain-of-thought prompting works on my little brother too!”

The Power and Pitfalls of Chain-of-Thought Reasoning

Zhou highlighted the potential of chain-of-thought prompting, in which intermediate reasoning steps between the prompt and the response significantly enhance an LLM’s performance. When an LLM reasons step by step, it assigns a higher probability to the correct final answer than when it generates a response directly.
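To make the idea concrete, here is a minimal sketch of the two prompt styles, reusing the arithmetic from the kids’ dialogue above. The prompts are illustrative placeholders, not examples from Zhou’s lecture:

```python
# Direct prompting: the model is asked for the answer with no worked example.
direct_prompt = """Q: What is 17 times 3?
A:"""

# Chain-of-thought prompting: one demonstration spells out intermediate
# steps, nudging the model to reason the same way before answering.
cot_prompt = """Q: What is 13 times 4?
A: 10 times 4 is 40. 3 times 4 is 12. 40 + 12 = 52. The answer is 52.

Q: What is 17 times 3?
A:"""

# Given the chain-of-thought demonstration, a capable model tends to emit
# steps like "10 times 3 is 30. 7 times 3 is 21. 30 + 21 = 51." before
# its final answer, mirroring the dialogue between the two kids.
```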

But there’s a caveat. Just like humans, LLMs can be easily distracted by irrelevant context, leading to flawed reasoning. This limitation underscores the complexity of teaching machines to "think."
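As an illustration (a constructed example, not one from Zhou’s lecture), adding a single irrelevant sentence to a word problem is often enough to derail step-by-step reasoning:

```python
# The same arithmetic problem, with and without an irrelevant sentence.
clean_problem = (
    "Q: Lucy has 3 boxes with 17 pencils in each box. "
    "How many pencils does she have in total?"
)

# The sentence about Tom's age has nothing to do with the question, yet
# distractors like this have been shown to degrade LLM reasoning.
distracted_problem = (
    "Q: Lucy has 3 boxes with 17 pencils in each box. "
    "Her brother Tom is 7 years older than Lucy. "
    "How many pencils does she have in total?"
)
```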

What Lies Ahead

Denny Zhou is at the forefront of teaching the next generation of LLMs how to reason. His research explores how AI can mimic human-like learning and reasoning, a challenge that could redefine AI as we know it. Zhou’s work doesn’t just speculate; it builds on rigorous experiments and insights, paving the way for future breakthroughs.

For those interested in diving deeper into his research, Zhou’s publications are available at https://dennyzhou.github.io/.

A Final Word

Denny Zhou reminds us: “Always keep in mind that LLMs are probabilistic models of generating next tokens. They are not humans.” His cautionary message emphasizes the need to balance excitement with a realistic understanding of AI’s capabilities and limitations.
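Zhou’s point can be pictured with a toy model. The probabilities below are invented for illustration; a real LLM learns a distribution like this over its entire vocabulary at every step:

```python
import random

# Invented next-token distribution for one context (illustration only).
next_token_probs = {" Paris": 0.92, " located": 0.05, " a": 0.03}

def sample_next_token(probs):
    """Sample one token from a next-token probability distribution."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "The capital of France is"
print(context + sample_next_token(next_token_probs))
# Usually prints "The capital of France is Paris", but not always:
# the continuation is sampled, not retrieved, which is Zhou's point.
```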

Indeed, Denny Zhou’s insights offer a glimpse into the intricate dance between human ingenuity and artificial reasoning. As we navigate this transformative era, one thing is clear—LLMs, while remarkable, are just another step in humanity’s quest to unravel the mysteries of intelligence.
