M Sea Bass

Quick Paper Overview: More Agents Is All You Need

I found this paper fascinating, so I’d like to provide a quick overview of More Agents Is All You Need.

1. What's It About?

This study demonstrates that a simple sampling-and-voting approach with multiple LLM agents yields significant performance improvements. The boost is observed across tasks of varying difficulty, with particularly notable gains on more challenging tasks. While both small and large LLMs benefit, smaller models exhibit the most pronounced improvements.

This method can also be combined with existing techniques like Chain-of-Thought (CoT), further enhancing performance in certain tasks.

2. How Does It Differ from Prior Work?

In previous research, methods like CoT-SC applied voting to diverse answers generated by chain-of-thought prompts, particularly for reasoning tasks. This study, however, explores whether simply increasing the number of agents (LLMs) without relying on chain-of-thought prompts or similar techniques can enhance performance. Additionally, it investigates this approach across a broader range of tasks, not just reasoning-based ones.

While some prior work has also explored the use of multiple LLMs, those approaches often relied on supervised learning to build ensembles. In contrast, this study uses a straightforward voting mechanism, making it more accessible and easier to implement.

Furthermore, although previous research has delved into LLM collaboration, this study focuses specifically on the relationship between the number of agents and performance, rather than architectural aspects.

3. What Are the Key Techniques or Methods?

The key contribution of this study is demonstrating that even a very simple voting mechanism can lead to performance improvements. The specific algorithm is as follows:

  1. Query the LLM with the same prompt several times to collect multiple candidate responses.
  2. Calculate similarity scores between different responses and sum these scores for each response.
  3. Select the response with the highest total similarity score.
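As a rough illustration of that loop, here is a minimal Python sketch. The `generate` callable stands in for whatever LLM API is being sampled, and `similarity` is any pairwise scoring function (concrete choices are discussed just below); the function names here are mine, not the paper's.

```python
def select_answer(responses, similarity):
    """Pick the response with the highest cumulative similarity to all other responses."""
    best, best_score = None, float("-inf")
    for i, candidate in enumerate(responses):
        # Sum pairwise similarity of this candidate against every other sample.
        score = sum(similarity(candidate, other)
                    for j, other in enumerate(responses) if j != i)
        if score > best_score:
            best, best_score = candidate, score
    return best


def sample_and_vote(prompt, generate, similarity, n_samples=10):
    """Sampling-and-voting: sample n_samples responses, then vote by similarity."""
    responses = [generate(prompt) for _ in range(n_samples)]
    return select_answer(responses, similarity)
```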

For similarity scoring:

  • In open-ended generation tasks (e.g., code generation), BLEU score is used.
  • In closed-ended tasks (e.g., multiple-choice questions), frequency of occurrence is used.
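Continuing the sketch above, the two scoring rules might look like the following. These are illustrative assumptions: the paper does not prescribe a particular BLEU implementation, so NLTK's `sentence_bleu` is used here as one readily available option, and exact-match counting stands in for frequency-of-occurrence voting.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction


def exact_match_similarity(a: str, b: str) -> float:
    """Closed-ended tasks: identical answers score 1, so summing reduces to frequency counting."""
    return 1.0 if a.strip() == b.strip() else 0.0


def bleu_similarity(a: str, b: str) -> float:
    """Open-ended tasks (e.g., code generation): BLEU between two sampled responses."""
    smoothing = SmoothingFunction().method1  # smoothing avoids zero scores on short outputs
    return sentence_bleu([b.split()], a.split(), smoothing_function=smoothing)
```

With these in place, `sample_and_vote(prompt, generate, bleu_similarity)` would mirror the open-ended setup, while swapping in `exact_match_similarity` turns the vote into simple majority voting over multiple-choice answers.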

4. How Was Its Effectiveness Validated?

The study evaluated its method across three tasks: Arithmetic Reasoning, General Reasoning, and Code Generation. It also examined the combination of this approach with existing techniques like Chain-of-Thought (CoT). Accuracy was used as the evaluation metric.

The models used in the experiments included Llama-2-13B, Llama-2-70B, and GPT-3.5-Turbo.

The results confirmed that increasing the number of agents consistently improved accuracy. In some tasks, multiple instances of Llama-2-13B outperformed a single Llama-2-70B, and multiple instances of Llama-2-70B surpassed GPT-3.5-Turbo. Additionally, combining this approach with existing techniques like CoT led to further performance gains in certain tasks.
