
Dom Sipowicz
New Llama v2

Meta's new Llama 2 AI model is out. It's open and available for companies to use!

In this post, I answer the following questions:

  1. What is the new Llama 2 AI model?
  2. How does the performance of Llama 2 compare to other open-source language models?
  3. What capabilities does Llama 2 possess?
  4. What are the different versions of the Llama 2 model released by Meta?
  5. What are the terms for using the Llama 2 model for commercial use?

In the tweet below, I compare the new Llama 2 13B model with the current best models on the market from OpenAI and Anthropic.

Here are my takes. At the very bottom, I will share the links I used.

  • Llama 2 significantly outperforms every other existing open-source language model, across all model sizes.

  • Llama 2 is the first open-source model that can rival proprietary models like ChatGPT in conversational ability, though it still lags behind in coding tasks.

  • Meta has released multiple versions of its advanced artificial intelligence model: pretrained and fine-tuned models are available with 7B, 13B, and 70B parameters.

  • The base Llama 2 model appears to surpass even GPT-3 in core capabilities, and the fine-tuned conversational models seem on par with ChatGPT. This represents a major advancement for open-source language models.


  • Meta doubled the context length that Llama 2 can process to 4,096 tokens, greatly expanding its ability to handle long-form text.

  • The model is freely available for commercial use unless your product has over 700 million monthly active users. Access requires submitting a form before you can download the model from the Hugging Face Hub.

  • Llama 2 is available on Microsoft Azure and will be available on AWS, Hugging Face, and other providers.
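
As a rough illustration of that 4,096-token context window, here is a minimal Python sketch that budgets prompt length before sending a request to the model. The 4-characters-per-token ratio and all helper names below are my own assumptions for illustration, not part of Llama 2 or its API; an exact count would use the model's own tokenizer (e.g. via the Hugging Face `transformers` library once you have been granted access to the weights).

```python
# Sketch: keep a prompt within Llama 2's 4,096-token context window.
# Token counts are approximated with a ~4-characters-per-token rule of
# thumb (an assumption, not the real tokenizer).

LLAMA2_CONTEXT_TOKENS = 4096
CHARS_PER_TOKEN = 4  # rough heuristic

def approx_token_count(text: str) -> int:
    """Estimate token count from character length."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_context(prompt: str, max_new_tokens: int = 512) -> bool:
    """Check that the prompt plus the requested completion fit in the window."""
    return approx_token_count(prompt) + max_new_tokens <= LLAMA2_CONTEXT_TOKENS

def truncate_to_budget(prompt: str, max_new_tokens: int = 512) -> str:
    """Trim the prompt from the front so prompt + completion fit the window."""
    budget_chars = (LLAMA2_CONTEXT_TOKENS - max_new_tokens) * CHARS_PER_TOKEN
    return prompt[-budget_chars:]  # keep the most recent text

long_prompt = "word " * 10_000  # ~50,000 characters, far over budget
print(fits_context(long_prompt))                      # False
print(fits_context(truncate_to_budget(long_prompt)))  # True
```

In a chat setting you would typically apply this kind of truncation to the oldest turns of the conversation, which is why the sketch trims from the front rather than the back.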

The model was likely trained over several months, and Meta is focusing heavily on trust, accountability, and democratizing AI through open source. They have made it clear that they do not use user data, thereby avoiding potential privacy issues.

The training corpus for the model comprises a new mix of data from publicly available sources, excluding data from Meta’s own products or services. Efforts have been made to remove data from sites known to contain a high volume of personal information about individuals. This approach strengthens Meta's position as a leader in the field of open-source large language models (LLMs).



PS. Follow me on Twitter or LinkedIn
