Overview
LLaMA 2 is Meta’s second-generation open-weight language model family. It ships in base and chat variants at multiple parameter sizes, improves on the original LLaMA in reasoning and coding, and is licensed for broad research and many commercial uses. It is a common starting point for fine-tuned assistants.
Description
LLaMA 2 extends Meta’s foundation models with stronger pretraining, instruction-tuned chat variants, and safety work that makes the chat versions more steerable in practice. The release spans several parameter scales, so teams can trade accuracy for speed and cost, and the weights are straightforward to quantize for local or server deployment. The chat models are instruction-tuned and preference-trained to follow instructions, produce concise, helpful answers, and avoid unsafe behavior, which reduces the prompt engineering that earlier baselines required. In production, LLaMA 2 serves as a flexible base for RAG systems, domain tuning, and lightweight code helpers. Its open availability and active ecosystem of adapters, tokenizers, and inference runtimes make it a popular choice for self-hosted assistants where transparency, control, and predictable serving costs matter.
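When self-hosting the chat variants, inputs are typically wrapped in the `[INST]` / `<<SYS>>` template the chat models were trained on. Below is a minimal sketch of a single-turn prompt builder; the helper name `format_llama2_chat` is illustrative, and it assumes the serving stack (e.g. the tokenizer) prepends the `<s>` BOS token itself.

```python
def format_llama2_chat(system_prompt: str, user_message: str) -> str:
    """Build a single-turn Llama 2 chat prompt.

    Uses the [INST] / <<SYS>> template the Llama 2 chat models expect;
    the BOS token <s> is assumed to be added by the tokenizer.
    """
    return (
        f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

prompt = format_llama2_chat(
    "You are a concise, helpful assistant.",
    "Summarize what a RAG pipeline does.",
)
print(prompt)
```

For multi-turn conversations, prior turns are concatenated as alternating `[INST] ... [/INST]` blocks followed by the model's replies, with the system block appearing only in the first turn.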
About Facebook
We're connecting people to what they care about, powering new, meaningful experiences, and advancing the state-of-the-art through open research and accessible tooling.
Location:
Menlo Park, California, US