Overview
LLaMA is Meta’s foundation language model, released for research use. It offers strong zero- and few-shot performance across multiple model sizes, runs efficiently compared with earlier large models, and became a common base for fine-tuning and instruction-tuned chat systems.
Description
LLaMA is a dense decoder-only Transformer released for research as a general foundation model. It comes in multiple sizes, so labs can study quality versus compute and run smaller variants on modest hardware. Despite modest parameter counts, it showed strong zero- and few-shot ability and stable long-context behavior for its time. Because alignment was minimal, most real deployments fine-tuned it for instruction following, safer behavior, and structured output such as JSON. The release sparked a large ecosystem of derivatives for chat, multilingual work, domain adaptation, and retrieval-augmented applications, making LLaMA a common starting point for efficient, self-hosted assistants.
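Zero- and few-shot use means conditioning the base model on in-context examples rather than fine-tuning it: demonstrations are placed in the prompt and the model continues the pattern. A minimal sketch of assembling such a prompt (the function name and prompt layout here are illustrative assumptions, not part of the LLaMA release):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt for a base (non-instruction-tuned) model.

    examples is a list of (input, output) pairs shown in-context; the
    model is expected to continue the pattern for the final query.
    Layout is a common convention, not an official LLaMA format.
    """
    parts = [instruction]
    for x, y in examples:
        parts.append(f"Input: {x}\nOutput: {y}")
    # Leave the final Output: empty so the model completes it.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("cat", "chat")],
    "bread",
)
print(prompt)
```

With zero examples the same function yields a zero-shot prompt; instruction-tuned derivatives usually replace this pattern with a chat template instead.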
About Facebook
We're connecting people to what they care about, powering new, meaningful experiences, and advancing the state-of-the-art through open research and accessible tooling.
Location:
Menlo Park, California, US