
LLaMA

Model family: LLaMA
LLaMA is a dense decoder-only Transformer released for research as a general foundation model. It comes in multiple sizes so labs can study quality versus compute and run smaller variants on modest hardware. Despite modest parameter counts, it showed strong zero- and few-shot ability and stable context use for its time. Because alignment was minimal, most real deployments fine-tuned it for instruction following, safer behavior, and structured output such as JSON. The release sparked a large ecosystem of derivatives for chat, multilingual work, domain adaptation, and retrieval-augmented apps, making LLaMA a common starting point for efficient, self-hosted assistants.
Released: February 1, 2023

Overview

LLaMA is Meta’s foundation language model released for research. It offers strong zero- and few-shot ability at multiple sizes, runs efficiently compared with earlier large models, and became a common base for fine-tuning and instruction-tuned chat systems.
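Because the base model is not instruction tuned, practitioners typically steer it with in-context examples rather than bare questions. A minimal sketch of assembling such a few-shot prompt (the helper name and "Input:"/"Output:" field labels are illustrative conventions, not part of any LLaMA API):

```python
def few_shot_prompt(instruction, examples, query):
    """Build a plain-text few-shot prompt for a base model such as LLaMA.

    Base models complete text rather than follow instructions, so a few
    worked examples set the pattern the model should continue.
    (Field labels here are an illustrative convention, not a fixed format.)
    """
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")  # blank line separates examples
    # The prompt ends mid-pattern so the model fills in the final output.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)


prompt = few_shot_prompt(
    "Translate English to French.",
    [("cat", "chat"), ("dog", "chien")],
    "bird",
)
```

The resulting string ends with a dangling `Output:` label, so a completion model naturally continues with the answer; instruction-tuned derivatives make this scaffolding largely unnecessary.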

About Meta Platforms

We're connecting people to what they care about, powering new, meaningful experiences, and advancing the state-of-the-art through open research and accessible tooling.

Industry: Technology, Information and Internet
Company Size: 78,000-79,000 employees
Location: Menlo Park, California, US
Website: ai.meta.com

Tools using LLaMA

Last updated: February 12, 2026