Llama Guard 3
Overview
Llama Guard 3 is Meta’s open-weight safety model for moderating LLM prompts and responses. It classifies content against a hazard taxonomy aligned with the MLCommons categories, outputting a verdict of "safe" or "unsafe" together with the codes of any violated categories, and is designed to run alongside your assistant as a low-latency, context-aware guardrail.
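As a minimal sketch of how an application might consume the classifier's text output, the helper below maps a raw verdict to an allow/block decision. It assumes the common Llama Guard convention of a first line reading "safe" or "unsafe", with comma-separated category codes (e.g. "S1,S10") on the following line when unsafe; the function name and return shape are illustrative, not part of any official API.

```python
def parse_guard_verdict(raw: str) -> dict:
    """Map raw Llama Guard-style output text to an allow/block decision.

    Assumed format: first line is "safe" or "unsafe"; when unsafe,
    the next line holds comma-separated category codes such as "S1,S10".
    """
    lines = [line.strip() for line in raw.strip().splitlines() if line.strip()]
    if not lines:
        raise ValueError("empty classifier output")
    verdict = lines[0].lower()
    if verdict == "safe":
        return {"action": "allow", "categories": []}
    if verdict == "unsafe":
        # Category codes, when present, appear on the line after the verdict.
        categories = lines[1].split(",") if len(lines) > 1 else []
        return {"action": "block", "categories": [c.strip() for c in categories]}
    raise ValueError(f"unrecognized verdict: {verdict!r}")

print(parse_guard_verdict("safe"))
print(parse_guard_verdict("unsafe\nS1,S10"))
```

Keeping the parse step separate from the model call makes it easy to swap in a different guard model later, so long as its output follows the same verdict-then-categories convention.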
About Meta Platforms
We're connecting people to what they care about, powering new, meaningful experiences, and advancing the state-of-the-art through open research and accessible tooling.
Tools using Llama Guard 3
No tools found for this model yet.
