Overview
GPT OSS Safeguard is a safety and policy companion model. It classifies risk, explains its rationale, suggests safe rephrasings, and returns structured JSON decisions for moderation and guardrails.
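For illustration, one such structured decision might be represented as the Python dict below; the field names and values are assumptions made for this sketch, not a documented schema.

# Illustrative decision payload (field names are assumptions, not a published schema).
decision = {
    "action": "redact",               # one of: allow, block, redact, rewrite
    "category": "personal_data",      # policy category that was triggered
    "severity": "medium",             # e.g. low / medium / high
    "rationale": "The reply quotes a user's home address verbatim.",
    "redacted_text": "The reply with the address removed.",
}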
Description
Safeguard is designed to sit in front of or alongside generator models. It ingests prompts and candidate outputs, applies configurable policies, and emits decisions with categories, severities, and recommended actions such as allow, block, redact, or rewrite. It produces rationale text for audits, exposes hooks for red-team evaluation, and targets latencies low enough for inline use. With stable schemas and tool-call integration, it helps teams enforce safety policies consistently across assistants, RAG systems, and automation workflows.
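As a minimal sketch of how such a guardrail could be wired inline in front of a generator, the snippet below assumes a hypothetical classify_with_safeguard() wrapper and the illustrative decision fields shown above (action, category, severity, rationale); it is not an official client API.

ALLOWED_ACTIONS = {"allow", "block", "redact", "rewrite"}

def classify_with_safeguard(prompt: str, candidate: str, policy: dict) -> dict:
    # Hypothetical wrapper: a real deployment would send the prompt, the
    # candidate output, and the policy to the Safeguard model and parse the
    # JSON decision it returns. A canned decision is returned here so the
    # sketch runs end to end.
    return {
        "action": "redact",
        "category": "personal_data",
        "severity": "medium",
        "rationale": "The reply quotes a home address verbatim.",
        "redacted_text": "[address removed]",
    }

def guarded_response(prompt: str, candidate: str, policy: dict) -> str:
    decision = classify_with_safeguard(prompt, candidate, policy)
    action = decision.get("action", "block")
    if action not in ALLOWED_ACTIONS:
        action = "block"  # fail closed on unexpected or missing actions
    if action == "allow":
        return candidate
    if action == "redact":
        return decision.get("redacted_text", "[content removed]")
    if action == "rewrite":
        return decision.get("rewrite", "[content rewritten]")
    return "Sorry, I can't help with that request."  # block

if __name__ == "__main__":
    policy = {"categories": ["personal_data", "violence"]}
    print(guarded_response("Where does Alex live?", "Alex lives at ...", policy))

Failing closed on missing or unknown actions keeps the guardrail conservative when the decision payload is malformed.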
About OpenAI
OpenAI is a technology company that specializes in artificial intelligence research and innovation.
Industry: Research Services
Company Size: 201-500
Location: San Francisco, California, US