Definition
The extent to which an LLM's outputs accurately reflect facts and avoid hallucinations.
Detailed Explanation
Truthfulness measures how reliably a language model's outputs reflect factual information rather than plausible-sounding but incorrect statements, known as hallucinations. Hallucinations arise because models generate fluent text from learned patterns rather than from a verified knowledge store, so errors such as invented citations, wrong dates, or fabricated quotes can read as confidently as correct answers. Truthfulness is typically assessed by comparing model outputs against reference facts or curated benchmarks.
Use Cases
Evaluating the reliability of LLM responses; reducing AI-generated misinformation; improving factuality in AI assistants and retrieval-augmented search; AI safety research.
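A minimal sketch of the evaluation use case: scoring a model's answers against known reference facts with lenient exact matching. The question set, the `answers` dictionary (standing in for real model outputs), and the normalization scheme are all illustrative assumptions; real benchmarks use larger question sets and more sophisticated scoring.

```python
# Minimal truthfulness evaluation sketch (illustrative only):
# compare model answers to reference facts and report accuracy.

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so matching is lenient."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def truthfulness_score(answers: dict[str, str], references: dict[str, str]) -> float:
    """Fraction of questions whose answer matches the reference fact."""
    correct = sum(
        1 for question, ref in references.items()
        if normalize(answers.get(question, "")) == normalize(ref)
    )
    return correct / len(references)

references = {
    "What is the capital of Australia?": "Canberra",
    "Who wrote 'On the Origin of Species'?": "Charles Darwin",
}
# Hypothetical model outputs: one correct, one hallucinated.
answers = {
    "What is the capital of Australia?": "Canberra",
    "Who wrote 'On the Origin of Species'?": "Alfred Russel Wallace",
}

print(truthfulness_score(answers, references))  # 0.5
```

Exact matching only works for short factual answers; free-form responses usually require claim extraction or a judge model instead.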
