
Truthfulness (in LLMs)

[ˈtruːθfʊlnəs ɪn ɛl ɛl ɛmz]
Ethics & Safety
Last updated: April 4, 2025

Definition

The extent to which an LLM's outputs accurately reflect facts and avoid hallucinations.

Detailed Explanation

The extent to which a language model's outputs accurately reflect factual information and avoid generating plausible-sounding but incorrect statements (hallucinations). Unlike raw task accuracy, truthfulness also covers cases where a model confidently asserts fabricated details, such as invented citations, dates, or events.

Use Cases

Evaluating the reliability of LLM responses, reducing AI-generated misinformation, improving factuality in AI assistants and search augmentation, and supporting AI safety research.
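One simple way to evaluate truthfulness is to compare a model's answers against a set of reference facts. The sketch below is purely illustrative, assuming a lenient normalized exact-match comparison; real evaluations typically use curated benchmarks and more nuanced scoring. All function names and example data here are assumptions, not a standard API.

```python
# Minimal sketch of a truthfulness check: score a model's answers against
# reference facts by normalized exact match. All names and example data
# are illustrative assumptions, not a standard benchmark.

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so comparisons are lenient."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def truthfulness_score(answers, references):
    """Fraction of answers whose normalized text matches the reference fact."""
    hits = sum(normalize(a) == normalize(r) for a, r in zip(answers, references))
    return hits / len(references)

# Hypothetical model outputs vs. reference facts
model_answers = ["Paris", "The Moon orbits Earth.", "Einstein was born in 1905"]
reference_facts = ["Paris", "The moon orbits Earth", "Einstein was born in 1879"]

print(f"truthfulness: {truthfulness_score(model_answers, reference_facts):.2f}")
# 2 of 3 answers match the references, so the score is 0.67
```

Exact match is a deliberately crude proxy: it misses paraphrases that are true and can reward verbatim but context-inappropriate answers, which is why production evaluations often combine it with human or model-based judging.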

Related Terms