Definition
A measure of how well a probability model predicts a sample; most commonly used to evaluate language models.
Detailed Explanation
Perplexity is the exponential of the average negative log-likelihood per word: PP(W) = exp(-(1/N) * sum_i log p(w_i | w_1, ..., w_{i-1})) for a sample of N words. Lower perplexity indicates the model is better at predicting the sample. It can be interpreted as the weighted average branching factor of a language model: a perplexity of k means the model is, on average, as uncertain at each step as if it were choosing uniformly among k words.
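The definition above can be sketched in a few lines of Python (a minimal illustration using only the standard library; the function name and inputs are chosen for this example, not taken from any particular framework):

```python
import math

def perplexity(token_probs):
    """Perplexity from the probabilities a model assigned to observed tokens.

    Computes exp(-(1/N) * sum(log p_i)), i.e. the exponential of the
    average negative log-likelihood per token.
    """
    n = len(token_probs)
    avg_neg_log_likelihood = -sum(math.log(p) for p in token_probs) / n
    return math.exp(avg_neg_log_likelihood)

# A model that assigns uniform probability 1/4 to every observed token
# has perplexity 4 (up to floating-point error) -- illustrating the
# "average branching factor" interpretation.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Assigning higher probability to the observed tokens lowers the result, which is why held-out perplexity is reported as a goodness-of-fit score.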
Use Cases
Used to evaluate language models, speech recognition systems, and text generation models, where lower perplexity on held-out data indicates a better fit.
