Definition
The proportion of true positive predictions among all positive predictions made. Shows how many of the items identified as positive are actually positive.
Detailed Explanation
Precision = True Positives / (True Positives + False Positives). This metric is crucial when false positives are costly. High precision indicates a low false positive rate, meaning that when the model predicts the positive class, it is usually correct. However, precision ignores false negatives, so it is typically reported alongside recall to give a complete picture.
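The formula above can be sketched directly in plain Python; this is a minimal illustration with made-up labels (the function name and example data are assumptions, not part of any standard library):

```python
def precision(y_true, y_pred):
    """Precision = TP / (TP + FP) for binary labels, where 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)  # false positives
    # Guard against division by zero when no positive predictions were made
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# 4 positive predictions, 3 of them correct -> precision = 3/4
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 1, 0, 0]
print(precision(y_true, y_pred))  # 0.75
```

Note that the false negative in the last position (true 1, predicted 0) does not affect the result, which is exactly the blind spot described above.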
Use Cases
Used in spam detection (avoiding marking legitimate emails as spam), fraud detection systems, and medical screening, wherever acting on a false positive is costly.