Definition
The proportion of correct predictions made by a model out of all predictions. A basic metric showing overall model performance.
Detailed Explanation
Accuracy is calculated as (True Positives + True Negatives) / Total Predictions, where Total Predictions = TP + TN + FP + FN. It ranges from 0 to 1, with 1 indicating perfect predictions. While straightforward, accuracy can be misleading on imbalanced datasets where one class significantly outnumbers the others: the metric implicitly assumes all errors have equal cost and does not distinguish false positives from false negatives.
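As a minimal sketch, the calculation can be expressed directly in Python (the function name and sample labels below are illustrative, not taken from any particular library):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical labels: 4 of 5 predictions match -> accuracy = 0.8
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0]
print(accuracy(y_true, y_pred))  # 0.8
```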
Use Cases
Common in image classification and other tasks where the class distribution is relatively balanced. For inherently imbalanced domains such as medical diagnosis or spam detection, where one class is rare, accuracy alone can be misleading and is usually reported alongside other metrics. A sketch of this pitfall follows below.
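To illustrate the imbalance caveat, consider a hypothetical dataset (the numbers below are made up for illustration): a classifier that always predicts the majority class scores high accuracy while detecting nothing in the minority class.

```python
# 95 negatives and 5 positives; the model always predicts 0 (majority class)
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

correct = sum(t == p for t, p in zip(y_true, y_pred))
print(correct / len(y_true))  # 0.95 -- high accuracy, yet every positive is missed
```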