Definition
Techniques applied during data collection, model training, or post-processing to reduce unfair biases in AI models.
Detailed Explanation
Bias mitigation covers techniques at three stages of the AI pipeline: pre-processing (e.g., reweighting or resampling training data so that groups are fairly represented), in-processing (e.g., adding fairness constraints or adversarial objectives during training), and post-processing (e.g., adjusting decision thresholds for different groups). The goal is to reduce or eliminate unfair biases in model predictions and outcomes across demographic or other protected groups.
Use Cases
Improving fairness in loan approvals, hiring algorithms, and facial recognition systems; ensuring equitable outcomes across demographic groups; and addressing societal biases reflected in training data.
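A minimal sketch of one pre-processing technique, reweighing (Kamiran & Calders, 2012): each training sample gets weight P(group) * P(label) / P(group, label), so that under the weighted data, group membership and label are statistically independent. The function name and toy data below are illustrative, not from the source.

```python
from collections import Counter

def reweigh(groups, labels):
    """Pre-processing bias mitigation: compute per-sample weights
    w(g, y) = P(g) * P(y) / P(g, y) so that group and label become
    independent in the weighted training set (reweighing)."""
    n = len(labels)
    pg = Counter(groups)                # counts per group
    py = Counter(labels)                # counts per label
    pgy = Counter(zip(groups, labels))  # joint counts per (group, label)
    return [
        (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group A receives positive labels more often than group B.
groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweigh(groups, labels)
# Over-represented pairs like (A, 1) are down-weighted (< 1),
# under-represented pairs like (A, 0) are up-weighted (> 1).
```

These weights would then be passed to a learner that supports sample weights (e.g., a `sample_weight` argument at fit time), so the model trains on a distribution in which the historical group/label imbalance is neutralized.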
