Definition
Systematic errors in AI systems that result in unfair treatment of certain groups or individuals.
Detailed Explanation
AI bias refers to systematic errors in machine learning models that arise from training data, algorithm design, or implementation choices. These biases can reflect and amplify existing societal prejudices, producing discriminatory outcomes. Common sources include historical bias (training data that encodes past discrimination), sampling bias (unrepresentative data collection), measurement bias (flawed proxies for the quantity being predicted), and algorithmic bias (design or optimization choices that systematically favor some groups). The impact can manifest in various ways, from facial recognition systems performing poorly for certain demographics to lending algorithms systematically disadvantaging specific groups.
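One common way to surface such bias empirically is to compare a model's outcomes across demographic groups. The sketch below is a minimal, hypothetical Python example (the helper names `positive_rates` and `disparate_impact`, and the data, are illustrative, not a standard API): it computes per-group selection rates and the disparate impact ratio, which auditors often check against the "four-fifths rule" from U.S. employment guidelines.

```python
# A minimal sketch of one common bias check: comparing a model's
# positive-prediction (selection) rates across demographic groups.
# All data and function names here are hypothetical illustrations.

from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the fraction of positive predictions (1) per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 are often flagged (the "four-fifths rule")."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi > 0 else float("nan")

if __name__ == "__main__":
    # Hypothetical loan-approval predictions (1 = approve) and group labels.
    preds  = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = positive_rates(preds, groups)
    print("Selection rates:", rates)             # {'A': 0.8, 'B': 0.4}
    print("Disparate impact:", disparate_impact(rates))  # 0.5 -> flagged
```

Checks like this only detect one narrow notion of fairness (group selection-rate parity); a low ratio is a signal to investigate the data and model, not proof of discrimination on its own.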
Use Cases
Credit scoring systems evaluating loan applications, hiring algorithms screening job candidates, facial recognition systems deployed in law enforcement, and healthcare diagnosis systems informing treatment decisions.