Definition
Systematic errors or unfair prejudices embedded in AI systems, arising from the training data, the algorithmic design, or both.
Detailed Explanation
Bias can enter an AI system at several points: through training data that underrepresents or misrepresents certain groups, through algorithm design choices such as the objective being optimized, and through the context in which the system is deployed. Common types include selection bias, confirmation bias, and sampling bias. Detection typically involves comparing model outcomes across demographic groups, and mitigation strategies range from rebalancing or augmenting training data to fairness-aware training objectives and post-hoc adjustment of outputs.
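One simple detection method of the kind mentioned above is the disparate impact ratio: compare the positive-outcome rate of a protected group against a reference group. The sketch below uses hypothetical hiring-decision data (the group labels, function name, and numbers are illustrative, not from the source).

```python
# Sketch of one common bias-detection check: the disparate impact ratio.
# All data here is hypothetical, for illustration only.

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of positive-outcome rates: protected group / reference group."""
    def rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical hiring decisions: 1 = hired, 0 = rejected.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, protected="B", reference="A")
print(round(ratio, 2))  # rate(B) = 0.4, rate(A) = 0.8 -> ratio 0.5
```

Under the widely used four-fifths rule of thumb, a ratio below 0.8 (as here) would flag the outcome distribution for further review; passing the check does not by itself establish fairness.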
Use Cases
Hiring systems, loan approval, criminal justice risk assessment