Data Poisoning

[ˈdeɪtə ˈpɔɪzənɪŋ]
Ethics & Safety
Last updated: December 9, 2024

Definition

An attack in which an adversary manipulates a model's training data to compromise its performance or behavior.

Detailed Explanation

Data poisoning attacks strategically modify training data or inject malicious samples to influence what the model learns. This can result in backdoors, biased predictions, or general performance degradation. The attack can be targeted (affecting specific outputs) or untargeted (generally degrading performance).
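As an illustration, here is a minimal sketch of an untargeted label-flipping attack. Everything in it is a synthetic assumption for demonstration purposes: the data, the 30% flip rate, and the simple nearest-centroid classifier standing in for a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: class 0 clustered near (-2, -2), class 1 near (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(+2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def fit_centroids(X, y):
    # Nearest-centroid "training": the model is just one centroid per class.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(X, centroids):
    # Assign each point to its nearest class centroid.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Clean baseline.
clean_model = fit_centroids(X, y)
clean_acc = (predict(X, clean_model) == y).mean()

# Untargeted label-flipping attack: relabel 30% of class-0 samples as class 1.
y_poisoned = y.copy()
y_poisoned[rng.choice(100, size=30, replace=False)] = 1

poisoned_model = fit_centroids(X, y_poisoned)
poisoned_acc = (predict(X, poisoned_model) == y).mean()

print(f"clean accuracy:    {clean_acc:.2%}")
print(f"poisoned accuracy: {poisoned_acc:.2%}")
```

The mislabeled samples drag the class-1 centroid toward the class-0 cluster, shifting the decision boundary so that some genuine class-0 inputs are misclassified. Real attacks target far more complex models, but the mechanism is the same: corrupted labels or samples bias what the model learns.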

Use Cases

Testing model resilience in autonomous systems
Evaluating healthcare diagnostic models
Protecting recommendation systems
Securing financial fraud detection models

Related Terms