Which approach best describes an effort to degrade an AI-based system’s classification outcomes by modifying the information it relies on for learning?
Introducing harmful modifications into the data used for training, causing wrong classifications
Running the solution on outdated platforms to weaken its learning routines
Maliciously altering the data used to train a system is known as poisoning the training set (data poisoning). The attack manipulates the model's core reference points, leading it to learn incorrect associations and produce flawed outputs. Adjusting tuning parameters, running on an outdated platform, or compromising login credentials may harm a system in other ways, but none of them systematically reshapes the knowledge base the model depends on, which is the defining trait of the technique described in the question.
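A minimal sketch of how label-flipping poisoning can degrade a classifier. The data, the nearest-centroid model, and all names here are hypothetical illustrations, not part of the question:

```python
# Hypothetical demo: label-flipping data poisoning against a
# simple nearest-centroid classifier on 1-D data.

def centroids(points, labels):
    """Compute the mean (centroid) of each class label."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(x, cents):
    """Assign x to the class whose centroid is nearest."""
    return min(cents, key=lambda y: abs(x - cents[y]))

# Clean training data: class 0 clusters near 0, class 1 near 10.
xs    = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
clean = [0,   0,   0,   1,   1,   1]

# Poisoned copy: the attacker flips labels on two class-1 samples,
# dragging the class-0 centroid toward the class-1 region.
poison = [0,  0,   0,   0,   0,   1]

test_point = 6.5  # clearly class 1 under the clean model

print(predict(test_point, centroids(xs, clean)))   # -> 1 (correct)
print(predict(test_point, centroids(xs, poison)))  # -> 0 (misclassified)
```

The model code never changes; only the training labels do, yet the same test point flips from a correct to an incorrect classification, which is exactly the mechanism the answer describes.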