During testing of a newly deployed recommendation service whose responses are shaped by its training data sets, examiners notice that its results seem unreliable. Which explanation best clarifies how an intruder manipulated its outputs?
They used stolen administrator passwords to view log files in the service's console
They modified cryptographic keys to alter secure connections used by the service
They appended unauthorized lines to the model's executable file after deployment
They replaced some of the curated examples with misleading information during development
Altering the training data introduces misleading patterns that shift the model's outputs away from what the system's designers intended. Modifying cryptographic keys or viewing log files would not insert false knowledge into the learning process. Directly modifying the deployed executable is usually harder to do covertly than tampering with the source data, which influences the overall model in a subtler, harder-to-detect way.
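The toy sketch below illustrates the idea using scikit-learn with a hypothetical synthetic data set (not the service described in the question): flipping the labels of a fraction of the training examples before fitting degrades the model's behavior even though the inference code is never touched.

```python
# Minimal sketch of training-data poisoning, assuming a synthetic toy data set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Generate a toy "curated" data set and split it for training and evaluation.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Clean model: trained on the data as the designers intended.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: an intruder quietly flips the labels of 30% of the
# training examples during development, injecting misleading patterns.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(
    len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False
)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

# The poisoned model's outputs drift away from the intended behavior,
# even though no inference code or executable was ever modified.
print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```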