
AI-Specific Security Concepts (CY0-001) Flashcards

CompTIA SecAI+ CY0-001 Flashcards

Front: How can AI benefit from zero-trust architecture?
Back: By ensuring verification of all entities, reducing the risk of unauthorized model or data access.

Front: How can AI help in mitigating phishing attacks?
Back: AI analyzes email and communication patterns to detect and block phishing attempts in real time.

Front: How can AI systems be secured against adversarial examples?
Back: Through techniques like adversarial training, regularization, and input validation.

Front: How can differential privacy enhance AI security?
Back: By protecting individual data points from exposure while allowing useful model training.

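As a minimal sketch of the idea behind that card (the function names and bounds below are illustrative, not from any particular library), the Laplace mechanism releases an aggregate statistic with calibrated noise so that no single training record can be inferred from the output:

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via the inverse-CDF transform
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1 - 2 * abs(u))

def dp_mean(values, lower, upper, epsilon):
    # Clamp records to a known range so the query's sensitivity is bounded
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    # Sensitivity of the mean over n records is (upper - lower) / n
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

Smaller `epsilon` means more noise and stronger privacy; the clamping step is what keeps any single outlier record from dominating the released statistic.
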
Front: How can over-reliance on AI be a security risk?
Back: Over-reliance may lead to ignoring traditional security measures and blind trust in AI decisions.

Front: How do CAPTCHA systems leverage AI for security?
Back: They use AI to differentiate between human and bot behavior.

Front: How do GANs (Generative Adversarial Networks) pose security risks?
Back: GANs can be used to create deepfakes or generate data for adversarial use cases.

Front: How do gradient-based methods work in adversarial attacks?
Back: They use gradients from machine learning models to craft inputs that lead to incorrect predictions.

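A tiny worked example of that card, in the style of the fast gradient sign method, using a hand-coded logistic model so the input gradient can be written out explicitly (the weights and `eps` below are made up for illustration):

```python
import math

def predict(w, b, x):
    # Logistic model: probability that input x belongs to class 1
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, b, x, y, eps):
    # For cross-entropy loss, the gradient w.r.t. the INPUT is (p - y) * w;
    # stepping each feature by eps in the sign of that gradient raises the loss
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0) for xi, g in zip(x, grad)]
```

Even a small, visually negligible `eps` can flip a model's decision, which is why the deck pairs gradient-based attacks with adversarial training as a defense.
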
Front: How do model ensembles improve AI security?
Back: By combining multiple models to reduce the likelihood of a single point of failure or attack.

Front: How does explainable AI help in threat mitigation?
Back: By providing insight into AI decisions, it allows security teams to detect and address vulnerabilities.

Front: How does Federated Learning improve AI security?
Back: By keeping data localized and minimizing exposure during training.

Front: How does secure multi-party computation enhance AI security?
Back: It allows multiple parties to compute functions on their inputs without revealing those inputs to one another.

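A minimal sketch of the flavor of secure multi-party computation, using additive secret sharing over a prime modulus (the three-party setup and function names are illustrative):

```python
import random

PRIME = 2**31 - 1  # all share arithmetic is done modulo a prime

def share(secret, n=3):
    # Split a secret into n additive shares; any n-1 of them look uniformly random
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def mpc_sum(all_shares):
    # Each party locally sums the shares it holds (one per input), then the
    # partial sums are combined -- no party ever sees another's raw input
    partials = [sum(column) % PRIME for column in zip(*all_shares)]
    return sum(partials) % PRIME
```

The joint sum comes out correct even though each party only ever handles random-looking shares of the other parties' inputs.
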
Front: What are black-box attacks in AI?
Back: Attacks where the adversary has no knowledge of the model’s structure but exploits input-output interactions.

Front: What are side-channel attacks in the context of AI?
Back: Attacks that exploit indirect information, such as power consumption or timing, to extract details about the model.

Front: What are watermarked AI models?
Back: Models embedded with unique markers to track ownership and detect unauthorized use.

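One common watermarking scheme uses a secret "trigger set" of inputs with pre-chosen labels; a sketch of the verification side (the `model` callable and the match threshold are illustrative assumptions):

```python
def verify_watermark(model, trigger_set, min_match=0.9):
    # A stolen or lightly fine-tuned copy of a watermarked model will still
    # reproduce the owner's pre-chosen labels on the secret trigger inputs
    hits = sum(1 for x, y in trigger_set if model(x) == y)
    return hits / len(trigger_set) >= min_match
```

Because the trigger inputs are secret, an honest independently trained model is unlikely to match them, while a copied model will.
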
Front: What are white-box attacks in AI?
Back: Attacks where the adversary has complete knowledge of the model architecture and parameters.

Front: What is a membership inference attack in AI?
Back: An attack that determines whether a specific record was part of the training dataset.

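A deliberately simplified sketch of the confidence-thresholding variant of this attack (the threshold is illustrative; real attacks usually calibrate it against shadow models):

```python
def infer_membership(confidences, threshold=0.95):
    # Overfit models tend to be far more confident on records they were
    # trained on, so unusually high confidence hints at training-set membership
    return [c >= threshold for c in confidences]
```

This is also why the deck flags overfitting itself as a security risk: the more a model memorizes, the easier membership becomes to infer.
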
Front: What is a model extraction attack?
Back: An attack where an adversary attempts to reverse-engineer or replicate the functionality of an AI model by observing input-output pairs.

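A toy illustration of that card against a black-box model with a single decision threshold: the adversary recovers the boundary purely from input-output queries (the `query` callable stands in for API access to the target):

```python
def extract_threshold(query, lo=0.0, hi=1.0, iters=40):
    # Binary-search the decision boundary of a black-box classifier
    # using only its yes/no answers -- no access to weights is needed
    for _ in range(iters):
        mid = (lo + hi) / 2
        if query(mid):
            hi = mid  # boundary is at or below mid
        else:
            lo = mid  # boundary is above mid
    return (lo + hi) / 2
```

Real extraction attacks replicate far richer models, but the principle is the same: enough well-chosen queries leak the model's behavior.
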
Front: What is a reinforcement learning attack?
Back: An attack that manipulates the reward signals in reinforcement learning environments to influence the agent's behavior.

Front: What is a shadow model attack?
Back: An attack where an adversary builds a surrogate model to mimic the target model for evaluation or exploitation purposes.

Front: What is an adversarial attack in AI?
Back: An attack designed to exploit vulnerabilities in AI models by manipulating input data to mislead the system.

Front: What is an AI supply chain attack?
Back: An attack that compromises AI software or components during development or distribution.

Front: What is an evasion attack in AI?
Back: An attack where adversaries manipulate inputs at inference time to deceive the AI system.

Front: What is concept drift, and how does it impact AI security?
Back: A gradual change in data patterns that can make a model ineffective or open to exploitation.

Front: What is data poisoning in AI?
Back: A type of attack where an adversary injects malicious data into the training dataset to compromise the model.

Front: What is model overfitting, and why is it a security risk?
Back: Overfitted models may be more vulnerable to adversarial examples and less generalizable.

Front: What is robust optimization in AI security?
Back: Methods for designing models that remain resistant to adversarial attacks and uncertainty during training.

Front: What is the concept of "AI bias" in security?
Back: Systematic prejudice in AI outputs caused by flawed or unbalanced training data.

Front: What is the impact of improperly shared pre-trained models in AI?
Back: They may contain embedded vulnerabilities or lead to data leakage risks.

Front: What is the principle of least privilege in AI design?
Back: Granting AI systems only the minimum access necessary to perform their tasks.

Front: What is the purpose of input sanitization in AI systems?
Back: To prevent malicious or problematic data inputs from influencing AI model behavior.

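A minimal sketch of what that sanitization step might look like for numeric feature vectors (the range limits are illustrative assumptions, not exam content):

```python
import math

def sanitize(features, lo=-10.0, hi=10.0):
    # Reject non-numeric or non-finite values outright, then clamp each
    # feature into the range the model was trained on
    clean = []
    for f in features:
        if not isinstance(f, (int, float)) or isinstance(f, bool) or not math.isfinite(f):
            raise ValueError(f"rejected feature: {f!r}")
        clean.append(min(max(float(f), lo), hi))
    return clean
```

Rejecting NaN/infinity and clamping extreme magnitudes closes off a class of trivially crafted out-of-distribution inputs before they reach the model.
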
Front: What is the role of AI in detecting insider threats?
Back: AI analyzes patterns of user behavior to identify potentially malicious or abnormal activities.

Front: What is the role of encryption in AI data protection?
Back: Encryption secures training and inference data from unauthorized access.

Front: What is the role of feature engineering in preventing vulnerabilities?
Back: Proper feature selection can reduce the risk of input manipulation attacks.

Front: What is transfer learning, and what are its potential security risks?
Back: Reusing pre-trained models can introduce vulnerabilities inherited from the original training data.

Front: What role does anomaly detection play in AI security?
Back: It helps identify unusual patterns that may indicate attacks or other security issues.

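A minimal statistical sketch of that idea: flag points whose z-score exceeds a threshold (the cutoff of 3 standard deviations is a common rule of thumb, not from the exam objectives):

```python
import statistics

def find_anomalies(values, z_thresh=3.0):
    # Flag values lying more than z_thresh standard deviations from the mean
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)
    if sigma == 0:
        return []
    return [v for v in values if abs(v - mu) / sigma > z_thresh]
```

Production systems use far richer baselines (per-user, seasonal, multivariate), but the core move is the same: model "normal" and alert on large deviations.
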
Front: Why are secure enclaves important for AI security?
Back: They protect sensitive computations and data from tampering during model execution.

Front: Why is access control critical in AI security?
Back: It limits unauthorized access to sensitive AI models and datasets.

Front: Why is adversarial training used in AI security?
Back: To improve an AI model's robustness by training it on adversarial examples.

Front: Why is model interpretability important in AI security?
Back: It helps identify and mitigate vulnerabilities by understanding how AI makes decisions.

This deck covers critical security principles related to artificial intelligence, including machine learning vulnerabilities, adversarial attacks, and safe AI design practices.