AI-Specific Security Concepts (CY0-001) Flashcards
CompTIA SecAI+ CY0-001 Flashcards
| Front | Back |
| ----- | ---- |
| How can AI benefit from zero-trust architecture | By ensuring verification of all entities, reducing the risk of unauthorized model or data access |
| How can AI help in mitigating phishing attacks | AI analyzes email and communication patterns to detect and block phishing attempts in real time |
| How can AI systems be secured against adversarial examples | Through techniques like adversarial training, regularization, and input validation |
| How can differential privacy enhance AI security | By protecting individual data points from exposure while still allowing useful model training (see the Laplace-mechanism sketch below) |
| How can over-reliance on AI be a security risk | Over-reliance can lead teams to neglect traditional security controls and place blind trust in AI decisions |
| How do CAPTCHA systems leverage AI for security | They use AI to differentiate between human and bot behavior |
| How do GANs (Generative Adversarial Networks) pose security risks | GANs can be used to create deepfakes or generate data for adversarial use cases |
| How do gradient-based methods work in adversarial attacks | They use the model's loss gradient with respect to the input to craft perturbations that cause incorrect predictions (see the FGSM sketch below) |
| How do model ensembles improve AI security | By combining multiple models to reduce the likelihood of a single point of failure or attack |
| How does explainable AI help in threat mitigation | By providing insights into AI decisions, it allows security teams to detect and address vulnerabilities |
| How does Federated Learning improve AI security | By keeping data localized and minimizing exposure during training |
| How does secure multi-party computation enhance AI security | It allows multiple parties to compute functions on their inputs without revealing those inputs |
| What are black-box attacks in AI | Attacks where the adversary has no knowledge of the model’s structure but exploits input-output interactions |
| What are side-channel attacks in the context of AI | Attacks that exploit implementation side effects, such as power consumption or timing, to extract information about the model |
| What are watermarked AI models | Models embedded with unique markers to track ownership and detect unauthorized use |
| What are white-box attacks in AI | Attacks where the adversary has complete knowledge of the model architecture and parameters |
| What is a membership inference attack in AI | An attack that determines whether a specific record was part of the training dataset (see the membership-inference sketch below) |
| What is a model extraction attack | An attack where an adversary attempts to reverse-engineer or replicate the functionality of an AI model by observing input-output pairs |
| What is a reinforcement learning attack | An attack that manipulates the reward signals in reinforcement learning environments to influence the agent's behavior |
| What is a shadow model attack | An attack where an adversary builds a surrogate model to mimic the target model for evaluation or exploitation purposes |
| What is an adversarial attack in AI | An attack designed to exploit vulnerabilities in AI models by manipulating input data to mislead the system |
| What is an AI supply chain attack | An attack that compromises AI software or components during development or distribution |
| What is an evasion attack in AI | An attack where adversaries manipulate inputs at inference time to deceive the AI system |
| What is concept drift, and how does it impact AI security | The gradual change in data patterns over time, which can make a model ineffective or open to exploitation |
| What is data poisoning in AI | A type of attack where an adversary injects malicious data into the training dataset to compromise the model |
| What is model overfitting, and why is it a security risk | Overfitted models may be more vulnerable to adversarial examples and generalize poorly to new data |
| What is robust optimization in AI security | Methods for designing models that remain robust to adversarial attacks and to uncertainty during training |
| What is the concept of "AI bias" in security | Systematic prejudice in AI outputs caused by flawed or unbalanced training data |
| What is the impact of improperly shared pre-trained models in AI | They may contain embedded vulnerabilities or lead to data leakage risks |
| What is the principle of least privilege in AI design | Granting AI systems only the minimum access necessary to perform their tasks |
| What is the purpose of input sanitization in AI systems | To prevent malicious or malformed inputs from influencing AI model behavior (see the sanitization sketch below) |
| What is the role of AI in detecting insider threats | AI analyzes patterns of user behavior to identify potentially malicious or abnormal activities |
| What is the role of encryption in AI data protection | Encryption secures training and inference data from unauthorized access |
| What is the role of feature engineering in preventing vulnerabilities | Proper feature selection can reduce the risk of input manipulation attacks |
| What is transfer learning, and what are its potential security risks | Reusing pre-trained models can introduce vulnerabilities inherited from the original model or its training data |
| What role does anomaly detection play in AI security | It helps identify unusual patterns that may indicate attacks or other security issues |
| Why are secure enclaves important for AI security | They protect sensitive computations and data from tampering during model execution |
| Why is access control critical in AI security | It limits unauthorized access to sensitive AI models and datasets |
| Why is adversarial training used in AI security | To improve an AI model's robustness by training it on adversarial examples (see the adversarial-training sketch below) |
| Why is model interpretability important in AI security | It helps identify and mitigate vulnerabilities by understanding how AI makes decisions |
This deck covers critical security principles related to artificial intelligence, including machine learning vulnerabilities, adversarial attacks, and safe AI design practices.
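The short Python sketches below illustrate a few of the techniques in this deck. They are minimal, self-contained illustrations using synthetic data and made-up parameters, not production implementations.

First, a gradient-based (FGSM-style) adversarial example against a toy logistic regression model. The weights, input, and eps value are all hypothetical; the point is the sign-of-the-input-gradient step.

```python
import numpy as np

# FGSM-style adversarial example against a toy logistic regression
# "model". The weights, input, and eps below are made up for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # hypothetical trained weights
b = 0.1                  # hypothetical bias
x = rng.normal(size=4)   # a benign input
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# For logistic regression with cross-entropy loss, the gradient of the
# loss with respect to the INPUT (not the weights) is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: move eps in the direction of the sign of the input gradient,
# which increases the loss and can flip the prediction.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```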
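Adversarial training folds examples like the one above back into the training loop. A minimal sketch, again with synthetic data and a plain-numpy logistic regression; a real system would use a deep-learning framework and a stronger attack.

```python
import numpy as np

# Adversarial training sketch: each epoch, craft FGSM perturbations of the
# training set and fit on clean + adversarial examples together.
# Data, learning rate, and eps are synthetic/illustrative.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels

w, b, lr, eps = np.zeros(4), 0.0, 0.1, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # Craft adversarial copies of the current batch (input-gradient sign).
    p = sigmoid(X @ w + b)
    X_adv = X + eps * np.sign((p - y)[:, None] * w)

    # One gradient step on the union of clean and adversarial examples.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * X_all.T @ (p_all - y_all) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

print("clean accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1)))
```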
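Differential privacy is usually applied to training through noisy gradients (as in DP-SGD), but the core idea is easiest to see in the classic Laplace mechanism on a single query. The dataset, sensitivity, and epsilon below are illustrative choices.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise scaled to sensitivity/epsilon,
    the classic (epsilon, 0)-differentially-private mechanism."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(2)
ages = rng.integers(18, 90, size=1000)   # hypothetical sensitive records

# A counting query has sensitivity 1: adding or removing one person's
# record changes the count by at most 1.
true_count = int((ages > 65).sum())
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5, rng=rng)
print("true:", true_count, "released:", round(noisy_count))
```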
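Input sanitization for an AI system can be as simple as rejecting inputs whose features fall far outside the distribution seen in training and clipping the rest into the observed range. The baseline data and the z_max threshold below are arbitrary illustrative choices.

```python
import numpy as np

# Input sanitization sketch: reject out-of-distribution inputs before the
# model sees them, then clip accepted inputs into the training range.
rng = np.random.default_rng(3)
train = rng.normal(size=(1000, 4))       # synthetic training features
mu, sigma = train.mean(axis=0), train.std(axis=0)
lo, hi = train.min(axis=0), train.max(axis=0)

def sanitize(x, z_max=4.0):
    z = np.abs((np.asarray(x, dtype=float) - mu) / sigma)
    if np.any(z > z_max):   # crude anomaly check on each feature
        raise ValueError("input rejected: feature values out of distribution")
    return np.clip(x, lo, hi)

print(sanitize([0.5, -1.2, 0.3, 2.0]))                 # accepted (clipped)
# sanitize([50.0, 0.0, 0.0, 0.0]) would raise ValueError
```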
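Finally, a confidence-threshold membership inference sketch: models are often systematically more confident on records they were trained on, so thresholding a model's reported confidence gives an attacker a crude membership guess. All confidence values and the threshold here are fabricated for illustration.

```python
import numpy as np

def infer_membership(confidences, threshold=0.9):
    """Guess 'member of training set' for high-confidence predictions."""
    return np.asarray(confidences) >= threshold

train_conf = [0.97, 0.99, 0.95, 0.92]   # model outputs on training records
unseen_conf = [0.71, 0.88, 0.60, 0.93]  # model outputs on unseen records

print("guessed members (training):", infer_membership(train_conf))
print("guessed members (unseen):  ", infer_membership(unseen_conf))
```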