Security Architecture in AI Systems Flashcards
CompTIA SecAI+ (CY0-001)

| Front | Back |
| --- | --- |
| How can AI bias impact system security? | Biased AI models may make unfair or unexpected decisions, increasing risk or introducing vulnerabilities |
| How can data poisoning attacks compromise AI models? | By introducing malicious data that skews training outcomes or degrades performance |
| How can dynamic risk assessments enhance AI security? | By continuously evaluating potential threats and adapting security measures accordingly |
| How can encryption enhance AI system security? | By safeguarding data at rest and in transit from unauthorized access |
| How can secure APIs improve AI system architecture? | By ensuring communication channels prevent unauthorized access or exploitation |
| How can secure model deployment mitigate risks in AI? | By implementing safeguards such as containerization and runtime security |
| How do adversarial attacks threaten AI models? | By exploiting model weaknesses to alter predictions or outcomes |
| How does continuous integration and deployment (CI/CD) affect AI system security? | By automating updates to reduce human error while enforcing secure practices |
| How does an insider threat impact AI system security? | Through unauthorized actions by trusted users, leading to data breaches or exposure |
| How does multi-factor authentication enhance AI system access control? | By adding security layers beyond just a password |
| How does the principle of defense in depth apply to AI security? | By using multiple layers of security measures to protect against threats |
| How does version control support AI security? | By tracking changes to models and data pipelines so unauthorized modifications are quickly identified |
| What is AI model integrity assurance? | Processes that ensure models behave as expected and have not been tampered with |
| What is differential privacy in the context of AI? | A method that ensures individual data points in a dataset remain unidentifiable |
| What is secure data processing in AI systems? | Ensuring data is processed safely without exposure or leakage |
| What is supply chain security in AI systems? | Safeguarding the integrity of third-party components and dependencies |
| What is the concept of federated learning in AI security? | Training models on decentralized data to reduce the risk of data breaches |
| What is the concept of least privilege in AI systems? | Restricting user or system permissions to only what is strictly necessary |
| What is the function of data governance in AI systems? | Managing data access and usage policies to ensure compliance and security |
| What is the impact of explainable AI (XAI) on security? | Improving transparency so malicious biases or vulnerabilities can be identified and mitigated |
| What is the importance of response playbooks in AI security? | Providing structured procedures for handling security incidents effectively |
| What is the main goal of data anonymization? | Protecting sensitive information while still enabling data use for AI training |
| What is the role of a sandbox environment in AI security? | Isolating new AI components for testing so they cannot harm the main system |
| What is the role of access controls in AI system security? | Limiting access to sensitive data and resources to authorized individuals only |
| What is the role of audits in AI security frameworks? | Verifying compliance with security policies and identifying potential gaps |
| What is the significance of logging in AI systems? | Creating a trail of activities for analyzing security incidents |
| Why is model encryption important in AI systems? | Protecting AI models against theft or reverse engineering |
| Why is regular patching important for AI system security? | Fixing vulnerabilities before attackers can exploit them |
| Why is system monitoring important in AI systems? | Detecting anomalies and potential security threats in real time |
| Why is threat modeling important for AI systems? | Identifying and assessing risks so security measures can be better designed |
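Several cards above (differential privacy, data anonymization) describe noise-based data protections. As a minimal sketch, not part of any exam material, the Laplace mechanism applied to a count query could look like this; the function names and parameters are illustrative:

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(values, threshold: float, epsilon: float = 1.0) -> float:
    """Differentially private count of values above a threshold.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller `epsilon` means more noise and stronger privacy, at the cost of a less accurate answer; this is the trade-off the differential privacy card summarizes.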
About the Flashcards
Flashcards for the CompTIA SecAI+ exam organize core AI security terminology, concepts, and practices so students can quickly review what's tested. Cards cover access control and authentication (least privilege, MFA), data protection (encryption, anonymization, differential privacy, federated learning), and model safeguards like model encryption, integrity assurance, and sandboxed deployment, with practical controls to apply in real systems.
The deck also covers threat identification and mitigation (adversarial attacks, data poisoning, and threat modeling), plus secure development and lifecycle practices such as secure APIs, CI/CD security, version control, supply chain protection, monitoring, logging, audits, and incident response playbooks.
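The model integrity assurance and version control cards both come down to detecting unauthorized modification. One common approach is verifying a serialized model artifact against a digest recorded at release time; the sketch below assumes SHA-256 fingerprinting and is not a specific tool's API:

```python
import hashlib
import hmac


def fingerprint(artifact: bytes) -> str:
    """SHA-256 digest of a serialized model artifact."""
    return hashlib.sha256(artifact).hexdigest()


def verify_artifact(artifact: bytes, expected_digest: str) -> bool:
    """Check an artifact against the digest recorded at release time.

    hmac.compare_digest avoids timing side channels in the comparison.
    """
    return hmac.compare_digest(fingerprint(artifact), expected_digest)
```

Any single-bit change to the artifact produces a different digest, so tampering between training and deployment is detectable as long as the recorded digest itself is stored securely.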
Topics covered in this flashcard deck:
- Data protection and privacy
- Access control and authentication
- Adversarial attacks and defenses
- Secure development and deployment
- Monitoring, logging, and auditing
- Model integrity and governance
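Tying the access-control topics together, least privilege is often implemented as a deny-by-default role-to-permission mapping. The roles and actions below are hypothetical examples for study purposes, not exam content:

```python
# Least privilege: each role gets only the permissions it strictly needs.
# These roles and permissions are illustrative, not a real system's policy.
ROLE_PERMISSIONS = {
    "data_scientist": {"read_dataset", "train_model"},
    "ml_engineer": {"read_dataset", "deploy_model"},
    "auditor": {"read_logs"},
}


def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or unlisted actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Because the lookup falls back to an empty set, an unrecognized role or a permission outside a role's set is refused, which is the deny-by-default posture the least privilege card describes.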