ISC2 Systems Security Certified Practitioner (SSCP) Practice Test
Use the form below to configure your ISC2 Systems Security Certified Practitioner (SSCP) Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 Systems Security Certified Practitioner (SSCP) Information
About the SSCP
The Systems Security Certified Practitioner (SSCP) credential from ISC2 is aimed at hands-on IT and security professionals—systems administrators, network engineers, analysts and similar roles—who want vendor-neutral proof that they can implement, monitor and secure enterprise infrastructure. To sit for the exam you need just one year of cumulative, paid work in any of the seven SSCP domains, or you can earn “Associate of ISC2” status and finish the experience requirement within two years. This low-friction entry point, plus ANSI/ISO/IEC 17024 accreditation and U.S. DoD 8140.03 approval, makes the SSCP an attractive stepping-stone toward more senior certs such as the CISSP.
What’s Inside the Latest SSCP Exam
After a 2024 job-task analysis, ISC2 moved the SSCP to Computerized Adaptive Testing on October 1, 2025. The new format dynamically selects 100-125 multiple-choice or advanced items and gives you two hours to reach a 700/1000 cut score.
The seven domains are:
- Security Concepts & Practices (16%)
- Access Controls (15%)
- Risk Identification, Monitoring & Analysis (15%)
- Incident Response & Recovery (14%)
- Cryptography (9%)
- Network & Communications Security (16%)
- Systems & Application Security (15%)
Adaptive delivery tightens exam security, shortens seat time and focuses questions on your demonstrated ability.
SSCP Practice Exams
Working through full-length practice tests is one of the most effective ways to convert study hours into passing scores. Timed drills condition you to manage a two-hour adaptive session, while score reports reveal domain-level gaps you can attack with flash cards or lab work. ISC2’s own self-paced training now bundles “practical assessments” that mirror live-exam item types; third-party banks from publishers such as Pearson or Skillsoft add even more question variety. Candidates who cycle through several mocks consistently report higher confidence, steadier pacing and fewer surprises on test day.
Exam Preparation Tips
Plan on at least six weeks of structured study: review the official exam outline, lab the high-weight domains (especially access control and network security), and join an online study group for peer explanations. On exam day, remember that CAT will stop early if it is statistically sure of your pass/fail status—so stay calm if the question count feels short. Above all, keep learning light but continuous; as recent SSCP holders note, “be calm and patient…connect with those who have passed to motivate yourself and learn from their experiences.”

Free ISC2 Systems Security Certified Practitioner (SSCP) Practice Test
- 20 Questions
- Unlimited time
- Security Concepts and Practices
- Access Controls
- Risk Identification, Monitoring and Analysis
- Incident Response and Recovery
- Cryptography
- Network and Communication Security
- Systems and Application Security
Free Preview
This test is a free preview, no account required.
Your company is migrating several workloads to AWS and must prove ongoing compliance with both the CIS AWS Foundations Benchmark and PCI DSS. The security team wants a single managed service that automatically runs configuration checks across all AWS accounts and Regions, provides a real-time centralized dashboard showing pass/fail status for each control, and can forward findings to the corporate ticket-tracking system with minimal custom code. Which AWS service best satisfies these requirements?
AWS Security Hub
AWS Trusted Advisor
AWS Artifact
AWS Config conformance packs with custom rules
Answer Description
AWS Security Hub includes built-in security standards such as the CIS AWS Foundations Benchmark and PCI DSS. When enabled across multiple accounts and Regions, it continuously runs automated configuration checks by using AWS Config rules, aggregates the results centrally, and displays each control's compliance status in a single dashboard. Security Hub can also integrate natively with Amazon EventBridge to route findings to ticketing or SOAR systems, enabling remediation workflows without extensive custom development.
AWS Config conformance packs can evaluate resources against specific rules, but they do not provide the same cross-account, multi-Region aggregation, prebuilt PCI reporting, or native finding forwarding features. AWS Trusted Advisor offers best-practice checks but does not map results directly to compliance standards or aggregate findings across accounts. AWS Artifact supplies downloadable audit reports and agreements but does not perform continuous technical checks of your environment.
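The finding-forwarding path the answer describes can be sketched as an EventBridge rule pattern. A minimal sketch in Python, assuming the standard event shape Security Hub emits; the severity filter values shown are illustrative choices, not part of the scenario:

```python
import json

# Hedged sketch: an EventBridge event pattern that matches findings imported
# by Security Hub, so a rule can route them to a ticketing integration target
# (for example an SQS queue or Lambda function).
def securityhub_finding_pattern(severities=("HIGH", "CRITICAL")):
    """Return an EventBridge event-pattern dict for Security Hub findings."""
    return {
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Imported"],
        "detail": {
            "findings": {
                # Filter to the severity labels worth ticketing (assumption)
                "Severity": {"Label": list(severities)}
            }
        },
    }

# The JSON string is what you would pass as EventPattern to events.put_rule
pattern_json = json.dumps(securityhub_finding_pattern())
```

In a deployment, `pattern_json` would be supplied as the `EventPattern` argument of the EventBridge `put_rule` call, with the ticketing endpoint attached via `put_targets`.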
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is AWS Security Hub and how does it support compliance with standards like CIS AWS Foundations Benchmark and PCI DSS?
How does AWS Config and AWS Security Hub work together to enable automated compliance checks?
What are the main differences between AWS Security Hub and AWS Trusted Advisor?
Your company runs regulated workloads on AWS. To improve incident readiness, the security team plans a tabletop exercise simulating a cross-region ransomware attack on S3 objects. Which preparatory action is most critical to ensure the discussion stays focused, captures all viewpoints, and meets the stated learning objectives?
Omit written notes and recordings to minimize legal discovery risks after the session.
Replace the tabletop with a live red-team engagement against the production environment.
Assign an impartial facilitator to guide the discussion and keep it aligned with the scenario timeline.
Distribute a pre-read of last quarter's audit report so participants discuss general control gaps.
Answer Description
A tabletop exercise is a facilitated discussion, so designating an impartial facilitator is essential. The facilitator keeps the scenario on schedule, prompts participation from every role, and steers the conversation toward the predefined objectives. Pre-reads are helpful but do not ensure focus, a red-team engagement is a different test type, and avoiding documentation defeats one of the main benefits: capturing lessons learned for future improvement.
Ask Bash
What is a tabletop exercise in cybersecurity?
Why is an impartial facilitator critical for a tabletop exercise?
How does a tabletop exercise differ from a red-team engagement?
Your organization hosts a microservices workload in a single AWS account. Developers push code to an AWS CodeCommit repository, AWS CodeBuild compiles the artifacts, and AWS CodeDeploy releases them to production. A recent audit mandates that individuals who write code must not be able to promote it to production. Which solution best enforces this segregation of duties using only native AWS capabilities?
Define two IAM roles: a Developer role allowed to push to CodeCommit and invoke CodeBuild, and a ReleaseManager role allowed only to approve a Manual Approval action placed between the build and deploy stages in CodePipeline. Team members assume only their designated role.
Attach AdministratorAccess policy to all developers but require CodeCommit pull-request reviews before merging to the production branch.
Enable AWS CloudTrail and Amazon GuardDuty to detect and alert on any unauthorized deployment events after they occur.
Use one least-privileged IAM role for both development and deployment, but mandate MFA and strong passwords for every pipeline action.
Answer Description
Segregation of duties is achieved by ensuring that the person who creates or changes code cannot by themselves place that code into production. Inserting a Manual Approval action in AWS CodePipeline and assigning that action's permissions to a separate IAM role reserved for release managers cleanly separates build and deployment responsibilities. Options that give developers AdministratorAccess, rely solely on peer review, use one shared role with MFA, or depend only on detective controls such as CloudTrail and GuardDuty do not provide the required preventative separation because developers would still be able to deploy or there would be no enforcing control at the deployment gate.
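The two-role split above can be sketched as a pair of IAM policy documents. A minimal sketch; the account ID, pipeline, stage, and action names in the ARN are placeholders, not values from the scenario, and in practice the developer statement would be scoped to specific repository and project ARNs:

```python
# Hedged sketch of the segregation-of-duties role split. Developers may push
# code and start builds; only the release manager may act on the Manual
# Approval gate between the build and deploy stages.
DEVELOPER_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DevPushAndBuild",
        "Effect": "Allow",
        "Action": ["codecommit:GitPush", "codebuild:StartBuild"],
        "Resource": "*",  # scope to specific repo/project ARNs in practice
    }],
}

RELEASE_MANAGER_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ApproveOnly",
        "Effect": "Allow",
        # The permission needed to approve or reject a Manual Approval action
        "Action": "codepipeline:PutApprovalResult",
        # Approval-action ARN format: pipeline/stage/action (placeholder names)
        "Resource": "arn:aws:codepipeline:us-east-1:111122223333:app-pipeline/Approve/ManualApproval",
    }],
}
```

Because neither policy grants the other role's actions, no single principal can both write code and promote it, which is the preventative control the audit requires.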
Ask Bash
What is IAM, and why is it important in AWS?
What is AWS CodePipeline, and how does Manual Approval work within it?
How do GuardDuty and CloudTrail differ from preventive controls like IAM roles in AWS?
An e-commerce site on AWS uses an Application Load Balancer in front of an Auto Scaling group. CloudWatch shows requests jump from 2,000 to 150,000 per minute, originating from thousands of global IP addresses. EC2 CPU utilization reaches 100 percent and customers receive 504 timeout errors. No code changes or credential misuse are detected. Which type of malicious activity best explains this behavior?
An advanced persistent threat conducting low-and-slow data exfiltration
An insider threat using privileged access to disrupt the service
A zero-day exploit enabling remote code execution on the EC2 instances
A distributed denial-of-service (DDoS) attack against the application
Answer Description
A distributed denial-of-service (DDoS) attack attempts to exhaust a target's compute or network capacity by overwhelming it with a flood of traffic from many geographically dispersed sources. The sudden spike to 150,000 requests per minute, the large number of different source IP addresses, and the resulting resource exhaustion and service outages precisely match DDoS characteristics.
An insider threat would typically involve actions taken with legitimate, internal credentials and would not require traffic from thousands of external IPs. A zero-day exploit focuses on taking advantage of an unknown software vulnerability to gain unauthorized access or execute code; it does not inherently generate massive, distributed traffic surges. An advanced persistent threat (APT) relies on stealth and persistence to exfiltrate data over time and deliberately avoids causing noticeable service disruption. Therefore, the observed symptoms most closely align with a DDoS attack.
Ask Bash
What is a DDoS attack?
How does AWS mitigate DDoS attacks?
What is the role of an Application Load Balancer during a DDoS attack?
Your organization follows the NIST Risk Management Framework (RMF) for a newly migrated e-commerce workload on AWS. After implementing and authorizing all selected controls, you must now address the RMF "Monitor" step, which calls for continuous assessment of control effectiveness and automated risk reporting. Which AWS solution best fulfills this requirement by running compliance checks against industry standards and aggregating findings in a single dashboard?
Use Amazon Macie to scan S3 buckets and alert on sensitive data exposure.
Deploy AWS Config conformance packs and ingest their findings into AWS Security Hub for centralized compliance monitoring.
Enable AWS CloudTrail and store logs in Amazon S3, then run ad-hoc Athena queries for control verification.
Activate Amazon GuardDuty across all accounts and regions to detect threats in real time.
Answer Description
The RMF Monitor step requires ongoing control assessments and timely risk reporting. AWS Config conformance packs continuously evaluate resource configurations against predefined rule sets, while AWS Security Hub consumes the resulting findings, correlates them with other AWS service detections, and displays consolidated compliance scores mapped to frameworks such as NIST CSF, PCI DSS, and CIS. This combination delivers the automated, centralized monitoring expected by the RMF. CloudTrail with S3 logs, GuardDuty alone, or Macie alone provide valuable security data but do not automatically compare configurations against control baselines or generate framework-aligned compliance reports.
Ask Bash
What is the NIST Risk Management Framework (RMF)?
What are AWS Config conformance packs?
How does AWS Security Hub aid in compliance monitoring?
A security administrator manages multiple production AWS accounts. Compliance mandates a detective control that records every API call, including security-group modifications, and stores the logs centrally for at least 90 days. Investigators must be able to identify the calling IAM principal and its source IP address. Which AWS service combination MOST effectively satisfies this requirement?
Create an organization-wide, multi-region AWS CloudTrail trail and deliver the logs to a centralized Amazon S3 bucket with 90-day retention.
Turn on Amazon GuardDuty in every account and forward findings to EventBridge for centralized storage.
Activate AWS Config rules that detect security-group changes and store configuration snapshots in an S3 bucket.
Enable Amazon VPC Flow Logs for all VPCs and stream the logs to CloudWatch Logs for analysis and retention.
Answer Description
AWS CloudTrail is the detective control designed to record every API call made within an AWS account. A multi-region, organization-wide trail delivered to an S3 bucket preserves log files well beyond 90 days and captures details such as the IAM identity that made the call, the source IP address, request parameters, and response elements. VPC Flow Logs only capture network flow metadata and cannot show who invoked a change to a security group. AWS Config records resource configuration states and can trigger alerts, but it does not include the full caller identity or originating IP for every API call. Amazon GuardDuty generates threat-detection findings from several data sources; while useful, it does not provide a complete, immutable log of every API operation. Therefore, enabling AWS CloudTrail with centralized S3 storage best meets the stated detective-control requirement.
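The caller details the answer mentions appear directly in each CloudTrail record. A minimal sketch, using a trimmed sample record; the account ID, ARN, user name, and IP address below are made-up sample values:

```python
# A trimmed CloudTrail record with the fields investigators rely on.
# All identifiers here are fabricated examples, not real data.
sample_record = {
    "eventTime": "2024-05-01T12:34:56Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "AuthorizeSecurityGroupIngress",  # a security-group change
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {
        "type": "IAMUser",
        "arn": "arn:aws:iam::111122223333:user/alice",
        "userName": "alice",
    },
}

def who_did_what(record):
    """Pull the caller identity, source IP, and action from a CloudTrail record."""
    return {
        "principal": record["userIdentity"].get("arn"),
        "source_ip": record.get("sourceIPAddress"),
        "action": record.get("eventName"),
    }
```

This is exactly the information VPC Flow Logs cannot provide: flow logs show network metadata, but only CloudTrail ties the API action to the IAM principal and originating address.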
Ask Bash
How does AWS CloudTrail provide centralized logging?
What is the difference between AWS CloudTrail and AWS Config?
What details can investigators extract from AWS CloudTrail logs?
A financial services firm is migrating several internal tools to AWS. Compliance policy requires that anyone connecting to the AWS Management Console or to an EC2 bastion host must first see a reminder that all activities are being monitored and that unauthorized access can lead to prosecution. Which control BEST satisfies this requirement as a deterrent measure without directly enforcing or detecting violations?
Configure an account-level log-on banner for the AWS Management Console and a pre-login SSH warning message on the bastion host.
Require all administrators to use multi-factor authentication (MFA) before accessing the console or bastion host.
Enable AWS CloudTrail for all accounts and send real-time IAM authentication events to an Amazon SNS topic monitored by security operations.
Restrict console and SSH access to whitelisted corporate IP addresses using VPC network ACLs and IAM condition keys.
Answer Description
A deterrent control discourages inappropriate behavior by reminding potential violators of monitoring or penalties, rather than physically preventing or detecting actions. Displaying a log-on warning banner (for both the AWS Management Console and SSH sessions) clearly informs users that their actions are tracked and that misuse has legal consequences, which can discourage unauthorized activity.
- Requiring MFA or restricting access with network ACLs is a preventive control, because these mechanisms block access unless specific conditions are met.
- Enabling CloudTrail with SNS alerts is primarily a detective and corrective measure, identifying and responding to events after they occur.
Therefore, only a prominently displayed log-on warning banner fulfills the stated compliance need for a deterrent control.
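On the bastion-host side, a pre-login SSH warning is commonly configured with OpenSSH's `Banner` directive. A sketch, assuming a Linux host running OpenSSH; the file path and the banner wording are conventions, not requirements:

```
# /etc/ssh/sshd_config — display the warning before the login prompt
Banner /etc/issue.net
```

where `/etc/issue.net` contains the notice itself, for example: "WARNING: All activity on this system is monitored and recorded. Unauthorized access may result in prosecution." The console-side banner in the scenario is configured separately through the account's sign-in customization.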
Ask Bash
What is a deterrent control in cybersecurity?
How does an AWS log-on warning banner work?
What is a pre-login SSH warning message, and how is it configured?
Your company stores project deliverables in an Amazon S3 bucket. A court issues a litigation hold on a subset of those objects. The bucket is version-enabled and has a lifecycle rule that moves objects to S3 Glacier after 30 days; developers currently have permission to delete objects. To satisfy eDiscovery preservation requirements, you must ensure the specified data cannot be altered or removed while keeping administrative overhead low. Which action provides the most appropriate solution?
Attach an IAM policy that denies all users the s3:DeleteObject action on the bucket and enable CloudTrail logging.
Copy the objects to an on-premises read-only file server and delete them from the S3 bucket to prevent changes.
Suspend the bucket's lifecycle policy and rely on S3 versioning to recover any objects that might be deleted.
Enable S3 Object Lock in Compliance mode on the affected objects and apply a legal hold until the litigation is cleared.
Answer Description
Amazon S3 Object Lock lets you place objects in a write-once-read-many (WORM) state. Enabling Object Lock in Compliance mode means no user, not even the root account, can modify or delete protected objects until the retention period or legal hold is cleared, meeting strict litigation-hold and eDiscovery requirements. Adding a legal hold flag allows indefinite protection without setting a retention expiry. Merely suspending lifecycle policies or relying on versioning still allows privileged users to delete object versions, violating preservation obligations. Copying data on-premises and deleting it from S3 breaks chain-of-custody and complicates discovery. An IAM deny statement prevents deletes but cannot block overwrites and is reversible by an administrator, so it lacks the immutability assurances required for legal holds.
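The legal-hold step can be sketched as the parameters the S3 `put_object_legal_hold` API expects. A minimal sketch with placeholder bucket, key, and version-ID values; it assumes Object Lock is already enabled on the bucket, which is a prerequisite for holds:

```python
# Hedged sketch: build the request for placing a legal hold on one object
# version. The bucket, key, and version ID are placeholder values.
def legal_hold_request(bucket, key, version_id):
    """Build kwargs for s3_client.put_object_legal_hold(**kwargs)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "VersionId": version_id,  # holds apply per object version
        "LegalHold": {"Status": "ON"},  # set to "OFF" once litigation clears
    }

req = legal_hold_request("deliverables-bucket", "project/report.pdf", "example-version-id")
```

Because a legal hold has no expiry, the objects stay immutable for as long as the litigation lasts, independent of any retention period.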
Ask Bash
What is Amazon S3 Object Lock and how does Compliance mode work?
What is the difference between S3 Object Lock Legal Hold and Retention Period?
Why are IAM policies and versioning insufficient for eDiscovery requirements?
During an urgent incident, your manager instructs you to disable server-side encryption on an Amazon S3 bucket that stores customers' personally identifiable information so a legacy analytics job can finish more quickly. As the only SSCP on the team, which response BEST upholds the (ISC)² Code of Ethics canons?
Proceed with the request but enable detailed AWS CloudTrail logging to detect any misuse of the unencrypted data.
Comply immediately because meeting the business deadline is the highest priority once management has accepted the risk.
Refuse to remove encryption and recommend an alternative solution that maintains protection of customer data while supporting the time-critical job.
Perform the change but record the manager's approval in the change log to maintain accountability.
Answer Description
Disabling encryption would lower the confidentiality of customer PII and jeopardize the public's trust in the organization's services. The first (ISC)² Code of Ethics canon requires members to protect society, the commonwealth, and the infrastructure; the second demands they act legally and responsibly. Simply obeying, even with logging, still violates these canons, while enabling extra monitoring after removing encryption does not mitigate the fundamental risk. The most ethical course is to refuse the insecure request and offer an alternative that preserves encryption yet meets business objectives, such as using temporary performance tuning or parallel encrypted buckets.
Ask Bash
Why is encryption important for protecting Personally Identifiable Information (PII)?
What does the (ISC)² Code of Ethics canon require in this scenario?
What alternative solutions can preserve encryption while addressing urgent analytics jobs?
You are the SSCP on call for an AWS-hosted SaaS workload. A senior developer asks you to make an unencrypted RDS snapshot of a tenant's production database and share it to their personal account so they can debug a feature before tomorrow's release. The tenant's contract and internal policy both prohibit disclosure without written approval. According to the (ISC)² Code of Ethics canons, what is the most appropriate first action?
Comply because faster debugging improves system availability for all tenants.
Encrypt the snapshot and share it only with the developer's personal AWS account to limit exposure.
Refuse to share the snapshot and promptly escalate the request through the company's compliance or management channel.
Anonymize sensitive columns in the snapshot, then send it to the developer as a compromise.
Answer Description
The Code of Ethics mandates protecting society and the public, acting honorably, honestly, justly, responsibly, and legally, and providing diligent service to principals. Sharing customer data without written consent would violate contractual and legal obligations and jeopardize confidentiality, so the request must be refused. The incident should be promptly reported or escalated through the organization's compliance or management channels. Encrypting, partially anonymizing, or sharing to a personal account still constitutes unauthorized disclosure and does not satisfy the canons.
Ask Bash
What is the (ISC)² Code of Ethics?
What is an RDS snapshot and how does it relate to security?
Why is escalated compliance important in ethical security decision-making?
An enterprise operating multiple AWS accounts wants to establish stronger governance and tasks an SSCP-certified practitioner with writing a Cloud Acceptable Use Policy that will serve as an administrative security control complementing existing technical safeguards. Following industry guidance for security policies, which type of information should the practitioner emphasize in the policy?
High-level statements of management intent that define acceptable and unacceptable behavior when using organizational and cloud resources.
Detailed step-by-step procedures for configuring AWS Identity and Access Management (IAM) roles and policies.
Specific metrics and thresholds required to trigger auto-scaling actions for production workloads.
An exhaustive inventory of every S3 bucket and its encryption status, updated weekly.
Answer Description
Security policies are administrative controls that communicate management's intent and expectations. They are concise, high-level statements that describe allowed and prohibited activities, set overall direction, and assign authority. Policies do not contain the granular how-to steps found in procedures, the numeric thresholds found in standards or baselines, or constantly changing asset inventories. Therefore, focusing the Cloud Acceptable Use Policy on broad directives that define acceptable and unacceptable behavior is the correct approach, while the other options describe content better suited for supporting documents such as procedures, standards, or operational inventories.
Ask Bash
What is an administrative security control?
How does a Cloud Acceptable Use Policy differ from technical safeguards?
Why are policies considered high-level statements and not detailed instructions?
Your company is launching a customer-facing REST API on AWS. During the architecture review, you must show which design decision specifically addresses the availability element of the CIA triad. The workload uses Amazon EC2 instances behind an Application Load Balancer. Which of the following choices BEST demonstrates that the API will remain accessible and responsive during component failures?
Enable AWS CloudTrail and store the logs in an S3 bucket with Object Lock to preserve evidence for forensic investigations.
Protect the Application Load Balancer with AWS WAF configured to block SQL injection and cross-site scripting attacks.
Deploy EC2 instances across two Availability Zones, register them with the load balancer's target group, and enable Auto Scaling health checks to replace unhealthy instances automatically.
Require TLS 1.2 for all client connections to encrypt traffic between clients and the API endpoints.
Answer Description
Availability is concerned with ensuring that authorized users have timely and reliable access to systems and data. Placing EC2 instances in multiple Availability Zones and enabling health-check-based Auto Scaling provides redundancy and automatic failover, so the service continues to operate if one instance or an entire AZ becomes unavailable. Requiring TLS 1.2 protects data in transit and supports confidentiality and integrity, not availability. AWS WAF rules mitigate common web attacks, contributing primarily to integrity and confidentiality. Storing CloudTrail logs with Object Lock supports accountability and non-repudiation rather than keeping the API reachable. Therefore, distributing resources across AZs with automatic recovery is the option that best fulfills the availability objective.
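The winning design can be sketched as the Auto Scaling group settings it implies. A minimal Python sketch of the request parameters for `create_auto_scaling_group`; the group name, subnet IDs, target group ARN, launch template, and sizes are all placeholder assumptions:

```python
# Hedged sketch of a multi-AZ Auto Scaling group with ELB health checks.
# Every identifier below is a placeholder, not a value from the scenario.
def multi_az_asg_request():
    """Build kwargs for autoscaling_client.create_auto_scaling_group(**kwargs)."""
    return {
        "AutoScalingGroupName": "api-asg",
        "MinSize": 2,   # keep at least one instance per Availability Zone
        "MaxSize": 6,
        # Two subnets in different AZs give the redundancy availability needs
        "VPCZoneIdentifier": "subnet-az1-example,subnet-az2-example",
        "TargetGroupARNs": [
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api-tg/example"
        ],
        "HealthCheckType": "ELB",  # replace instances the ALB marks unhealthy
        "HealthCheckGracePeriod": 90,
        "LaunchTemplate": {"LaunchTemplateName": "api-template", "Version": "$Latest"},
    }
```

With `HealthCheckType` set to `ELB`, an instance that fails the load balancer's health checks is terminated and replaced automatically, which is the self-healing behavior the answer highlights.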
Ask Bash
What is the purpose of Auto Scaling health checks in AWS?
What are Availability Zones in AWS and how do they improve service reliability?
How does an Application Load Balancer contribute to availability in AWS?
During a recent security incident review, you discover that several employees reset their MFA credentials after receiving voice calls that appeared to come from the corporate help-desk phone number. Because the company now allows soft-phone use on personal smartphones, you must update the security awareness training to reduce the risk of future vishing attacks without disrupting legitimate support interactions. Which of the following guidance should you emphasize?
Ask the caller to verify legitimacy by sending a confirmation text message from the same phone number before proceeding.
Enable mobile carrier spam-call filtering on all employee devices to automatically block unrecognized numbers.
Configure soft-phones to accept calls only from internal extensions and direct all other calls to voicemail for later review.
Hang up and call the official help-desk number listed in the corporate directory before acting on any request received by phone.
Answer Description
Vishing relies on spoofed Caller ID and social engineering to trick victims into revealing sensitive information or performing actions such as password resets. The most effective user-level countermeasure is to teach employees to independently verify any unsolicited support request. Hanging up and dialing the official help-desk number published on the company intranet or ID badge forces attackers to lose control of the channel and prevents them from exploiting spoofed numbers. Asking callers to send an SMS does not guarantee authenticity, since SMS can also be spoofed. Carrier spam-call blocking reduces nuisance calls but cannot stop targeted spoofed numbers. Refusing all non-internal calls would hinder business operations and is impractical. Therefore, instructing users to perform a trusted call-back is the best balance of security and usability.
Ask Bash
What is vishing and how is it different from phishing?
How does Caller ID spoofing work and why is it dangerous?
What practical steps can employees take to identify and prevent vishing attacks?
Your team is building a serverless payment processing API on AWS. Compliance requirements demand proof that any request recorded in the system can later be cryptographically tied to the identity that submitted it, preventing that user from denying the action. Which approach best meets this non-repudiation requirement while aligning with AWS best practices?
Encrypt all data in S3 using SSE-S3 and enable object lock to prevent deletion.
Store API request details in Amazon DynamoDB and replicate the table across regions with global tables for high availability.
Protect the API with AWS WAF and enable AWS Shield Standard to block malicious traffic.
Enable AWS CloudTrail for the account and turn on log file integrity validation; store the logs in an S3 bucket with versioning and MFA Delete.
Answer Description
Non-repudiation requires evidence that an action was performed by a specific principal and that the evidence cannot be altered without detection. Enabling AWS CloudTrail captures every API call together with the caller's identity and signs each log file with a SHA-256 hash and public-key signature. Turning on CloudTrail's log file integrity validation lets you verify that logs have not been modified, while S3 versioning and MFA Delete protect the evidence from tampering or deletion. AWS WAF and Shield improve availability but do not bind actions to identities. Server-side encryption and Object Lock secure stored objects but do not record who performed an action or provide cryptographic validation of logs. Replicating data in DynamoDB enhances durability and availability, not non-repudiation.
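The integrity-validation idea reduces to comparing each delivered log file's SHA-256 against the hash recorded in the signed digest file; in practice the `aws cloudtrail validate-logs` CLI command performs this for you. A minimal sketch of the underlying hash comparison:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 of a byte string, hex-encoded (the form digest files record)."""
    return hashlib.sha256(data).hexdigest()

def log_file_untampered(log_bytes: bytes, digest_hash_hex: str) -> bool:
    """True if the log file still matches the hash from the signed digest."""
    return sha256_hex(log_bytes) == digest_hash_hex

# Simulated: the hash a digest file would have recorded at delivery time
original = b'{"Records": []}'
recorded = sha256_hex(original)
```

Any after-the-fact edit to the log file changes its hash, so the mismatch is detectable; the digest file itself is signed with CloudTrail's private key, which is what makes the evidence chain cryptographic rather than merely procedural.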
Ask Bash
What is AWS CloudTrail and how does it support non-repudiation?
What is log file integrity validation in AWS CloudTrail?
How does S3 versioning and MFA Delete protect log files?
An SSCP serving as the cloud change manager receives a request to add a new cross-account bucket policy to an existing Amazon S3 bucket that stores customer purchase records. Prior to approving the RFC, the SSCP must conduct the security impact analysis. Which action will provide the MOST relevant information for this analysis?
Estimate additional monthly storage and data-transfer charges with AWS Pricing Calculator to confirm budget impact.
Apply new cost-allocation and owner tags to the bucket to ensure accurate reporting in inventory exports.
Use IAM Access Analyzer to simulate the proposed bucket policy and list any external principals that would receive access, then review the results against the bucket's classification.
Perform load testing with CloudWatch metrics to verify object retrieval latency after the policy change.
Answer Description
Running IAM Access Analyzer shows exactly which external AWS principals would gain access if the new bucket policy is applied, allowing the SSCP to compare the resulting access with the bucket's data classification and least-privilege requirements. Performance testing, cost estimates, and tagging may be useful for other evaluations, but they do not directly reveal the security implications of expanded cross-account access, making them less pertinent to the impact analysis.
Ask Bash
What is IAM Access Analyzer?
What is a cross-account bucket policy in S3?
What is data classification, and why is it important in security impact analysis?
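The kind of analysis IAM Access Analyzer automates can be illustrated with a small helper that scans a bucket policy for principals outside the bucket owner's account. The account IDs and policy document below are made up for the example; Access Analyzer itself performs a far more thorough, formally verified evaluation.

```python
import json

OWN_ACCOUNT = "111111111111"  # hypothetical bucket-owner account

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::purchase-records/*"}
  ]
}
""")

def external_principals(policy: dict, own_account: str) -> list[str]:
    """List principal ARNs whose account ID differs from our own."""
    found = []
    for stmt in policy.get("Statement", []):
        arns = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(arns, str):
            arns = [arns]
        for arn in arns:
            # ARN format: arn:aws:iam::<account-id>:root
            if arn.split(":")[4] != own_account:
                found.append(arn)
    return found

print(external_principals(policy, OWN_ACCOUNT))
# ['arn:aws:iam::222222222222:root']
```

Comparing the resulting list against the bucket's data classification is exactly the judgment call the security impact analysis has to make.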
Your organization runs production workloads in AWS and must prove during quarterly security audits that every change to resource configurations (for example, S3 bucket ACLs and security-group rules) has been recorded and automatically evaluated against the company's approved baseline for the past 12 months. Which AWS service should you enable to most effectively meet this compliance-verification requirement?
AWS Config with conformance packs
Amazon CloudWatch Logs
Amazon GuardDuty
AWS CloudTrail Event history
Answer Description
AWS Config continuously records the configuration state of supported AWS resources, stores historical snapshots for as long as you choose, and evaluates each change against rules or conformance packs that map to organizational or regulatory requirements. This enables auditors to verify that resources remained compliant (or to see exactly when they drifted) throughout the requested 12-month period.
- CloudTrail records API calls but not the resulting resource configurations and retains only 90 days of event history by default.
- GuardDuty focuses on threat detection and does not perform configuration compliance checks.
- CloudWatch Logs ingests log data but does not automatically track or evaluate resource configurations. Therefore, AWS Config with conformance packs is the most appropriate choice.
Ask Bash
What are AWS Config conformance packs?
How does AWS Config differ from AWS CloudTrail?
What is the purpose of Amazon GuardDuty if it doesn't track compliance?
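What a Config rule does can be shown with a toy evaluation: each recorded configuration change is checked against a baseline, producing the compliance timeline auditors ask for. The resource snapshots and the rule itself are invented for this sketch; real Config rules run as managed rules or Lambda functions.

```python
def public_acl_rule(config: dict) -> str:
    """COMPLIANT unless the bucket ACL grants public read."""
    return "NON_COMPLIANT" if config.get("acl") == "public-read" else "COMPLIANT"

# Configuration history as Config would record it: (timestamp, state).
history = [
    ("2025-01-05", {"acl": "private"}),
    ("2025-03-12", {"acl": "public-read"}),   # drift introduced
    ("2025-03-13", {"acl": "private"}),       # drift remediated
]

timeline = [(ts, public_acl_rule(cfg)) for ts, cfg in history]
for ts, verdict in timeline:
    print(ts, verdict)
```

The timeline shows not just the current state but exactly when the resource drifted and when it was remediated, which is what distinguishes Config's value from CloudTrail's raw API record.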
An SSCP is tasked with designing a seven-year archive for monthly financial transaction logs stored in AWS. The solution must ensure that each log file is immutable after it is written, encrypted at rest, cost-efficient for long-term retention, and still retrievable within 12 hours to satisfy audit requests. Which approach best meets these secure long-term storage requirements?
Store compressed logs in Amazon S3 Glacier Deep Archive, enable S3 Object Lock in compliance mode, and apply server-side encryption with AWS KMS-managed keys.
Configure AWS Backup to copy logs into a warm storage vault with a seven-year retention policy and cross-Region replication.
Upload logs to an Amazon S3 Standard bucket with versioning enabled and default server-side encryption (SSE-S3).
Retain logs on encrypted Amazon EBS volumes attached to a stopped EC2 instance and take annual snapshots for seven years.
Answer Description
Amazon S3 Glacier Deep Archive is AWS's lowest-cost storage class for long-term retention. When combined with S3 Object Lock in compliance mode, each object is placed in a write-once, read-many (WORM) state that cannot be altered or deleted until the retention period expires, meeting immutability requirements. Server-side encryption with KMS keys ensures that data remains encrypted at rest. Standard retrieval from Glacier Deep Archive is typically available within 12 hours, which satisfies the audit retrieval window at minimal cost.
Storing data on encrypted EBS volumes, even if snapshots are taken, does not provide WORM protection and incurs higher ongoing storage costs. Keeping data in S3 Standard with versioning is far more expensive over seven years and versioning alone does not prevent tampering. AWS Backup warm storage offers longer retention, but it is priced higher than Glacier Deep Archive and does not inherently enforce object-level immutability. Therefore, the Glacier Deep Archive solution with Object Lock and KMS encryption is the only option that addresses all security, cost, and retrieval requirements.
Ask Bash
What is Amazon S3 Glacier Deep Archive, and why is it suitable for long-term storage?
What is S3 Object Lock in compliance mode, and how does it enforce immutability?
How does server-side encryption with AWS KMS-managed keys protect data at rest?
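The parameters a seven-year WORM upload would combine can be sketched as follows. The bucket and key names are hypothetical, and the boto3 call is shown as a comment so the snippet stays self-contained; seven years is approximated as 7 × 365 days plus two leap days, whereas a real policy would anchor retention to the audit calendar.

```python
from datetime import datetime, timedelta

written_at = datetime(2025, 1, 31)
retain_until = written_at + timedelta(days=7 * 365 + 2)

upload_params = {
    "Bucket": "finance-archive",             # hypothetical bucket name
    "Key": "logs/2025-01.json.gz",
    "StorageClass": "DEEP_ARCHIVE",          # lowest-cost archive class
    "ServerSideEncryption": "aws:kms",       # encrypt at rest with KMS
    "ObjectLockMode": "COMPLIANCE",          # WORM: no early deletion by anyone
    "ObjectLockRetainUntilDate": retain_until,
}

# boto3.client("s3").put_object(Body=log_bytes, **upload_params)
print(retain_until.date())
```

Because the lock mode is COMPLIANCE rather than GOVERNANCE, not even the root user can shorten the retention period once the object is written.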
Your organization is containerizing a payroll application and deploying it to Amazon ECS through a CI/CD workflow that uses CodeCommit for source control, CodeBuild for builds, and CodeDeploy for releases. As the SSCP responsible for secure development practices, which action should you recommend to embed security early in the lifecycle, minimize cost, and prevent vulnerable code from ever reaching any runtime environment?
Add an automated SAST job to the CodeBuild stage that scans every pull request before it is merged.
Enable AWS WAF with managed rule groups on the production Application Load Balancer after the first release.
Hire an external firm to perform authenticated penetration tests against production on a quarterly schedule.
Require the security team to conduct manual code reviews only after the application is deployed to the staging environment.
Answer Description
Integrating an automated static application security testing (SAST) stage in the CodeBuild phase implements the principle of "shift-left" security central to DevSecOps. Because the scan runs on every pull request, defects are identified before code is merged or deployed, when they are cheapest and easiest to fix. Quarterly penetration tests may uncover issues, but only long after vulnerable code is in production and at a higher cost. Manual reviews after deployment detect problems late, rely on human availability, and do not scale. A web application firewall protects running workloads but does nothing to stop insecure code from being introduced during development.
Ask Bash
What is SAST and why is it important in the CI/CD pipeline?
What does the 'shift-left' principle mean in DevSecOps?
How does AWS CodeBuild integrate security scanning tools like SAST?
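A shift-left SAST stage can be expressed directly in the CodeBuild buildspec. This is a minimal sketch, not the organization's actual pipeline: the tool choice (Bandit, a Python SAST scanner) and source path are assumptions for illustration; any scanner that exits non-zero on findings blocks the merge the same way.

```yaml
version: 0.2

phases:
  install:
    commands:
      # Tool choice is an assumption for illustration.
      - pip install bandit
  build:
    commands:
      # A non-zero exit code fails the build, so the pull request cannot
      # be merged while high-severity findings remain.
      - bandit -r src/ --severity-level high
```

Because the scan runs on every pull request rather than on a schedule, defects surface before the code reaches any runtime environment, when they are cheapest to fix.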
A security team must keep Apache access logs for 13 months to satisfy an audit requirement. Logs are written every minute to an Amazon S3 bucket in eu-west-1. The solution must minimize storage costs and guarantee that no user or process can delete or overwrite the logs during the retention period. Which approach best meets these goals?
Copy the log files to Amazon EFS and enable EFS Infrequent Access with lifecycle management set to 400 days.
Use AWS Backup with a 400-day backup plan that protects the S3 bucket and stores the backups in S3 Glacier Deep Archive.
Enable S3 Object Lock in Compliance mode with a 400-day retention period and add a lifecycle rule that transitions objects to S3 Glacier Instant Retrieval after 30 days.
Enable S3 Versioning and add a lifecycle rule that moves noncurrent object versions to S3 Glacier Flexible Retrieval after 30 days and permanently deletes objects after 400 days.
Answer Description
Using Amazon S3 Object Lock in Compliance mode places every log object in a write-once, read-many (WORM) state; no one, including the root user, can delete or alter the object until the retention date passes. A lifecycle rule can still move the objects to a colder, less expensive storage class such as S3 Glacier Instant Retrieval after 30 days, reducing cost while leaving the WORM protection in place for the full 400-day period. Versioning alone cannot stop a privileged user from permanently deleting all versions; AWS Backup for S3 does not protect the original objects from deletion; and moving the data to EFS introduces higher cost and lacks WORM protection. Therefore, enabling Object Lock in Compliance mode with an appropriate retention period and adding a transition rule provides both immutability and cost efficiency.
Ask Bash
What is S3 Object Lock and how does Compliance mode work?
What are the benefits of transitioning data to S3 Glacier, and what is S3 Glacier Instant Retrieval?
Why is S3 Versioning insufficient protection for preventing deletion or modification of logs?
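The two configurations the correct answer combines can be sketched as the dictionaries a boto3 call would accept. The bucket name, rule ID, and prefix are hypothetical; 400 days covers the 13-month requirement with margin.

```python
# Default Object Lock retention applied to every new object in the bucket.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 400}},
}

# Lifecycle rule: move logs to a colder class after 30 days. Note there is
# no expiration action; Object Lock governs when deletion becomes possible.
lifecycle_config = {
    "Rules": [{
        "ID": "archive-apache-logs",
        "Status": "Enabled",
        "Filter": {"Prefix": "access-logs/"},
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER_IR"}],
    }]
}

# boto3.client("s3").put_object_lock_configuration(
#     Bucket="apache-logs", ObjectLockConfiguration=object_lock_config)
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="apache-logs", LifecycleConfiguration=lifecycle_config)
```

Keeping the immutability control (Object Lock) separate from the cost control (lifecycle transition) is what lets the design satisfy both requirements at once.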
A fintech startup is planning an Amazon S3 Glacier vault to archive customer tax records for seven years. Regulations emphasize preventing any unauthorized disclosure of the records, while occasional bit-level corruption is acceptable and retrieval delays of several hours are permissible. Which security objective is the primary driver for the team's storage design decisions?
Availability
Confidentiality
Non-repudiation
Integrity
Answer Description
The requirement focuses on preventing unauthorized disclosure of customer tax records, which maps directly to the security objective of confidentiality. Integrity addresses protection against unauthorized modification, availability concerns timely and reliable access, and non-repudiation provides proof of origin or delivery. Because the scenario tolerates some corruption and slow retrieval but does not tolerate exposure to unauthorized parties, confidentiality is the key objective.
Ask Bash
What does confidentiality mean in the context of data security?
What is Amazon S3 Glacier, and how does it support confidentiality?
How does encryption prevent unauthorized disclosure of sensitive data?