ISC2 Systems Security Certified Practitioner (SSCP) Practice Test
Use the form below to configure your ISC2 Systems Security Certified Practitioner (SSCP) Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 Systems Security Certified Practitioner (SSCP) Information
About the SSCP
The Systems Security Certified Practitioner (SSCP) credential from ISC2 is aimed at hands-on IT and security professionals—systems administrators, network engineers, analysts and similar roles—who want vendor-neutral proof that they can implement, monitor and secure enterprise infrastructure. To sit for the exam you need just one year of cumulative, paid work in any of the seven SSCP domains, or you can earn “Associate of ISC2” status and finish the experience requirement within two years. This low-friction entry point, plus ANSI/ISO 17024 accreditation and U.S. DoD 8140.03 approval, makes the SSCP an attractive stepping-stone toward more senior certs such as the CISSP.
What’s Inside the Latest SSCP Exam
After a 2024 job-task analysis, ISC2 moved the SSCP to Computerized Adaptive Testing (CAT) on October 1, 2025. The new format dynamically selects 100-125 multiple-choice or advanced items and gives you two hours to reach the 700/1000 cut score.
The seven domains are:
- Security Concepts & Practices (16%)
- Access Controls (15%)
- Risk Identification, Monitoring & Analysis (15%)
- Incident Response & Recovery (14%)
- Cryptography (9%)
- Network & Communications Security (16%)
- Systems & Application Security (15%)
Adaptive delivery tightens exam security, shortens seat time and focuses questions on your demonstrated ability.
SSCP Practice Exams
Working through full-length practice tests is one of the most effective ways to convert study hours into passing scores. Timed drills condition you to manage a two-hour adaptive session, while score reports reveal domain-level gaps you can attack with flash cards or lab work. ISC2’s own self-paced training now bundles “practical assessments” that mirror live-exam item types; third-party banks from publishers such as Pearson or Skillsoft add even more question variety. Candidates who cycle through several mocks consistently report higher confidence, steadier pacing and fewer surprises on test day.
Exam Preparation Tips
Plan on at least six weeks of structured study: review the official exam outline, lab the high-weight domains (especially access control and network security), and join an online study group for peer explanations. On exam day, remember that CAT will stop early if it is statistically sure of your pass/fail status—so stay calm if the question count feels short. Above all, keep learning light but continuous; as recent SSCP holders note, “be calm and patient…connect with those who have passed to motivate yourself and learn from their experiences.”

Free ISC2 Systems Security Certified Practitioner (SSCP) Practice Test
- 20 Questions
- Unlimited time
- Domains covered: Security Concepts and Practices; Access Controls; Risk Identification, Monitoring and Analysis; Incident Response and Recovery; Cryptography; Network and Communications Security; Systems and Application Security
A financial services company uses an 802.1X-based NAC solution to verify that laptops have up-to-date antivirus signatures before they receive a production VLAN address. Auditors now insist that the NAC must also detect when a laptop becomes non-compliant during the workday and automatically move it to a quarantine network without user intervention. Which NAC feature meets this new requirement?
Link-layer encryption using 802.1AE (MACsec) on all access links
One-time certificate authentication performed only during pre-admission
Periodic post-admission posture assessment with dynamic VLAN re-assignment
Port security that restricts the number of MAC addresses allowed on each switch port
Answer Description
The requirement describes a control that is applied after the endpoint has already been admitted to the network; it must continue to monitor the device's health and, if the posture changes, dynamically enforce a different policy (for example by moving the host to a remediation or quarantine VLAN). This is a classic post-admission control implemented through periodic or continuous posture assessment combined with dynamic VLAN assignment. One-time pre-admission checks, port-based MAC limits, or link-layer encryption all occur either before admission or focus on different security objectives, and they do not provide ongoing health verification or automated quarantine.
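To make the mechanism concrete, here is a minimal Python sketch of the post-admission loop, assuming invented VLAN IDs, endpoint fields, and a 24-hour signature threshold; nothing below is a real NAC vendor API.

```python
PRODUCTION_VLAN = 10      # illustrative VLAN IDs
QUARANTINE_VLAN = 999

def posture_compliant(endpoint: dict) -> bool:
    # Example health check: antivirus signatures must be under 24 hours old.
    return endpoint["av_signature_age_hours"] <= 24

def enforce(endpoint: dict) -> None:
    # Post-admission control: re-evaluate posture and reassign the VLAN.
    target = PRODUCTION_VLAN if posture_compliant(endpoint) else QUARANTINE_VLAN
    if endpoint["vlan"] != target:
        # A real NAC would push a RADIUS Change-of-Authorization (CoA)
        # to the switch here rather than mutating a dictionary.
        endpoint["vlan"] = target
        print(f"{endpoint['mac']}: moved to VLAN {target}")

laptop = {"mac": "aa:bb:cc:dd:ee:ff", "vlan": PRODUCTION_VLAN,
          "av_signature_age_hours": 2}
enforce(laptop)                        # still compliant: no change
laptop["av_signature_age_hours"] = 72  # signatures go stale during the day
enforce(laptop)                        # automatically quarantined
```

Run periodically or on posture-change events, this is exactly the behavior the auditors asked for: enforcement after admission, with no user intervention.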
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a post-admission posture assessment in NAC?
What is dynamic VLAN assignment, and how does it work in NAC?
How does 802.1X-based NAC differ from other NAC mechanisms?
A security engineer is configuring network device administration for a fleet of edge routers that must support multifactor authentication, command-by-command authorization, and granular accounting logs. The devices already speak AAA protocols and must keep user passwords encrypted end-to-end across the corporate MPLS WAN. Which access control solution best satisfies all of these requirements?
Integrate the routers with the existing RADIUS server using EAP-TLS for multifactor authentication.
Configure Kerberos authentication on the routers and forward audit logs to a SIEM.
Deploy a TACACS+ server cluster and point the routers' AAA settings to it.
Use LDAP over TLS directly on the routers and log commands locally to syslog.
Answer Description
TACACS+ encrypts the entire payload of the authentication packet, supports per-command authorization decisions, and records detailed accounting information, meeting the engineer's encryption, authorization, and logging needs. RADIUS only encrypts the user's password, cannot perform command-by-command authorization, and provides less granular accounting, so it falls short of the stated requirements.
Ask Bash
What is TACACS+ and how does it work?
How does TACACS+ compare to RADIUS?
Why is encryption important in AAA protocols, and how does TACACS+ ensure safe password handling?
Your organization has established an IPsec site-to-site VPN between its on-premises firewall and an AWS virtual private gateway. During performance testing, large file transfers (packets over about 1400 bytes) consistently fail, while small pings succeed. Packet captures show repeated ICMP "fragmentation needed" messages and no ESP packets larger than 1420 bytes. Which common IPsec deployment issue is most likely responsible for this behavior?
The VPN is using transport mode rather than tunnel mode, so exposed inner headers are being filtered by intermediate routers.
ESP overhead causes packets to exceed the path MTU, and with the DF bit set they cannot be fragmented, so large packets are dropped.
Perfect Forward Secrecy (PFS) is disabled, so the reuse of keying material triggers replay protection and discards large packets.
Phase 1 is configured for aggressive mode instead of main mode, leading to periodic re-authentication and packet loss.
Answer Description
Encapsulating an IP packet with IPsec ESP adds 20-70 bytes of header and trailer information. If the original packet is already near the path MTU (often 1500 bytes on Ethernet), the extra overhead pushes the frame size above the MTU. Because the original packet usually carries the Don't Fragment (DF) bit, intermediate routers cannot fragment the now-larger packet. They instead send ICMP Type 3 Code 4 ("fragmentation needed") messages, which many firewalls or hosts ignore. The result is that large packets are repeatedly dropped, while smaller ones pass. Lowering the MSS or enabling proper Path-MTU discovery on the VPN devices addresses the problem. Phase-1 mode selection, PFS settings, and tunnel-versus-transport mode choices do not cause size-related drops; they affect security strength or header exposure but not fragmentation behavior.
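The arithmetic is easy to verify. The sketch below uses assumed but typical overhead values for ESP tunnel mode with AES-CBC and HMAC-SHA1; exact figures vary by cipher suite.

```python
ETHERNET_MTU = 1500
OUTER_IP_HEADER = 20   # new IPv4 header added by tunnel mode
ESP_OVERHEAD = 56      # SPI, sequence number, IV, padding, trailer, ICV (approx.)

largest_inner_packet = ETHERNET_MTU - OUTER_IP_HEADER - ESP_OVERHEAD
print(largest_inner_packet)   # 1424: packets much above ~1400 bytes no longer fit

# Remediation: clamp the TCP MSS so payload + TCP/IP headers + ESP overhead
# stays under the path MTU.
TCP_IP_HEADERS = 40
print(largest_inner_packet - TCP_IP_HEADERS)  # ~1384 is a safe MSS here
```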
Ask Bash
What is the Path MTU, and why is it important in IPsec VPNs?
What does the 'Don't Fragment' (DF) bit do in an IP packet?
How can lowering the MSS help with IPsec VPN packet fragmentation issues?
A German SaaS provider plans to migrate its customer relationship database, which contains EU residents' personal data, to Amazon S3 and Amazon RDS. To satisfy GDPR requirements for data locality and the right to erasure while keeping operational overhead low, which approach BEST meets the company's obligations?
Host the workloads in AWS GovCloud (US), encrypt data with customer-managed keys located in the United States, and rely on the EU-US Privacy Shield framework for lawful transfer.
Use S3 buckets only in eu-central-1 with server-side encryption (SSE-S3) and place all objects under S3 Object Lock in Compliance mode to address the right to be forgotten.
Keep all S3 buckets and RDS instances in eu-central-1 or eu-west-1, encrypt the data with customer-managed AWS KMS keys that never leave those Regions, and rely on the GDPR Data Processing Addendum already incorporated into the AWS Service Terms.
Store the data in any convenient AWS Region and enable cross-Region replication to an EU Region, assuming AWS will act as the data controller under GDPR.
Answer Description
The company remains the data controller under GDPR and must keep the personal data inside the European Economic Area unless additional transfer safeguards are in place. Using only EU Regions such as eu-central-1 or eu-west-1 meets data-residency expectations. Encrypting the data with customer-managed AWS KMS keys that are also restricted to those Regions safeguards confidentiality and allows the controller to delete the keys later, which renders the data permanently unreadable and can fulfill the GDPR right to erasure. AWS acts as the data processor, and its GDPR Data Processing Addendum is automatically part of the AWS Service Terms, so no separate signature is required (although the customer can request a signed copy). The other options either move data outside the EU, rely on an invalid transfer mechanism, prevent deletion with Object Lock Compliance mode, or incorrectly shift the controller role to AWS, so they fail to meet GDPR obligations.
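As a sketch of how the correct option maps onto API calls, the boto3 snippet below pins both the KMS key and the S3 object to eu-central-1; the bucket and object names are hypothetical, and crypto-shredding is shown commented out.

```python
import boto3

REGION = "eu-central-1"            # EU-only Region for data residency
BUCKET = "example-crm-archive"     # hypothetical bucket in the same Region

kms = boto3.client("kms", region_name=REGION)
key_id = kms.create_key(Description="CRM data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)   # automatic annual rotation

s3 = boto3.client("s3", region_name=REGION)
s3.put_object(
    Bucket=BUCKET,
    Key="customers/records.parquet",
    Body=b"...",
    ServerSideEncryption="aws:kms",     # SSE-KMS: every key use is CloudTrail-logged
    SSEKMSKeyId=key_id,
)

# Right to erasure via crypto-shredding: deleting the key renders all
# ciphertext encrypted under it permanently unreadable.
# kms.schedule_key_deletion(KeyId=key_id, PendingWindowInDays=7)
```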
Ask Bash
What is the role of a data controller and data processor under GDPR?
What are AWS KMS keys, and how do they help with GDPR compliance?
What does the AWS GDPR Data Processing Addendum include, and why is it important?
An enterprise is updating its business continuity plan for a potential influenza pandemic that could sideline up to 40 percent of employees for several weeks. The primary data center, power, and WAN links are expected to remain fully operational. Which measure should receive top priority in the pandemic response plan to ensure delivery of critical IT services during the outbreak?
Contract a geographically distant hot site and prepare automation scripts to fail over the entire data center.
Install additional diesel generators to guarantee uninterrupted power at the primary facility.
Increase the frequency of off-site backups to nightly to minimize potential data loss.
Cross-train personnel so that backup staff can perform all mission-critical IT operations if primary staff are absent.
Answer Description
Unlike natural disasters that threaten facilities or infrastructure, a pandemic principally affects the availability of people. Best-practice guidance from NIST and DHS recommends identifying minimum staffing levels and ensuring that multiple, qualified individuals can perform each mission-critical task. Cross-training staff (and documenting procedures) mitigates the risk that illness will remove all individuals who know how to run or restore key systems. Arranging a hot site, installing generators, or changing backup cycles help with physical or data loss scenarios but do little to address the primary pandemic risk: human-resource shortages.
Ask Bash
Why is cross-training personnel prioritized for a pandemic scenario?
What is the difference between cross-training and using a hot site during a pandemic?
What are NIST and DHS guidelines regarding pandemic preparedness in IT operations?
Your company hosts development and production microservices on Amazon EC2 in a single /16 subnet of one VPC with shared security groups, letting developers reach production databases. You need strong logical isolation between the environments while still allowing limited CI/CD ports from development into production, with minimal cost and administration. Which approach best meets these requirements?
Create separate VPCs for development and production, connect them with a VPC peering connection, and use route tables and security groups to allow only the required CI/CD ports.
Keep all instances in the current subnet but assign distinct security groups to dev and prod and deny all inter-group traffic except the CI/CD ports.
Keep both environments in the same subnet but deploy AWS Network Firewall between them to filter all traffic except the CI/CD ports.
Move development instances to a new subnet within the existing VPC and attach a dedicated network ACL that blocks all traffic except the CI/CD ports.
Answer Description
Creating separate VPCs for development and production gives each environment an independent, non-overlapping IP space and its own routing and security boundaries, delivering the strongest form of logical segmentation at Layer 3. A VPC itself is free, and VPC peering has no hourly cost; you only pay for data that crosses the link. By attaching restrictive route-table entries and security-group rules to the peering connection, you can allow just the necessary CI/CD ports while denying all other traffic.
Placing dev and prod in different subnets with distinct network ACLs still leaves them in the same VPC, so misconfigurations (for example, overly permissive route tables) could expose prod resources, and stateless ACLs require duplicate in/out rules, increasing operational effort. Simply changing security groups keeps everything in one subnet, providing the weakest separation and a larger blast radius. Deploying AWS Network Firewall would achieve segmentation but adds additional service cost and management overhead that the scenario seeks to avoid.
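A boto3 sketch of that peering setup follows; the VPC, route-table, and security-group IDs are placeholders, and the CIDRs (10.0.0.0/16 for dev, 10.1.0.0/16 for prod) and CI/CD port 8080 are assumptions, since the question does not specify them.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is illustrative

# Peer the two VPCs (same-account, same-Region peering shown).
pcx = ec2.create_vpc_peering_connection(
    VpcId="vpc-dev0000000", PeerVpcId="vpc-prod000000")
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route only the prod CIDR from the dev route table (and vice versa).
ec2.create_route(RouteTableId="rtb-dev0000000",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)

# Prod security group: admit only the CI/CD port from the dev CIDR.
ec2.authorize_security_group_ingress(
    GroupId="sg-prod0000000",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
        "IpRanges": [{"CidrIp": "10.0.0.0/16"}],
    }],
)
```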
Ask Bash
What is a VPC and why is it important in this scenario?
Why are security groups and network ACLs not sufficient to isolate environments?
What is Layer 3 segmentation and why is it preferred for network isolation?
Your company must archive 500 TB of research data for at least 10 years to satisfy regulatory requirements. The data is almost never accessed, yet auditors occasionally demand a small subset and expect it to be available within 15 minutes. Management wants the lowest possible storage cost and prefers a fully managed AWS solution with no tape infrastructure to maintain. Which Amazon S3 storage option best meets these needs?
Enable the Glacier Instant Retrieval tier in S3 Intelligent-Tiering for immediate access to archived objects.
Store the objects in Amazon S3 Glacier Flexible Retrieval and use Expedited retrievals when auditors request data.
Keep the data in Amazon S3 Standard-Infrequent Access to guarantee rapid access without retrieval fees.
Place the data in Amazon S3 Glacier Deep Archive to minimize storage cost.
Answer Description
Amazon S3 Glacier Flexible Retrieval has the lowest per-GB storage cost of the Glacier classes apart from Deep Archive, and it supports Expedited retrievals that return individual objects in 1-5 minutes, well under the 15-minute service-level goal. S3 Glacier Deep Archive is cheaper per GB but offers only bulk and standard retrievals that take hours, so it fails the time requirement. S3 Standard-IA and S3 Glacier Instant Retrieval provide much faster access, but both cost significantly more per GB than Glacier Flexible Retrieval. Therefore, storing the data directly in S3 Glacier Flexible Retrieval and using Expedited retrievals when audits occur is the most cost-effective solution that satisfies the retrieval-time constraint without requiring on-premises tape management.
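In boto3 terms the chosen design is just two calls: write objects straight into the Flexible Retrieval class (API storage-class name GLACIER) and request an Expedited restore when auditors ask. Bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Archive directly into Glacier Flexible Retrieval; no lifecycle wait needed.
s3.put_object(Bucket="example-research-archive",
              Key="study-042/results.csv",
              Body=b"...",
              StorageClass="GLACIER")

# Audit request: Expedited retrievals typically complete in 1-5 minutes.
s3.restore_object(
    Bucket="example-research-archive",
    Key="study-042/results.csv",
    RestoreRequest={
        "Days": 2,   # how long the temporary restored copy remains available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```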
Ask Bash
What is Amazon S3 Glacier Flexible Retrieval?
How does Expedited retrieval work in Glacier Flexible Retrieval?
Why is S3 Glacier Deep Archive not suitable for this use case?
During a post-incident investigation, a security analyst reviews CloudTrail logs, EBS snapshots, and network packet captures related to a suspected data exfiltration from an Amazon S3 bucket. She must deliver a written forensic report to executive management and outside counsel. According to accepted digital forensics practice for presenting objective findings, which approach best ensures the report's conclusions remain defensible and free of bias?
Remove most technical terminology so non-technical stakeholders can easily read the document, even if some precision is lost.
State that the activity was performed by the primary suspect because their IAM user appeared most frequently in the logs.
Cite each observation with its corresponding log entry, timestamp, and hash value, and avoid including unverified opinions or speculation.
Begin the report with the analyst's expert opinions and recommended countermeasures, followed by supporting evidence in an appendix.
Answer Description
Forensic reporting standards such as NIST SP 800-86 and ISO/IEC 27037 stress that conclusions must be based solely on verifiable facts. The analyst should reference specific evidence (log entries, timestamps, hash values) and clearly separate these factual observations from any interpretations. Attributing intent without corroborating proof, front-loading opinions, or oversimplifying by stripping necessary technical detail may introduce bias or reduce accuracy, undermining the report's objectivity and legal defensibility.
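One habit that keeps reports factual is generating each citation mechanically. Below is a small Python sketch; the file name and observation text are invented for illustration.

```python
import hashlib
from datetime import datetime, timezone

def evidence_line(path: str, observation: str) -> str:
    # Tie an observation to a specific artifact: file, UTC timestamp,
    # and SHA-256 digest, with no interpretation added.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return f"{stamp} | {path} | sha256:{digest} | {observation}"

# Stand-in artifact so the sketch runs end to end; in practice this would be
# an exported CloudTrail log file hashed at collection time.
with open("cloudtrail-sample.json", "w") as f:
    f.write('{"eventName": "GetObject"}')

print(evidence_line("cloudtrail-sample.json",
                    "s3:GetObject calls from the IAM user observed at 02:14 UTC"))
```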
Ask Bash
What is CloudTrail and how is it used in digital forensics?
What role do EBS snapshots play in forensic investigations?
Why is bias avoidance important in forensic reporting?
A healthcare provider is deploying a serverless application on AWS that receives patients' vital-sign data from mobile devices, stores the records in Amazon S3, and invokes AWS Lambda functions for analytics. To comply with HIPAA, the team must minimize exposure of raw PHI, ensure encryption in transit and at rest, and use keys that rotate automatically. Which approach best meets these requirements?
Encrypt data in the mobile app with a hard-coded AES key before upload, disable encryption in Amazon S3, and transmit over HTTPS.
Send data over an IPsec VPN without TLS, store records in Amazon S3 Glacier Deep Archive without encryption, and restrict access using bucket policies only.
Require TLS 1.2 for all API requests, configure Amazon S3 server-side encryption with S3-managed AES-256 keys (SSE-S3), and rely on Amazon's default key rotation.
Enable mutual TLS on Amazon API Gateway, accept only client-authenticated sessions, decrypt the payload in AWS Lambda, then store it in Amazon S3 encrypted with a customer-managed AWS KMS key that has automatic rotation enabled.
Answer Description
Mutual TLS at Amazon API Gateway authenticates each calling device and encrypts the session, providing strong protection for PHI in transit. Decrypting the payload only inside a private Lambda function and immediately re-encrypting it with a customer-managed AWS KMS key (SSE-KMS) limits plaintext exposure to trusted code while giving the organization control over the key, detailed CloudTrail logging, and the ability to enable automatic annual key rotation. The other options fail to meet one or more requirements: relying on SSE-S3 does not give customer key control, hard-coding client-side keys impedes secure rotation, and using an IPsec VPN without TLS or encryption at rest would violate HIPAA safeguards.
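A sketch of the Lambda half of that design, assuming API Gateway has already enforced mutual TLS in front of it; the bucket name, key ARN, and payload shape are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-phi-bucket"                                    # placeholder
KMS_KEY_ARN = "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE"   # placeholder

def handler(event, context):
    # The payload arrived over the client-authenticated TLS session;
    # plaintext PHI exists only in this function's memory.
    record = json.loads(event["body"])
    s3.put_object(
        Bucket=BUCKET,
        Key=f"vitals/{record['patient_id']}.json",
        Body=json.dumps(record).encode(),
        ServerSideEncryption="aws:kms",     # re-encrypt at rest immediately
        SSEKMSKeyId=KMS_KEY_ARN,            # customer-managed key, rotation on
    )
    return {"statusCode": 200, "body": "stored"}
```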
Ask Bash
What is AWS Lambda and how does it enhance serverless applications?
How does AWS KMS key rotation ensure data security?
Why is mutual TLS important for protecting PHI in transit?
A security administrator needs to create a firewall rule to permit internal application servers to deliver email directly to external mail exchangers on the Internet. Which TCP destination port should be opened to allow this Simple Mail Transfer Protocol (SMTP) traffic?
TCP port 110
TCP port 25
TCP port 53
TCP port 143
Answer Description
SMTP uses TCP port 25 for transferring email between mail transfer agents on the Internet. Opening this port in the outbound firewall rule allows the organization's application servers to establish SMTP sessions with external mail servers. POP3 (port 110) and IMAP (port 143) are used by clients retrieving mail, not by servers sending it. DNS typically uses port 53 and is unrelated to SMTP message delivery.
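For reference, direct MTA-to-MX delivery from Python looks like the sketch below; the hostnames and addresses are placeholders, and it assumes the destination mail exchanger accepts connections on port 25 (client submission to a relay would use port 587 instead).

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "app@example.com"
msg["To"] = "billing@partner.example"
msg["Subject"] = "Invoice 1234"
msg.set_content("Monthly invoice attached.")

# TCP 25 is the port the outbound firewall rule must permit for
# server-to-server SMTP delivery.
with smtplib.SMTP("mx.partner.example", port=25, timeout=10) as smtp:
    smtp.send_message(msg)
```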
Ask Bash
Why does SMTP use TCP port 25?
What is the purpose of POP3 and IMAP in email communication?
Why is DNS not involved in SMTP message delivery?
Your company's disaster-recovery team is informed that a strong earthquake has made the primary data center structurally unsafe and power is expected to be out for several days. A formal disaster declaration has been issued. According to business-continuity best practices for natural-disaster response, which action should be taken first to keep mission-critical applications available?
Wait for government structural engineers to declare the primary facility safe before executing any recovery steps.
Initiate failover to the company's fully operational hot site located in a different seismic zone.
Begin a full data restore from the most recent off-site tape backup to replacement hardware at the damaged facility.
Suspend all outbound internet traffic from the primary site to prevent possible data exfiltration during the outage.
Answer Description
Once management declares a disaster, the business-continuity plan moves from preparation into the activation and relocation phase. The highest priority is to restore availability of critical services by transferring processing to the predefined alternate site that is already equipped and ready, typically a hot site in another geographic region. Beginning restorations at the damaged facility delays recovery and may be impossible while utilities are offline. Halting network connectivity or waiting for structural clearance does nothing to maintain service availability, so neither is an appropriate first step under the BCP.
Ask Bash
What is a hot site in the context of disaster recovery?
Why is initiating failover to a hot site prioritized over restoring data at the damaged facility?
What is the role of a business continuity plan (BCP) during a disaster declaration?
Your organization's policy mandates that all payroll data be encrypted at rest. Unfortunately, the legacy UNIX server that hosts the payroll database cannot support any modern filesystem or database-level encryption, and a platform upgrade is at least six months away. As the security practitioner, which action represents the most appropriate compensating control to meet the encryption-at-rest requirement while the legacy system remains in service?
Integrate an approved cryptographic library into the payroll application to encrypt sensitive records before they are written to disk.
Place the legacy payroll server in an isolated VLAN protected by an additional firewall that only allows traffic from HR workstations.
Schedule nightly full backups of the payroll server to encrypted tapes that are stored in an off-site vault.
Increase password complexity requirements and enforce a 90-day rotation policy for all payroll system user accounts.
Answer Description
A compensating control must deliver security that is equivalent to, or stronger than, the original requirement when the prescribed control cannot be implemented. Because the legacy operating system cannot perform native filesystem or transparent database encryption, the most effective alternative is to modify the application so it encrypts sensitive payroll fields before they are written to disk. This ensures the data is actually stored in ciphertext, satisfying the policy's encryption-at-rest mandate. Simply restricting network access, enforcing stronger passwords, or backing up to encrypted media may reduce other risks, but none of those options guarantee that the data residing on the legacy server itself is encrypted, so they do not fully meet the stated requirement.
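A minimal sketch of the application-side control using the third-party cryptography package; the record format is invented, and sourcing the key from an HSM or vault (never a file beside the data) is the part that matters most in practice.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in production, fetch from an HSM or vault
cipher = Fernet(key)

record = b"employee=1042;salary=78000"         # sample payroll field
with open("payroll.dat", "wb") as f:
    f.write(cipher.encrypt(record))            # only ciphertext reaches disk

with open("payroll.dat", "rb") as f:
    assert cipher.decrypt(f.read()) == record  # application decrypts on read
```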
Ask Bash
What is a cryptographic library?
Why is encryption at rest critical for sensitive data?
What does a compensating control mean in security?
An organization is deploying a new WPA2-Enterprise Wi-Fi network that must provide the strongest possible mutual authentication while preventing offline password-cracking attacks. All corporate laptops can be provisioned with individual user and device certificates issued by the firm's internal PKI. Which Extensible Authentication Protocol (EAP) method should the security administrator configure on the RADIUS server to best satisfy these requirements?
EAP-TLS
PEAP (EAP tunneled with TLS protecting MSCHAPv2)
EAP-FAST with Protected Access Credentials (PAC)
EAP-MD5 challenge
Answer Description
EAP-TLS uses X.509 certificates on both the supplicant and the authentication server, creating a mutually authenticated TLS tunnel before any user credentials are exchanged. Because authentication relies on asymmetric keys stored in the certificates, no password-based challenge is exposed, eliminating the possibility of offline password-cracking attacks. PEAP and EAP-FAST rely on tunneled password methods and only require a server certificate, providing less assurance. EAP-MD5 offers only one-way authentication and transmits a hash that is vulnerable to dictionary attacks, making it unsuitable for secure wireless deployments.
Ask Bash
What is EAP-TLS and how does it work?
What is a PKI, and how does it relate to EAP-TLS?
Why is EAP-TLS more secure than other EAP methods like PEAP or EAP-FAST?
A company runs a payroll web app on an on-prem Linux host listening on TCP 8443; the same server also hosts a public site on TCP 80. Policy allows only the HR subnet 192.168.10.0/24 to reach payroll, while any internal subnet may view the public site. No extra hardware or third-party software may be added. Which method best enforces this policy with least privilege?
Require users to authenticate with client certificates when accessing the payroll URL over HTTPS on port 8443.
Set discretionary file permissions so only HR group members can read payroll files while leaving all network ports open.
Place the HR subnet in its own VLAN and configure inter-VLAN routing to block other subnets from reaching TCP 8443 on the server.
Create host-based firewall ACL rules that allow TCP 8443 only from 192.168.10.0/24, allow TCP 80 from all internal networks, and drop all other inbound traffic.
Answer Description
An Access Control List (ACL) applied through the native host-based firewall (such as iptables or nftables on Linux) allows restriction of inbound traffic at the network layer. By explicitly permitting TCP 8443 only from 192.168.10.0/24, permitting TCP 80 from all internal networks, and dropping all other unsolicited traffic, the server enforces the required segmentation while honoring least privilege. VLAN re-architecture or hardware firewalls add components the constraint disallows. Client certificates or file permissions control authentication or data access after the connection but do not block unwanted network connections, so they do not satisfy the policy.
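Scripted against nftables, the ACL might look like the sketch below. It assumes a modern Linux host with nft available, root privileges, and 192.168.0.0/16 as the internal range, which the question does not actually state.

```python
import subprocess

# Policy: TCP 8443 from the HR subnet only, TCP 80 from any internal
# network, drop all other unsolicited inbound traffic.
RULES = [
    "add table inet payroll",
    "add chain inet payroll input { type filter hook input priority 0 ; policy drop ; }",
    "add rule inet payroll input ct state established,related accept",
    "add rule inet payroll input ip saddr 192.168.10.0/24 tcp dport 8443 accept",
    "add rule inet payroll input ip saddr 192.168.0.0/16 tcp dport 80 accept",
]

for rule in RULES:
    subprocess.run(["nft"] + rule.split(), check=True)
```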
Ask Bash
What is a host-based firewall in Linux, and how does it enforce ACL rules?
What is TCP 8443 used for, and why does this port need to be restricted to the HR subnet?
What are the differences between iptables and nftables for creating ACL rules?
A healthcare company is shortlisting a third-party SaaS provider that runs entirely on AWS. Before signing the service-level agreement, the organization's compliance team must independently retrieve AWS's latest SOC 2 Type II report and ISO 27001 certificate to confirm that the cloud infrastructure satisfies regulatory auditing requirements. Which AWS service or feature most efficiently provides auditors with self-service access to these third-party assessment reports?
Use AWS Artifact to download the required SOC 2 and ISO 27001 reports.
Activate AWS Security Hub to provide centralized compliance findings.
Enable AWS CloudTrail and share the account's event logs with the auditors.
Configure AWS Config rules to generate a compliance summary for the auditors.
Answer Description
AWS Artifact is the AWS self-service portal for on-demand access to AWS compliance documentation, including SOC 1/SOC 2 reports, ISO 27001 certifications, and other third-party audit attestations. By downloading the reports directly from AWS Artifact, the company's auditors can verify that the cloud provider's controls meet regulatory standards without relying on the SaaS vendor.
- AWS CloudTrail supplies account-level API activity logs, not independent compliance attestations.
- AWS Config records and evaluates resource configurations but does not provide external audit reports.
- AWS Security Hub aggregates security findings from multiple sources but does not distribute AWS's formal compliance certifications.
Therefore, AWS Artifact is the appropriate choice for obtaining third-party audit documentation.
Ask Bash
What is AWS Artifact?
How does SOC 2 Type II differ from SOC 1 or SOC 2 Type I?
What does ISO 27001 certification signify?
A healthcare firm runs a legacy clinical application on Amazon EC2 that only supports TLS 1.0. Corporate policy mandates that all external connections use TLS 1.2 or newer. Because the vendor patch will not arrive before the upcoming compliance audit, the security engineer must implement a compensating control. Which solution best meets the requirement while allowing the application to remain unchanged?
Enable EBS encryption on the instance's volumes and rotate the KMS key monthly to satisfy encryption requirements.
Apply an IAM policy that blocks the legacy instance from initiating outbound network connections except to its database.
Deploy AWS Network Firewall ahead of the instance and create a rule that drops any packets not using TLS 1.2.
Place an Application Load Balancer in front of the instance, enforce a TLS 1.2-only security policy on the listener, and re-encrypt traffic to the backend with TLS 1.0.
Answer Description
A compensating control provides an alternative safeguard when the preferred control (upgrading the application to support TLS 1.2) is not immediately feasible. Terminating incoming sessions on an Application Load Balancer that is configured with a TLS 1.2-only security policy ensures every client negotiates the required protocol. The load balancer can then re-encrypt traffic to the backend with TLS 1.0, allowing the legacy application to operate unchanged while satisfying the policy for external connections.
AWS Network Firewall cannot reliably drop traffic based on the negotiated TLS version because the TLS handshake is encrypted after the ClientHello, and version filtering is not a supported feature. Encrypting EBS volumes protects data at rest and does nothing for in-transit requirements. Restricting the instance's outbound access addresses egress control, not the mandated minimum TLS version for inbound client sessions. Therefore, using an Application Load Balancer for TLS 1.2 termination is the appropriate compensating control.
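A boto3 sketch of that listener; the ARNs are placeholders, while ELBSecurityPolicy-TLS-1-2-2017-01 is a real AWS-managed policy that refuses anything below TLS 1.2.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/legacy/PLACEHOLDER",
    Protocol="HTTPS",
    Port=443,
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",   # clients must speak TLS 1.2+
    Certificates=[{"CertificateArn": "arn:aws:acm:REGION:ACCOUNT:certificate/PLACEHOLDER"}],
    DefaultActions=[{
        "Type": "forward",
        # The backend target group re-encrypts to the instance, where the
        # legacy TLS version is still negotiated out of clients' sight.
        "TargetGroupArn": "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/legacy/PLACEHOLDER",
    }],
)
```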
Ask Bash
What is TLS and why is it important?
How does an Application Load Balancer (ALB) enforce TLS policies?
What are compensating controls in cybersecurity?
An e-commerce company is building a small on-premises edge facility to hold servers that replicate critical data from its AWS environment. Compliance policy requires that the server cage must block tailgating so that only one authenticated person can enter or leave at a time, with credentials validated at both doors. Which physical security control best satisfies this requirement?
Mount a biometric time clock at the main entrance to record when staff arrive and depart.
Require all visitors to sign a physical logbook and wear color-coded visitor badges.
Install a two-door mantrap with access readers controlling each doorway.
Deploy motion-activated CCTV cameras covering the server cage interior and entrances.
Answer Description
A mantrap is a small vestibule with two interlocking doors. Each door is controlled by an access mechanism (such as a badge reader or biometric scanner), and only one door can be unlocked at any moment. An individual must authenticate to open the first door, step inside, let it close, then authenticate again to open the second door. Because the space holds only one person at a time and the second door will not open until the first is secured, mantraps are specifically designed to prevent piggybacking or tailgating into sensitive areas.
Closed-circuit television (CCTV) provides monitoring but does not physically stop unauthorized entry. A biometric time clock records attendance but does not control access or prevent multiple people from entering together. Visitor logbooks and badges establish accountability but rely on human enforcement and do not enforce single-person entry. Therefore, the mantrap is the only option that directly meets the stated requirement.
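The interlock property is simple to express in code. A toy Python state machine follows; the badge IDs are invented, and real door controllers add sensors, timers, and alarms.

```python
class Mantrap:
    def __init__(self):
        self.outer_locked = True
        self.inner_locked = True

    def _authenticated(self, badge: str) -> bool:
        return badge in {"badge-001", "badge-002"}   # stand-in for a reader

    def open_outer(self, badge: str) -> bool:
        # Outer door releases only while the inner door is secured.
        if self.inner_locked and self._authenticated(badge):
            self.outer_locked = False
            return True
        return False

    def open_inner(self, badge: str) -> bool:
        # Second authentication, and only after the outer door relocks.
        if self.outer_locked and self._authenticated(badge):
            self.inner_locked = False
            return True
        return False

trap = Mantrap()
assert trap.open_outer("badge-001")       # authenticate at door 1
assert not trap.open_inner("badge-001")   # refused while door 1 is open
trap.outer_locked = True                  # door 1 closes behind the person
assert trap.open_inner("badge-001")       # authenticate again at door 2
```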
Ask Bash
How does a two-door mantrap work in physical security?
What is the difference between tailgating and piggybacking?
Why doesn’t CCTV monitoring prevent tailgating or piggybacking?
A defense contractor must standardize permissions for project files on multiple Windows and Linux servers. Access must follow data classification labels (Confidential, Secret, Top-Secret) tied to personnel clearances. Local administrators cannot create exceptions, and permissions must persist when a file is copied within the domain. Which access control approach BEST meets these needs?
Apply Role-Based Access Control by assigning project roles and linking them to shared folder permissions.
Implement a centrally managed Mandatory Access Control system that assigns fixed classification labels to files and clearances to users.
Deploy an Attribute-Based Access Control solution that evaluates user claims and file tags at run time.
Use Discretionary Access Control with inherited Access Control Lists that mirror the classification hierarchy.
Answer Description
Mandatory Access Control (MAC) enforces system-wide policies defined and maintained centrally by a security authority, not by resource owners or local administrators. Objects receive fixed classification labels and subjects receive matching clearance labels; the operating system evaluates every request against these labels, so the rules remain intact even when data is moved.
Discretionary Access Control allows owners or administrators to change ACLs, violating the "no exceptions" requirement.
Role-Based Access Control ties permissions to job roles, but administrators can still alter role memberships or object permissions.
Attribute-Based Access Control is flexible but, like RBAC, relies on modifiable policies and is not designed specifically to enforce unchangeable classification labels.
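The heart of MAC is a fixed dominance check that neither file owners nor local administrators can override. A minimal Python illustration:

```python
LEVELS = {"Confidential": 1, "Secret": 2, "Top-Secret": 3}

def may_read(clearance: str, classification: str) -> bool:
    # Bell-LaPadula "no read up": the subject's clearance must dominate
    # (equal or exceed) the object's classification label.
    return LEVELS[clearance] >= LEVELS[classification]

print(may_read("Secret", "Confidential"))  # True
print(may_read("Secret", "Top-Secret"))    # False: no local exception exists
```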
Ask Bash
What is Mandatory Access Control (MAC)?
How does MAC differ from Discretionary Access Control (DAC)?
Why doesn’t Role-Based Access Control (RBAC) work for enforcing fixed data classifications?
An enterprise wants to stop staff from uploading credit-card data to unauthorized cloud storage over HTTPS. The network already has a TLS-terminating proxy, a next-gen firewall, and a SPAN feed to a passive IDS. When adding a network-based DLP that must block violations in real time, which requirement is MOST critical for accurate detection and prevention?
Integrate the DLP with the organization's directory service to apply user-based policies before any decryption is performed.
Ensure outbound TLS sessions are decrypted by an inline proxy or firewall and the clear-text traffic is passed to the DLP engine for inspection.
Feed NetFlow or IPFIX records from edge routers into the DLP so it can identify large data transfers in near real time.
Attach the DLP sensor to the existing SPAN port so it can analyze mirrored (but still encrypted) traffic without affecting latency.
Answer Description
Network-based DLP appliances identify sensitive data by examining packet payloads. When traffic is protected by TLS, those payloads are encrypted and unreadable to the DLP unless decryption occurs before inspection. Forwarding clear-text streams from an inline SSL/TLS-terminating device (such as a proxy or firewall performing SSL inspection) lets the DLP apply its content-matching policies and, if necessary, actively block the session. Merely mirroring encrypted traffic to a passive sensor, relying on flow records, or integrating with directory services does not provide the content visibility required to discover payment data in motion or to prevent its egress.
Ask Bash
Why is TLS decryption necessary for a network-based DLP to inspect data?
What is the role of an SSL/TLS-terminating device in a security network?
Why are alternatives like SPAN or NetFlow insufficient for real-time DLP enforcement?
Your organization hires independent software testers for a 3-month project. They must compile proprietary code stored in the internal network. Security policy states no source code may reside on non-corporate endpoints. You must provide quick remote access with minimal client footprint while limiting exposure if laptops are infected. Which approach best meets these requirements?
Establish site-to-site VPNs to each contractor's home network and restrict access with firewall rules
Install a full-tunnel client-based IPsec VPN that places contractors on the development VLAN
Provision a cloud-hosted virtual desktop infrastructure (VDI) accessible through an HTML5 browser
Configure a clientless SSL VPN portal that allows file transfer to local drives but blocks other ports
Answer Description
A browser-based virtual desktop infrastructure (VDI) session renders the desktop inside the data center and transmits only display information to the contractor. Because files never leave the controlled environment, source code cannot be stored on personal devices, and malware on those devices is isolated from the production network. VDI pools can be rapidly provisioned and de-provisioned, meeting the short engagement timeline. In contrast, any form of VPN (client-based, clientless, or site-to-site) extends the corporate network to unmanaged endpoints or allows file downloads, increasing data-loss risk and complicating onboarding and off-boarding.
Ask Bash
What is Virtual Desktop Infrastructure (VDI)?
How does browser-based VDI enhance security compared to VPN solutions?
What are potential risks of using VPNs for remote contractor access?
Neat!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.