ISC2 Certified Cloud Security Professional (CCSP) Practice Test
Use the form below to configure your ISC2 Certified Cloud Security Professional (CCSP) Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 Certified Cloud Security Professional (CCSP) Information
About the ISC2 Certified Cloud Security Professional (CCSP) Exam
The Certified Cloud Security Professional (CCSP) certification from ISC2 is a globally recognized credential that validates an individual's advanced technical skills and knowledge to design, manage, and secure data, applications, and infrastructure in the cloud. Earning the CCSP demonstrates a professional's expertise in cloud security architecture, design, operations, and service orchestration. The latest version of the CCSP exam, updated in August 2024, consists of 125 multiple-choice questions that candidates have three hours to complete. To pass, a candidate must score at least 700 out of 1000 points. The exam questions are designed to be scenario-based, assessing a practitioner's ability to apply their knowledge in real-world situations.
Core Domains of the CCSP Exam
The CCSP exam is structured around six core domains, each with a specific weighting. These domains encompass the full spectrum of cloud security. The domains and their respective weights are: Cloud Concepts, Architecture and Design (17%), Cloud Data Security (20%), Cloud Platform & Infrastructure Security (17%), Cloud Application Security (17%), Cloud Security Operations (16%), and Legal, Risk and Compliance (13%). To be eligible for the exam, candidates generally need a minimum of five years of cumulative, full-time experience in Information Technology. This must include three years in cybersecurity and one year in one or more of the six CCSP domains.
The Value of Practice Exams in Preparation
Thorough preparation is key to success on the CCSP exam, and taking practice exams is a highly effective strategy. Practice tests help candidates to assess their knowledge, identify areas of weakness across the six domains, and become familiar with the question format and exam structure. By simulating the actual exam environment, practice questions also allow candidates to improve their time management skills and build confidence. Regularly reviewing mistakes made on practice tests provides an opportunity to revisit and reinforce challenging concepts, personalizing the study strategy for a more efficient and effective preparation process.

Free ISC2 Certified Cloud Security Professional (CCSP) Practice Test
- 20 Questions
- Unlimited time
- Cloud Concepts, Architecture and Design; Cloud Data Security; Cloud Platform & Infrastructure Security; Cloud Application Security; Cloud Security Operations; Legal, Risk and Compliance
Your organization runs a hybrid IaaS platform and has adopted an Ansible-based pipeline to provision and update virtual machines, containers, and supporting network objects. As part of meeting ISO/IEC 20000-1 requirements for configuration management, the cloud operations manager wants the team to be able to show, at any moment, which service assets exist and the exact versions of their configurations. Which control BEST satisfies this requirement?
Populate and maintain a configuration management database (CMDB) automatically from every pipeline run.
Enable object versioning on the cloud storage bucket that keeps deployment scripts.
Perform blue/green deployments for all application releases to avoid in-place changes.
Rely on auto-scaling policies so new instances inherit the latest approved build.
Answer Description
ISO/IEC 20000-1, like ITIL, calls for a controlled repository that records the relationships between service assets and their configuration items. A configuration management database (CMDB) that is continuously updated by the orchestration workflow becomes the single source of truth, allowing operators and auditors to query the current or historical state of any cloud resource and to trace changes. Auto-scaling policies, blue/green deployments, and object versioning provide useful operational capabilities, but none of them on their own maintains the complete, queryable inventory and relationship data that formal configuration management demands.
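For illustration, a sketch of how a pipeline step might feed the CMDB is shown below; the endpoint, payload fields, and authentication scheme are hypothetical and would follow the CMDB product's actual API.

```python
"""Hedged sketch: push configuration items to a CMDB after a pipeline run.

The endpoint, payload fields, and token handling are hypothetical; a real
integration would follow the CMDB vendor's API and the CI tooling in use.
"""
import requests

CMDB_URL = "https://cmdb.example.internal/api/configuration-items"  # hypothetical endpoint

def record_configuration_item(asset_id: str, asset_type: str, config_version: str,
                              relationships: list[str], api_token: str) -> None:
    """Register or update one configuration item and its relationships."""
    payload = {
        "assetId": asset_id,              # the VM, container, or network object ID
        "assetType": asset_type,          # "vm", "container", "security-group", ...
        "configVersion": config_version,  # e.g. the Git commit of the Ansible code applied
        "relatedItems": relationships,    # links to other CIs (host, service, subnet)
    }
    resp = requests.post(
        CMDB_URL,
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    resp.raise_for_status()  # fail the pipeline step if the CMDB was not updated

# Example call at the end of a provisioning job:
# record_configuration_item("web-42", "vm", "git:a1b2c3d", ["svc-billing", "subnet-10"], token)
```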
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a CMDB?
How does ISO/IEC 20000-1 relate to configuration management?
What is an Ansible-based pipeline?
A cloud-based CRM platform is being developed for multiple tenants and will be deployed on a managed Kubernetes cluster. During the test phase, the security architect insists on adding abuse case testing to verify that one tenant cannot deliberately exhaust shared resources. Which activity is the best example of an abuse case test in this context?
Launch automated scripts that issue a high volume of API requests to intentionally exceed the tenant's rate limit and monitor throttling and logging behavior.
Run unit tests to confirm each microservice returns correct responses to valid customer data submissions.
Review user stories and acceptance criteria to ensure all approved business features are implemented before release.
Perform static code analysis to detect potential SQL injection flaws in database access modules.
Answer Description
Abuse case testing focuses on how a malicious or careless actor could intentionally misuse the application and the cloud environment. Flooding the public API with scripted calls that exceed the documented rate limits tries to force the application to over-consume backend capacity, potentially starving other tenants. Observing whether the service throttles, generates alerts, and logs the behavior validates protections against this abuse scenario. Unit tests of valid inputs, static code analysis for coding flaws, and reviewing acceptance criteria all contribute to quality or secure development, but they target normal usage or code issues rather than intentional misuse aimed at resource exhaustion.
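A minimal sketch of such a test is shown below, assuming a hypothetical tenant API endpoint and a documented per-minute rate limit; the goal is to confirm that the platform throttles (HTTP 429) and records the excess traffic rather than letting it consume shared capacity.

```python
"""Hedged sketch of an abuse case test: deliberately exceed a tenant rate limit.

The endpoint, header names, and limit are assumptions; adapt to the API under test.
"""
import requests

API_URL = "https://api.example-crm.test/v1/contacts"   # hypothetical tenant endpoint
RATE_LIMIT_PER_MINUTE = 100                             # documented limit (assumed)

def flood_and_check(token: str) -> None:
    throttled = 0
    for _ in range(RATE_LIMIT_PER_MINUTE * 3):          # intentionally exceed the limit
        resp = requests.get(API_URL, headers={"Authorization": f"Bearer {token}"}, timeout=5)
        if resp.status_code == 429:                     # HTTP 429 Too Many Requests
            throttled += 1
    # The abuse case "passes" if the platform throttled the excess calls;
    # separately, confirm alerts and log entries were generated for this tenant.
    assert throttled > 0, "No throttling observed; possible resource-exhaustion exposure"

# flood_and_check("tenant-a-test-token")
```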
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is abuse case testing?
What are Kubernetes rate limits, and why are they important?
How does API request flooding simulate abuse scenarios?
Your organization is negotiating a cloud outsourcing deal for its customer-facing SaaS platform. Business leaders require at least 99.95% monthly service availability and want financial credits if the target is missed. From a service level management perspective, which item must be expressly included in the written service level agreement (SLA) to ensure the availability commitment can be enforced and audited over time?
A statement that the provider maintains certification to ISO/IEC 27017 for cloud security controls
A clause mandating that all customer data remains within specified geographic regions
A schedule requiring quarterly external penetration tests of the provider's environment
A precise definition of the measurement window and calculation method used to determine the 99.95% availability figure
Answer Description
Service level management focuses on defining, negotiating, monitoring, and reporting the level of IT services delivered. For an availability commitment to be meaningful, the SLA must specify exactly how availability is measured: the calculation formula, the length of the measurement period (for example, a calendar month), the clock that is used, what events count as downtime, and any planned-maintenance exclusions. Without that definition, neither party can objectively verify whether 99.95% was achieved or determine if service credits are due. While security certifications, data-location clauses, and penetration-testing schedules are important, they relate to compliance and security management rather than the core availability metric that service level management tracks and reports.
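As a simple illustration of why the calculation method matters, the sketch below computes monthly availability the way an SLA might define it, treating announced maintenance as excluded time; whether such exclusions apply is exactly the kind of detail the SLA has to state.

```python
"""Hedged sketch: monthly availability calculation as an SLA might define it."""

def monthly_availability(minutes_in_month: int,
                         unplanned_downtime_min: float,
                         planned_maintenance_min: float) -> float:
    """Availability % = (agreed service minutes - unplanned downtime) / agreed service minutes."""
    agreed_minutes = minutes_in_month - planned_maintenance_min  # exclusion must be stated in the SLA
    return 100.0 * (agreed_minutes - unplanned_downtime_min) / agreed_minutes

# 30-day month, 120 min of announced maintenance, 20 min of unplanned outage:
availability = monthly_availability(30 * 24 * 60, 20, 120)
print(f"{availability:.3f}%")                            # ~99.954%, so the 99.95% target is met
print("Credit due" if availability < 99.95 else "Target met")
```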
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the importance of specifying the measurement window in an SLA?
Why is a calculation method necessary for enforcing SLA availability targets?
How does service level management differ from compliance and security management?
What is the role of service-level credits in an SLA?
Your organization is refactoring a monolithic web shop into several microservices that will be deployed as containers on a managed Kubernetes cluster. Security wants the strongest possible runtime isolation so that, even if an attacker obtains root inside one container, they cannot escape to the node's kernel or gain access to other namespaces; the solution must not force developers to rewrite code. Which approach best meets this requirement?
Apply custom AppArmor profiles to every node to limit the system calls available to containers.
Create default-deny Kubernetes network policies between namespaces and only whitelist required service ports.
Enable Kubernetes PodSecurityPolicies to drop all privileged capabilities and enforce non-root containers.
Run each microservice with a hypervisor-isolated container runtime (e.g., Kata Containers) so every pod executes inside a lightweight VM.
Answer Description
Lightweight hypervisor-based runtimes such as Kata Containers launch each pod inside its own micro-VM. Because the guest kernel is separated from the host kernel by hardware virtualization boundaries, a compromise inside the container does not give the attacker direct access to the host or to containers in other namespaces.
PodSecurityPolicies and AppArmor profiles can remove Linux capabilities and restrict system calls, but the workload still shares the host kernel, so a kernel-level escape remains possible. Kubernetes network policies control East-West traffic but provide no protection against host breakout. Therefore, using a hypervisor-isolated container runtime offers the strongest containment with minimal application changes.
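On a cluster where a Kata-style runtime has already been installed, selecting it is usually just a matter of referencing a RuntimeClass from each pod spec, so application code is untouched. The sketch below expresses the two manifests as plain Python dictionaries; the handler name "kata" and the image are assumptions that depend on how the runtime was deployed.

```python
"""Hedged sketch: manifests that select a hypervisor-isolated runtime via RuntimeClass.

The handler name "kata" and the container image are assumptions; they depend on how
the runtime (for example Kata Containers) was installed on the worker nodes.
"""
import json

runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    "handler": "kata",                      # must match the CRI runtime handler on the nodes
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-service"},
    "spec": {
        "runtimeClassName": "kata",         # each pod now runs inside its own micro-VM
        "containers": [{"name": "orders", "image": "registry.example.com/orders:1.4"}],
    },
}

print(json.dumps(runtime_class, indent=2))
print(json.dumps(pod, indent=2))
```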
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Kata Containers and how do they provide isolation?
How does hardware virtualization protect the kernel in the context of Kubernetes?
What are the limitations of PodSecurityPolicies and AppArmor profiles compared to hypervisor isolation?
Your organization needs to copy its on-premises customer database that contains names, email addresses, and credit-card PANs into a public-cloud development subscription. Developers require data that looks and behaves like production values for functional testing, but compliance demands that the cloud copy never expose real customer identities and that the transformation be irreversible. Which data-obfuscation approach BEST satisfies these requirements?
Substitute each sensitive field with realistic, format-preserving fictional values through static data masking before export.
Replace sensitive columns with NULL values during extract to the cloud.
Encrypt sensitive fields using format-preserving encryption keys stored on-premises for later decryption.
Apply vault-based tokenization so developers can detokenize data on demand.
Answer Description
Static data masking that applies format-preserving substitution overwrites every sensitive value with a fictitious value that retains the original data type, length, and pattern: for example, replacing a 16-digit credit-card PAN with another valid-looking 16-digit number. The masking engine may use an internal secret or seed to generate the replacement values, but because it keeps no lookup table and the key or seed can be destroyed after masking, the cloud copy cannot be reverted to the original data, satisfying the requirement for irreversibility while still giving developers realistic test data.
Format-preserving encryption, on the other hand, is designed to be decrypted whenever the key is available, and vault-based tokenization always allows detokenization via the vault, so both violate the "never expose" mandate. Simply replacing the columns with NULL removes the realistic patterns that functional tests rely on, making it unsuitable for development use.
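A toy sketch of this kind of masking is shown below: it emits a random but Luhn-valid 16-digit surrogate and keeps no mapping, so the operation cannot be undone. Real masking tools add consistency across tables, referential integrity, and rules for names and e-mail addresses.

```python
"""Hedged sketch: irreversible, format-preserving masking of a 16-digit PAN."""
import secrets

def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit so the masked value still looks like a card number."""
    digits = [int(d) for d in partial]
    # Double every second digit from the right (the check digit will be appended).
    for i in range(len(digits) - 1, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return str((10 - sum(digits) % 10) % 10)

def mask_pan(_real_pan: str) -> str:
    """Ignore the real value entirely and emit a random, Luhn-valid 16-digit surrogate."""
    body = "".join(str(secrets.randbelow(10)) for _ in range(15))
    return body + luhn_check_digit(body)

masked = mask_pan("4111111111111111")
print(masked)   # same length and format as a PAN; no mapping is kept, so it cannot be reversed
```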
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is static data masking?
How does format-preserving substitution work in static data masking?
Why are encryption and tokenization unsuitable for irreversible obfuscation?
For a new analytics workload, your organization has migrated several Linux virtual machines to a public IaaS provider. All instances are deployed from the provider's hardened base image and use provider-managed, encrypted block storage. The contract states that the provider secures the physical facilities, hardware, and hypervisor, while the tenant is responsible for everything running inside each instance. Which risk remains primarily with your organization and therefore must be mitigated by your security team?
Unpatched software within the guest operating system enabling remote code execution
Large-scale distributed denial-of-service attacks against the provider's backbone network causing outages
Compromise of the cloud provider's disk encryption service exposing stored data in clear text
Hypervisor escape via side-channel attacks from other tenants on the same physical host
Answer Description
In the IaaS shared-responsibility model, the cloud provider protects the physical datacenter, network, and virtualization layer, including the hypervisor and managed storage services. However, the tenant is accountable for the security of the guest operating system, applications, and data within each instance. Failing to patch or harden the OS leaves software vulnerabilities that attackers can exploit for remote code execution; this risk lies squarely with the customer. Risks such as hypervisor side-channel attacks, failures of the provider's encryption service, or provider-level DDoS events are primarily mitigated by the cloud service provider under its contractual obligations and infrastructure controls, though customers may implement additional safeguards.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the shared-responsibility model in IaaS?
How can unpatched software in the guest operating system lead to remote code execution?
What measures can organizations take to secure guest operating systems in the cloud?
A retailer plans to migrate its cardholder database to an IaaS provider. To avoid rewriting decades-old billing programs that require a 16-digit numeric credit-card field, the architect proposes installing an on-premises tokenization server. The server will replace each primary account number (PAN) with a randomly generated 16-digit numeric token before the record is sent to the cloud, and the token vault will remain in the retailer's data center.
Which statement correctly describes the PCI DSS impact of this design?
Moving the token vault to the cloud would simplify key management and remove the provider's infrastructure from PCI DSS scope.
Tokenization works by applying irreversible salted hashing, so neither the retailer nor the cloud provider can ever map a token back to a PAN.
Tokenization can remove the cardholder data from the cloud storage, so the cloud's PCI DSS scope is greatly reduced, but the cloud must still be evaluated if it can impact the on-premises CDE.
Because the real PAN never leaves the premises, the cloud environment is automatically exempt from any PCI DSS requirements.
Answer Description
Because the cloud stores only random, format-preserving tokens that contain no exploitable PAN information, the cloud portion of the solution no longer handles cardholder data directly, significantly reducing its PCI DSS scope. However, PCI DSS still requires an assessment of any system, including the cloud environment, if it could influence the security of the on-premises cardholder data environment (CDE). The incorrect options either assume scope is eliminated entirely, misunderstand tokenization as hashing, or suggest moving the vault (which would keep the cloud fully in scope).
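A toy sketch of the vault-based design follows; the mapping table stays on-premises, only the random 16-digit surrogate is sent to the cloud, and the legacy billing format is preserved. A production token vault would add an encrypted mapping store, strict access control, and high-availability considerations.

```python
"""Hedged sketch: on-premises token vault issuing random, format-preserving tokens."""
import secrets

class TokenVault:
    """Keeps the PAN-to-token mapping on-premises; only tokens are sent to the cloud."""

    def __init__(self) -> None:
        self._pan_to_token: dict[str, str] = {}
        self._token_to_pan: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        if pan in self._pan_to_token:
            return self._pan_to_token[pan]            # stable token for repeat values
        while True:
            token = "".join(str(secrets.randbelow(10)) for _ in range(16))
            if token not in self._token_to_pan:       # avoid collisions
                break
        self._pan_to_token[pan] = token
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_pan[token]              # only possible inside the retailer's CDE

vault = TokenVault()
cloud_record = {"customer": "C-1001", "card": vault.tokenize("4111111111111111")}
print(cloud_record)                                   # the real PAN never leaves the premises
```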
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is tokenization in the context of PCI DSS?
Why does a cloud provider environment still require PCI DSS assessment in this design?
What is the difference between tokenization and hashing?
Your SaaS development team will store customer personally identifiable information (PII) in a multitenant database hosted on a public IaaS provider. Corporate policy states that cloud-provider personnel must be technically prevented from viewing customer data, while the application itself must retain full read/write capability. Which cryptographic design decision best satisfies this requirement with the least operational complexity?
Apply volume-level encryption on the virtual machine disks using provider-supplied keys
Enable the provider's server-side encryption service with provider-managed keys
Rely on TLS for all database connections and disable at-rest encryption to avoid key-management overhead
Encrypt data on the client before transmission using keys stored in an on-premises Hardware Security Module integrated with a cloud KMS
Answer Description
Encrypting data on the client side with keys that remain under the customer's exclusive control ensures that ciphertext, not plaintext, is delivered to the cloud service. Because the keys are generated and held in an on-premises Hardware Security Module (HSM) that integrates with a cloud key-management service through secure APIs, the cloud provider never gains access to either the plaintext data or the encryption keys, fulfilling the requirement to prevent provider personnel from reading the PII. Server-side or volume-level encryption with provider-managed keys still exposes the keys to the provider's control, and relying solely on TLS secures data in transit but leaves it unprotected at rest. Therefore, client-side encryption with customer-managed keys is the most appropriate choice with minimal additional operational burden beyond key management that the organization already performs on-premises.
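A rough sketch of the client-side pattern is shown below using the cryptography package's AES-GCM primitive; key handling is deliberately simplified, since in the described design the data-encryption key would be generated by, or wrapped under, the on-premises HSM rather than created in application code.

```python
"""Hedged sketch: encrypt PII on the client before it is written to the multitenant database.

Key management is simplified; in the described design the data-encryption key would be
protected by the on-premises HSM and never handed to the cloud provider.
"""
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_field(plaintext: str, key: bytes, aad: bytes) -> bytes:
    """AES-256-GCM: returns nonce || ciphertext; the cloud only ever sees this blob."""
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), aad)
    return nonce + ct

def decrypt_field(blob: bytes, key: bytes, aad: bytes) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, aad).decode()

dek = AESGCM.generate_key(bit_length=256)             # stand-in for an HSM-protected key
stored = encrypt_field("alice@example.com", dek, b"customer:1001")
print(decrypt_field(stored, dek, b"customer:1001"))   # the application keeps full read/write
```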
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an HSM and how does it integrate with a cloud KMS?
What is client-side encryption and how does it protect customer data in the cloud?
How does TLS differ from at-rest encryption, and why does TLS alone not meet the requirement?
Your multinational company is preparing a business case for migrating critical finance applications to a SaaS provider. Senior leadership has asked the risk team to deliver an assessment that expresses cloud-related threats and loss events in monetary terms so that cost-benefit trade-offs can be clearly understood. Which of the following risk management frameworks best satisfies this requirement?
Factor Analysis of Information Risk (FAIR)
Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)
ISO/IEC 31000 Enterprise Risk Management Guidelines
COSO Internal Control - Integrated Framework
Answer Description
Factor Analysis of Information Risk (FAIR) was created specifically to quantify information and technology risk in financial terms, providing estimates of probable loss magnitude and frequency. This aligns exactly with management's request for a money-based view of cloud threats.
OCTAVE offers a structured, primarily qualitative approach that focuses on organizational self-assessment rather than detailed financial modeling. ISO/IEC 31000 provides broad principles and a high-level process for risk management but leaves quantification methods to practitioners. The COSO Internal Control - Integrated Framework concentrates on internal control effectiveness and financial reporting assurance, not on calculating expected loss from specific IT threats. Therefore, FAIR is the only framework designed to translate information-risk scenarios into monetary values, making it the most suitable choice.
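To illustrate the idea of a money-based output (not FAIR's actual ontology or calibration process), the sketch below runs a crude Monte Carlo simulation over assumed loss-event frequency and loss-magnitude distributions and reports an annualized loss exposure figure.

```python
"""Hedged sketch of quantitative, FAIR-style output: annualized loss exposure in dollars.

The distributions and parameters are illustrative assumptions, not FAIR guidance.
"""
import random

def simulate_ale(trials: int = 10_000) -> float:
    random.seed(42)
    total = 0.0
    for _ in range(trials):
        events = random.randint(0, 3)                 # loss event frequency per year (assumed)
        # Loss magnitude per event: lognormal spread around roughly $250k (assumed).
        yearly_loss = sum(random.lognormvariate(12.4, 0.6) for _ in range(events))
        total += yearly_loss
    return total / trials

print(f"Estimated annualized loss exposure: ${simulate_ale():,.0f}")
```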
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the FAIR framework and how does it quantify risk?
How does FAIR compare to other frameworks like OCTAVE and ISO 31000?
Why is FAIR particularly suited for cloud-related risk assessments?
An organization is starting a cloud migration project to rebuild its legacy payroll application. The project manager stresses that early identification and mitigation of technical and security risks is the top priority, and she wants a life-cycle that delivers working software in repeated cycles, with each cycle beginning with a formal risk analysis before requirements and design proceed. Which SDLC methodology best satisfies these objectives?
Spiral model
V-model
Agile/Scrum
Waterfall model
Answer Description
The spiral model structures development as a series of iterations (or "spirals"). Each loop begins with objectives and alternatives, followed by a dedicated risk analysis phase. The results of that analysis drive prototyping, design, coding, and testing for that iteration, so high-risk items are addressed early and continuously. Waterfall and the V-model are linear, locking requirements up front and delaying risk discovery. Agile/Scrum is iterative, but it does not mandate formal risk analysis at the start of each cycle. Therefore, the risk-driven spiral model is the most suitable choice.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is the Spiral model considered risk-driven?
How does the Spiral model differ from the Waterfall model?
What types of projects benefit most from the Spiral model?
Your company has outsourced its ERP system to a multi-tenant SaaS provider. The provider mails you a current SOC 1 Type II report that details controls over its datacenter physical security, hypervisor management, and perimeter firewalls. The internal audit team must still gather evidence for the upcoming SOX assessment. Which control should the internal auditors plan to test themselves because it remains the customer's responsibility despite the external audit coverage?
Configuration of border firewalls protecting the SaaS platform
Badge-controlled access to the provider's colocation facility
Timely installation of operating-system patches on the CSP's virtualization hosts
Creation and de-provisioning of user accounts within the SaaS ERP modules
Answer Description
A SOC 1 Type II report describes the controls operated and tested at the service-provider level. Physical security of the datacenter, patch management of the virtualization hosts, and configuration of the perimeter firewalls are all provider-managed controls that have already been independently assessed in the CSP's report. Provisioning and de-provisioning user accounts inside the ERP application, however, is a logical access control exercised by the customer's own administrators. Because that activity directly affects the integrity of the customer's financial records and is not performed by the provider, it must be tested by the customer's internal auditors to satisfy SOX requirements. The other options are provider-side controls that can be relied upon through the external SOC report and therefore do not normally require additional internal testing.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a SOC 1 Type II report?
What is SOX compliance in relation to IT audits?
What is the difference between provider-managed controls and customer-managed controls?
A finance startup runs dozens of Linux containers in a managed Kubernetes cluster hosted by a public cloud provider. Management wants an extra layer of defense that will still be effective if an attacker achieves a container breakout at the application layer, but remains confined to the container's user space. The chosen control must explicitly limit which kernel functions the process inside each container can invoke, thereby reducing the blast radius of a compromise. Which hardening action BEST meets this goal?
Apply fine-grained seccomp and AppArmor profiles to every container to restrict available system calls and kernel capabilities.
Configure an admission controller that rejects any image pulled with the :latest tag.
Mount the host's Docker socket inside each pod so a security scanner can inspect running containers.
Place all worker nodes in an isolated private subnet with no inbound Internet access.
Answer Description
Applying seccomp and AppArmor (or SELinux) profiles to each container enforces a whitelist of allowed Linux system calls and capabilities. If an attacker escapes the application but remains inside the container, attempts to invoke disallowed kernel functions (such as loading kernel modules, changing network settings, or escalating privileges) will be blocked, reducing potential impact. Moving worker nodes to a private subnet reduces network exposure but does not constrain kernel interactions. Disallowing the :latest tag helps with image provenance yet offers no runtime syscall restriction. Mounting the Docker socket greatly increases risk because it grants containers control over the host daemon, the opposite of the desired effect.
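Concretely, a seccomp profile is a JSON allow-list of system calls that the runtime applies to the containerized process. The sketch below writes a deliberately short, illustrative profile; a real workload needs a much longer list, typically derived from the runtime's default profile.

```python
"""Hedged sketch: generate a minimal seccomp allow-list profile for a container.

The syscall list is illustrative only; real workloads need a much longer allow-list,
usually starting from the container runtime's default profile and tightening it.
"""
import json

ALLOWED_SYSCALLS = [
    "read", "write", "openat", "close", "fstat", "mmap", "brk",
    "futex", "exit_group", "epoll_wait", "accept4", "sendto", "recvfrom",
]

profile = {
    "defaultAction": "SCMP_ACT_ERRNO",          # deny anything not listed below
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [{"names": ALLOWED_SYSCALLS, "action": "SCMP_ACT_ALLOW"}],
}

with open("restricted-profile.json", "w") as fh:
    json.dump(profile, fh, indent=2)

# Reference the file from the pod's securityContext.seccompProfile
# (type: Localhost, localhostProfile pointing at this JSON file).
```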
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is seccomp and how does it work?
How does AppArmor enhance security compared to seccomp?
Why are :latest tags discouraged for container images?
Your company plans to load a customer billing table that contains full 16-digit primary account numbers (PANs) into a public cloud data-warehouse service so data scientists can execute ad-hoc SQL analytics. Corporate policy mandates that real PAN values must never leave the on-premises environment, yet analysts still need to run queries such as grouping by the first six digits (issuer BIN) and keep the field length unchanged. Which characteristic of tokenization, when compared with conventional encryption, makes it the most suitable control for this requirement?
Tokenization leverages homomorphic encryption so that full mathematical operations are performed on ciphertext without any performance penalty.
Tokens contain embedded cryptographic keys, eliminating the need for separate key management systems in the cloud.
Tokens can be generated to keep the original PAN length and selected digits visible while remaining non-mathematically reversible, allowing native analytics without exposing real data.
Tokenization primarily works by compressing sensitive fields, which lowers storage and bandwidth use while still allowing queries.
Answer Description
Tokenization replaces a sensitive value with a surrogate that is not derived through a mathematical algorithm, so it cannot be reversed without access to the token vault or mapping service. Because the organization defines the token format, it can mirror the original data's length and character set and even preserve portions of the value (for example, the first six and last four digits of a PAN). This lets existing applications and cloud data-warehouse functions (such as joins, sorting, or grouping by the preserved digits) operate without exposing the real PAN. Standard encryption, even when format-preserving, produces ciphertext that is still mathematically related to the plaintext and must be decrypted (or use more complex searchable encryption) before meaningful analytics can occur. Tokenization does not inherently provide compression, does not rely on homomorphic encryption, and still requires secure storage of token mappings; tokens themselves do not embed cryptographic key material.
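The sketch below shows the preserved-digits idea: the issuer BIN and last four digits are kept, only the middle is randomized, and nothing mathematical links the token to the PAN. In a real deployment the on-premises vault would store the mapping so the same PAN always yields the same token and detokenization stays inside the protected environment.

```python
"""Hedged sketch: token that preserves the issuer BIN and last four digits for analytics."""
import secrets

def bin_preserving_token(pan: str) -> str:
    """Keep the first 6 and last 4 digits, randomize the middle; no math relates token to PAN."""
    middle = "".join(str(secrets.randbelow(10)) for _ in range(len(pan) - 10))
    return pan[:6] + middle + pan[-4:]

token = bin_preserving_token("4111111111111234")
print(token)   # starts with 411111 and ends with 1234, so BIN-level grouping still works
```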
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
How is tokenization different from encryption?
What is a token vault, and why is it important?
Why does tokenization allow analytics on preserved digits while encryption does not?
A DevSecOps team is building a multitenant SaaS application in the public cloud using an iterative Agile SDLC. To avoid costly re-work, the security architect wants to introduce threat modeling at the earliest point when the system's architecture is sufficiently defined but before any code is written. According to a traditional SDLC, which phase should the team target for this activity?
Testing and validation
Requirements gathering and analysis
Design and architecture
Implementation and coding
Answer Description
Threat modeling provides the most value when it is performed once the system architecture has taken shape but before implementation begins, allowing security requirements and design changes to be incorporated with minimal cost. In a classic SDLC, this aligns with the design (or architecture) phase, which follows requirements analysis and precedes coding. Performing threat modeling later, during implementation or testing, uncovers issues after code has been written, leading to greater re-work, while performing it during initial requirements gathering is premature because architectural details needed to model threats are not yet available.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is threat modeling important in the design phase?
What is the difference between Agile SDLC and traditional SDLC in terms of threat modeling?
What tools or techniques are commonly used for threat modeling in the design phase?
During a quarterly security review, your organization's SOC uncovers signs that a zero-day vulnerability in the cloud provider's hypervisor is being exploited. Company policy requires that the provider be notified within one hour and must send remediation updates every four hours. According to cloud operations best practices, in which contractual document should these incident-communication timelines already be formally defined so you can hold the vendor accountable?
Data processing addendum (DPA)
Business impact analysis (BIA) report
Statement of work (SOW)
Service level agreement (SLA)
Answer Description
A service level agreement (SLA) is the portion of the cloud services contract that spells out measurable service commitments, including availability targets, support response times, and security-incident notification and escalation requirements. By embedding time-bound communication and remediation obligations in the SLA, the cloud customer gains a contractual basis for enforcing prompt vendor response. A statement of work focuses on project deliverables rather than ongoing operational duties, a business impact analysis is an internal risk assessment tool, and a data processing addendum governs privacy and data protection terms, not real-time incident reporting expectations.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is the SLA the appropriate document for defining incident communication timelines?
What is the difference between an SLA and a SOW in cloud agreements?
How does embedding incident-response obligations in an SLA benefit cloud customers?
An e-commerce firm runs its application in a single public cloud region. Management mandates a disaster recovery solution that can resume service within 2 hours after a regional outage and lose at most 15 minutes of data, but budget constraints rule out a fully active-active deployment. Which cloud DR pattern best satisfies these RTO and RPO targets while controlling cost?
Pilot light architecture with continuous data replication and scripted instance provisioning
Backup-and-restore using daily snapshots stored in object storage
Active-active multi-site deployment with global load balancing
Warm standby environment with scaled-down but running duplicate infrastructure
Answer Description
A pilot light strategy keeps a minimal copy of the production environment (core databases and critical services) continuously replicated in a secondary region. Because only essential components are running, operating costs stay low. When a disaster occurs, additional application servers can be started from pre-created images and the data store is promoted, allowing service restoration in tens of minutes (well within the 2-hour RTO) and limiting data loss to the replication window of a few minutes, meeting the 15-minute RPO. Backup-and-restore usually exceeds both the RTO and RPO because large volumes must be restored. Warm standby keeps a scaled-down but fully functional environment continuously running, offering faster recovery but at higher ongoing cost than necessary. An active-active multi-site setup delivers near-zero RTO/RPO but is the most expensive option and was explicitly ruled out.
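As a sketch of what the scripted recovery could look like in an AWS-style pilot light (all identifiers and the region are placeholders), failover amounts to promoting the continuously replicated database and launching the application tier from pre-built images:

```python
"""Hedged sketch: scripted pilot-light failover in a secondary region (AWS-style example).

All identifiers and the region are placeholders; the pattern is to promote the replicated
data store, launch app servers from golden images, then cut traffic over to the new region.
"""
import boto3

REGION = "eu-west-1"                      # recovery region (placeholder)

def fail_over() -> None:
    rds = boto3.client("rds", region_name=REGION)
    ec2 = boto3.client("ec2", region_name=REGION)

    # 1. Promote the continuously replicated read replica to a writable primary (drives RPO).
    rds.promote_read_replica(DBInstanceIdentifier="shop-db-replica")

    # 2. Launch the application tier from pre-created images (drives RTO).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder golden image
        InstanceType="m5.large",
        MinCount=4,
        MaxCount=4,
    )

    # 3. Cut DNS/traffic over to the recovery region (e.g. failover routing records).

if __name__ == "__main__":
    fail_over()
```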
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are RTO and RPO in disaster recovery?
How does the pilot light architecture help meet the RTO and RPO targets?
What are the key differences between pilot light architecture and warm standby?
A security architect is designing an on-premises OpenStack deployment. Corporate policy states that every compute host must validate the integrity of its BIOS, bootloader, and type-1 hypervisor at power-on, and must be able to prove its trusted state to the cloud controller before any tenant workloads are started. The team wants to use the cryptographic chip that is already soldered onto most enterprise server motherboards and avoid adding external devices. Which mechanism BEST satisfies these requirements?
Implement a Trusted Platform Module (TPM) on each host and enable secure/measured boot with remote attestation.
Install a dedicated Hardware Security Module (HSM) cluster to store encryption keys for the hypervisor.
Use self-encrypting drives (SEDs) that automatically wipe keys on reboot to prevent unauthorized boot tampering.
Deploy Network Access Control (NAC) using 802.1X to authenticate servers before they join the management VLAN.
Answer Description
A Trusted Platform Module (TPM) provides a hardware root of trust that can securely store cryptographic measurements of the BIOS, bootloader, and hypervisor during the boot process. Using secure or measured (trusted) boot, the TPM can sign these measurements so that a remote attestation service (such as OpenStack's Trusted Compute or a cloud provider's attestation service) can verify that the host has not been tampered with before allowing it to join the resource pool.
A Hardware Security Module (HSM) mainly protects cryptographic keys for applications and does not measure or attest to the integrity of the boot chain. Network Access Control (802.1X) governs port-level network admission and does nothing to ensure firmware or hypervisor integrity. Self-encrypting drives safeguard data at rest but cannot validate the overall platform's software stack at boot time. Therefore, deploying TPM-based secure/trusted boot is the correct control to meet the stated policy.
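At its core, the attestation decision is a comparison of the measurements reported by the host's TPM against known-good ("golden") values recorded for a trusted build. The sketch below shows only that comparison step; quote signature verification, nonce handling, and the PCR numbering are simplified placeholders.

```python
"""Hedged sketch: the comparison step of remote attestation.

Quote signature and nonce verification are omitted; golden values and PCR indexes
are placeholders taken from a known-good build of the compute host.
"""

GOLDEN_PCRS = {
    0: "sha256:<known-good BIOS/firmware measurement>",
    4: "sha256:<known-good bootloader measurement>",
    8: "sha256:<known-good hypervisor measurement>",
}

def host_is_trusted(reported_pcrs: dict[int, str]) -> bool:
    """Admit the host to the pool only if every measured PCR matches its golden value."""
    return all(reported_pcrs.get(index) == value for index, value in GOLDEN_PCRS.items())

# Attestation service decision for one compute host:
reported = dict(GOLDEN_PCRS)            # pretend the host reported matching measurements
print("schedule workloads" if host_is_trusted(reported) else "quarantine host")
```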
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Trusted Platform Module (TPM)?
How does secure/measured boot work with a TPM?
What is remote attestation in relation to TPM?
During a security design review for a new cloud-based file-sharing platform, you are asked how the service will allow storage nodes to detect if a single bit in an object has been altered-whether by disk corruption or a malicious actor-simply by recomputing and comparing a checksum stored with the file. Which characteristic of a well-designed cryptographic hash function is essential for providing this type of integrity assurance?
Avalanche effect
Key stretching
Confusion
Forward secrecy
Answer Description
A fundamental requirement for detecting even the smallest modification to data is that the hash output change dramatically when any single bit of the input changes. This property is known as the avalanche effect. Without the avalanche effect, an attacker or an unintentional error might alter data while producing only a minor or predictable change in the hash, defeating integrity checks. Key stretching strengthens weak passwords against brute-force attacks but is unrelated to change detection. Confusion is a property of ciphers that hides relationships between plaintext, ciphertext and key, not hashes. Forward secrecy concerns session key derivation in encryption protocols and has no bearing on file integrity monitoring.
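The property is easy to demonstrate with any modern hash function: flipping a single bit of the input produces a completely different digest, so the stored checksum no longer matches. A quick illustration with SHA-256:

```python
"""Quick illustration of the avalanche effect with SHA-256."""
import hashlib

original = b"quarterly-report-v1.pdf contents ..."
tampered = bytearray(original)
tampered[0] ^= 0x01                      # flip a single bit

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tampered)).hexdigest()
print(h1)
print(h2)                                # bears no resemblance to h1

# Integrity check as the storage node would perform it:
stored_checksum = h1
print("intact" if hashlib.sha256(bytes(tampered)).hexdigest() == stored_checksum else "altered")
```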
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the avalanche effect in cryptographic hash functions?
How does a cryptographic hash function differ from encryption?
What other properties make a cryptographic hash function secure?
During an investigation in an IaaS environment, a security engineer discovers that an active Windows Server virtual machine may be exfiltrating sensitive data. The engineer can immediately perform any of the following actions: request a hypervisor memory dump of the running VM, trigger a crash-consistent snapshot of its virtual disks, download the cloud provider's API access logs, or retrieve the last 24 hours of firewall logs. To best preserve digital evidence in line with the accepted order of volatility, which action should the engineer perform first?
Download the cloud provider's API access logs before they are overwritten.
Trigger an immediate crash-consistent snapshot of the VM's virtual disks.
Request a hypervisor-level memory snapshot of the live virtual machine.
Collect the past 24 hours of firewall logs from the provider's archive.
Answer Description
Forensic collection generally follows the order of volatility principle: data most likely to change or disappear is captured before less-volatile information. In a live virtual machine, RAM contains running processes, encryption keys, network connections, and other transient artifacts that can be lost as soon as the system is shut down or altered. A hypervisor-level memory snapshot captures this volatile data with minimal impact on the guest. Virtual disk snapshots, API logs, and archived firewall logs are all less volatile because they are written to persistent storage and can be retrieved later without significant risk of loss or alteration. Therefore, acquiring the memory dump first best preserves critical evidence while maintaining its integrity.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the order of volatility?
Why is a hypervisor memory snapshot important during forensic investigations?
How does a crash-consistent snapshot differ from a hypervisor memory snapshot?
While performing routine monitoring, a cloud security engineer for a SaaS provider discovers evidence that personal data of EU residents may have been exposed through a misconfigured object storage bucket. The incident response team has confirmed that a personal data breach has likely occurred. In order to remain compliant with the General Data Protection Regulation (GDPR), what is the latest time frame the provider has to notify the competent supervisory authority after becoming aware of the breach, assuming no justification for delay?
Immediately (within one hour) regardless of any investigation.
Within 7 calendar days of confirming the breach.
No later than 72 hours after becoming aware of the breach.
Within 24 hours of detecting the breach.
Answer Description
GDPR Article 33 requires that the controller notify the competent supervisory authority "without undue delay and, where feasible, not later than 72 hours after having become aware of it." The regulation only permits delays beyond this window when a reasoned justification can be documented. A 24-hour window is sometimes recommended by best-practice guidance but is not mandated. Seven days or an immediate one-hour deadline are not recognized by GDPR and would either create unnecessary operational burden or fall short of the legal requirement.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the GDPR and who does it apply to?
What is an object storage bucket and how does it relate to data breaches?
What happens if an organization fails to report a GDPR breach within 72 hours?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.