ISC2 Certified Cloud Security Professional (CCSP) Practice Test
Use the form below to configure your ISC2 Certified Cloud Security Professional (CCSP) Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 Certified Cloud Security Professional (CCSP) Information
About the ISC2 Certified Cloud Security Professional (CCSP) Exam
The Certified Cloud Security Professional (CCSP) certification from ISC2 is a globally recognized credential that validates an individual's advanced technical skills and knowledge to design, manage, and secure data, applications, and infrastructure in the cloud. Earning the CCSP demonstrates a professional's expertise in cloud security architecture, design, operations, and service orchestration. The latest version of the CCSP exam, updated in August 2024, consists of 125 multiple-choice questions that candidates have three hours to complete. To pass, a candidate must score at least 700 out of 1000 points. The exam questions are designed to be scenario-based, assessing a practitioner's ability to apply their knowledge in real-world situations.
Core Domains of the CCSP Exam
The CCSP exam is structured around six core domains, each with a specific weighting. These domains encompass the full spectrum of cloud security. The domains and their respective weights are: Cloud Concepts, Architecture and Design (17%), Cloud Data Security (20%), Cloud Platform & Infrastructure Security (17%), Cloud Application Security (17%), Cloud Security Operations (16%), and Legal, Risk and Compliance (13%). To be eligible for the exam, candidates generally need a minimum of five years of cumulative, full-time experience in Information Technology. This must include three years in cybersecurity and one year in one or more of the six CCSP domains.
The Value of Practice Exams in Preparation
Thorough preparation is key to success on the CCSP exam, and taking practice exams is a highly effective strategy. Practice tests help candidates to assess their knowledge, identify areas of weakness across the six domains, and become familiar with the question format and exam structure. By simulating the actual exam environment, practice questions also allow candidates to improve their time management skills and build confidence. Regularly reviewing mistakes made on practice tests provides an opportunity to revisit and reinforce challenging concepts, personalizing the study strategy for a more efficient and effective preparation process.

Free ISC2 Certified Cloud Security Professional (CCSP) Practice Test
- 20 Questions
- Unlimited time
- Cloud Concepts, Architecture and Design; Cloud Data Security; Cloud Platform & Infrastructure Security; Cloud Application Security; Cloud Security Operations; Legal, Risk and Compliance
Free Preview
This test is a free preview, no account required.
Your organization is moving an internal HR application to virtual machines hosted in a public IaaS environment. Security policy requires that employees continue to authenticate with their on-premises Active Directory credentials and that only the HR support group may administer the cloud resources used by the application. Which identity and access control solution best meets these requirements while honoring least-privilege principles?
Create individual IAM users in the cloud provider and enforce complex password rotation policies.
Embed shared root-level SSH keys into the VM images and distribute the key pair to the HR team.
Permit anonymous access to the cloud resource endpoints and rely solely on application-level authentication.
Configure SAML 2.0 federation between Active Directory Federation Services and the cloud provider, mapping AD groups to fine-grained IAM roles.
Answer Description
Federating the cloud provider with the corporate identity store using SAML 2.0 allows users to present their existing Active Directory credentials through single sign-on. Group claims in the SAML assertion can be mapped to narrowly scoped IAM roles so that only members of the HR support group receive the administration privileges needed for the workload, satisfying the principle of least privilege. Creating local cloud accounts would duplicate identities and require additional password management. Anonymous access removes all authentication, conflicting with policy. Embedding shared root-level SSH keys provides no fine-grained authorization and violates least-privilege requirements.
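To illustrate how group claims can drive least-privilege role assignment, here is a minimal Python sketch; the group names, role names, and mapping logic are hypothetical stand-ins for the federation and IAM configuration a real provider would use.

```python
# Illustrative sketch (not a specific provider's API): map group claims from a
# SAML assertion to narrowly scoped cloud roles so only the HR support group
# receives administrative rights. Group and role names are hypothetical.
GROUP_TO_ROLE = {
    "HR-Support": "hr-app-admin",   # administer the HR application's cloud resources
    "HR-Staff": "hr-app-user",      # use the application only
}

def roles_for_assertion(group_claims: list[str]) -> set[str]:
    """Return the least-privilege role set implied by the SAML group claims."""
    # Anyone without a mapped group receives no cloud-side entitlements at all.
    return {GROUP_TO_ROLE[g] for g in group_claims if g in GROUP_TO_ROLE}

if __name__ == "__main__":
    print(roles_for_assertion(["HR-Staff"]))                # {'hr-app-user'}
    print(roles_for_assertion(["HR-Support", "HR-Staff"]))  # admin + user roles
    print(roles_for_assertion(["Finance"]))                 # empty set
```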
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is SAML 2.0 federation and how does it work?
Why is configuring local cloud IAM accounts a poor choice compared to SAML federation?
What are the key advantages of enforcing least-privilege principles in access control?
A company is concerned that virtual machines in its public-cloud VPC can still initiate east-west connections to other subnets even when each subnet has its own network security group (NSG). The cloud security architect is told to move toward a zero-trust model so that every packet between workloads is evaluated against identity, device posture, and real-time context instead of static IP rules. Which control BEST meets this requirement without adding a traditional perimeter firewall appliance?
Implement microsegmentation with an identity-aware, software-defined firewall that applies tag-based policies at each workload.
Migrate the workloads into a private cloud and separate them with dedicated VLANs.
Deploy a traditional next-generation firewall at the VPC's internet gateway to inspect all traffic.
Broaden the NSG CIDR ranges so all subnets are included under a single ruleset.
Answer Description
Zero-trust networking assumes no implicit trust based on location inside the VPC. Microsegmentation tools embed a software-defined firewall on or very close to every workload and build policies tied to verified identity, tags, and context (such as device health or time of day). Because rules follow the workload and are evaluated for every east-west flow, this approach enforces the zero-trust principle of continuous, identity-centric verification. Perimeter firewalls and VLAN moves rely on coarse network boundaries, while widening CIDR blocks further weakens isolation.
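A rough Python sketch of the idea follows; the tags, service identities, rule format, and posture check are illustrative placeholders for what a microsegmentation product would evaluate on each east-west flow.

```python
# Minimal sketch of tag-based, identity-aware policy evaluation for east-west
# flows. Tags, identities, and the posture flag are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Workload:
    identity: str          # verified service identity, not an IP address
    tags: frozenset
    device_healthy: bool   # stand-in for real-time posture/context

RULES = [
    # (source tag, destination tag, allowed destination port)
    ("app:hr-web", "app:hr-db", 5432),
]

def allow_flow(src: Workload, dst: Workload, port: int) -> bool:
    if not (src.device_healthy and dst.device_healthy):
        return False  # context check fails -> deny, regardless of network location
    return any(s in src.tags and d in dst.tags and port == p for s, d, p in RULES)

web = Workload("spiffe://corp/hr-web", frozenset({"app:hr-web"}), True)
db = Workload("spiffe://corp/hr-db", frozenset({"app:hr-db"}), True)
print(allow_flow(web, db, 5432))   # True  - explicitly permitted flow
print(allow_flow(db, web, 22))     # False - no matching rule, so default deny
```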
Ask Bash
What is microsegmentation in cloud security?
What is the meaning of 'east-west traffic' in a network?
How does zero-trust differ from traditional perimeter security approaches?
During a design workshop for a new DevOps platform, your team insists that developers must be able to spin up and tear down virtual machines and databases at any time through a web portal or API, without opening tickets with the provider's operations staff. In NIST's cloud definition, which essential characteristic directly addresses this requirement?
Measured service
On-demand self-service
Broad network access
Resource pooling
Answer Description
NIST Special Publication 800-145 lists five essential characteristics of cloud computing. "On-demand self-service" means a consumer can unilaterally provision computing capabilities, such as server time and network storage, automatically without requiring human interaction with each service's provider. This matches the requirement that developers provision and de-provision resources whenever needed via a portal or API. Resource pooling refers to the provider's shared resource model, broad network access concerns ubiquitous connectivity, and measured service is about metering resource use. None of those characteristics alone guarantee users can obtain resources without provider involvement, so they are incorrect.
Ask Bash
What is NIST's definition of 'On-demand self-service'?
How does 'On-demand self-service' help developers in a DevOps environment?
Why don't 'Resource pooling,' 'Broad network access,' or 'Measured service' fulfill this requirement?
During a quarterly budget review, a company's CFO asks the cloud architect how the organization can accurately allocate infrastructure costs to each department based on the exact amount of virtual CPU, storage, and network bandwidth they consume. Which essential cloud computing characteristic directly enables this type of departmental chargeback model?
Broad network access
Multi-tenancy
Rapid elasticity
Measured service
Answer Description
Measured service is one of the NIST-defined essential characteristics of cloud computing. It refers to the cloud provider's ability to automatically control and optimize resource use by leveraging metering capabilities that monitor, report, and bill on a per-usage basis. Because consumption is precisely measured, an organization can implement chargeback or show-back processes tied to actual resource utilization. Rapid elasticity deals with automatic scaling, broad network access addresses ubiquitous connectivity, and multi-tenancy describes logically isolated sharing of pooled resources; none of these characteristics alone provide the detailed metering required for cost allocation.
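As a simple illustration of how metered usage enables chargeback, the sketch below totals made-up usage records against made-up unit rates; none of the figures come from a real provider's pricing.

```python
# Rough sketch of a chargeback calculation driven by metered usage.
# Unit rates and usage records are invented purely for illustration.
RATES = {"vcpu_hours": 0.04, "gb_storage_month": 0.02, "gb_egress": 0.09}

usage = [
    {"department": "HR",      "vcpu_hours": 1200, "gb_storage_month": 500,  "gb_egress": 40},
    {"department": "Finance", "vcpu_hours": 3000, "gb_storage_month": 2000, "gb_egress": 150},
]

def chargeback(record: dict) -> float:
    """Sum each metered dimension multiplied by its unit rate."""
    return sum(record[metric] * rate for metric, rate in RATES.items())

for record in usage:
    print(f'{record["department"]}: ${chargeback(record):,.2f}')
```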
Ask Bash
What is the NIST definition of Measured Service in cloud computing?
How does Measured Service contribute to cost allocation in an organization?
What technologies enable Measured Service in cloud computing?
A financial services firm stores customer information in a cloud-hosted relational database. You are asked to implement automated discovery of personally identifiable information (PII) so that the data protection team can track where sensitive fields are located before applying controls. Which approach is most suitable for discovering PII that resides in this structured data set while keeping the rate of false positives low?
Export all tables to flat files and run regular-expression searches for Social Security number and credit-card patterns across the dumps.
Deploy an agentless network DLP appliance to inspect outbound SQL traffic for PII signatures as users query the database.
Encrypt the entire database with fully homomorphic encryption so discovery tools can scan the ciphertext without exposure.
Analyze the database's system catalog and column metadata to identify fields whose names, data types, or built-in sensitivity tags indicate they may contain PII.
Answer Description
Because the data reside in a relational (structured) database, the quickest and most accurate way to discover PII is to query the database's system catalog to obtain table and column definitions, data types, and any existing sensitivity or classification tags. Leveraging this metadata lets a discovery tool focus directly on columns likely to contain PII (for example, CHAR(9) columns named SSN or customer_ssn), reducing the need for pattern matching across every row and thus minimizing false positives.
Scanning exported flat files with regular expressions can locate patterns but typically produces many false positives and misses context such as column semantics. Network DLP only observes data in motion; it does not enumerate where data are stored inside the database. Applying homomorphic encryption is a protection technique, not a discovery method, and would actually make content inspection impossible unless decrypted first. Therefore, using the database catalog and schema metadata is the most appropriate discovery technique for structured data.
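The sketch below shows the metadata-driven approach in miniature: it filters hard-coded catalog rows by column name hints, standing in for a query against the database's system catalog (for example, information_schema.columns on platforms that expose it).

```python
# Simplified sketch of metadata-driven PII discovery: inspect column names and
# types from the catalog instead of scanning every row. The catalog rows are
# hard-coded stand-ins for a real system-catalog query.
PII_NAME_HINTS = ("ssn", "social_security", "dob", "email", "phone")

catalog = [
    {"table": "customers", "column": "customer_ssn",  "data_type": "char(9)"},
    {"table": "customers", "column": "signup_date",   "data_type": "date"},
    {"table": "support",   "column": "contact_email", "data_type": "varchar(255)"},
]

def likely_pii(col: dict) -> bool:
    name = col["column"].lower()
    return any(hint in name for hint in PII_NAME_HINTS)

for col in catalog:
    if likely_pii(col):
        print(f'Candidate PII column: {col["table"]}.{col["column"]} ({col["data_type"]})')
```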
Ask Bash
What is a database system catalog?
How do sensitivity tags help in PII discovery?
Why is regular-expression searching prone to false positives?
Your organization is refactoring a monolithic web application into stateless microservices that will run in containers managed by a cloud-native orchestration platform. Management wants the new deployment to add or remove service instances automatically as traffic fluctuates, without manual administrator intervention. Which core capability of container orchestration platforms most directly enables this requirement and aligns with the cloud characteristic of rapid elasticity?
Sharing the underlying host operating system kernel to minimize virtualization overhead
Use of overlay networking to decouple container networks from physical hosts
Automatic horizontal scaling of containers based on real-time resource or application metrics
Built-in secret management for injecting credentials at container start-up
Answer Description
Container orchestration frameworks such as Kubernetes and Amazon Elastic Kubernetes Service include native auto-scaling features (for example, the Horizontal Pod Autoscaler) that monitor metrics like CPU usage or custom application signals and automatically start or terminate additional container replicas to match demand. This on-demand horizontal scaling delivers the rapid elasticity promised by cloud computing. While sharing a host kernel, overlay networking, and secret management are valuable functions, none of them directly provide the capability to grow and shrink the number of running service instances in response to load.
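For a concrete sense of the scaling decision, this small Python sketch mirrors the replica-count formula documented for the Kubernetes Horizontal Pod Autoscaler; the CPU figures and the min/max bounds are arbitrary examples.

```python
# Sketch of the scaling decision a horizontal autoscaler makes. The formula
# follows the documented HPA behavior:
#   desired = ceil(current_replicas * current_metric / target_metric)
import math

def desired_replicas(current_replicas: int, current_cpu_pct: float,
                     target_cpu_pct: float, min_r: int = 2, max_r: int = 20) -> int:
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_r, min(max_r, desired))  # clamp to configured bounds

print(desired_replicas(4, current_cpu_pct=90, target_cpu_pct=60))  # scale out to 6
print(desired_replicas(6, current_cpu_pct=20, target_cpu_pct=60))  # scale in to 2
```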
Ask Bash
What is horizontal scaling in container orchestration platforms?
How does Kubernetes implement auto-scaling?
What is rapid elasticity in cloud computing?
Your operations team reports that the public-cloud virtual machines hosting the company's e-commerce site are saturated every Friday night. Management wants the environment to automatically add or remove instances in real time so performance stays steady and charges reflect only what is actually consumed, without opening tickets or calling the provider. According to the NIST definition of cloud computing, which essential characteristic addresses this requirement?
On-demand self-service
Rapid elasticity
Resource pooling
Measured service
Answer Description
NIST Special Publication 800-145 lists rapid elasticity as one of the five essential cloud characteristics. Rapid elasticity means resources can be provisioned and released quickly, often automatically, to scale out or in commensurate with demand, and to appear to customers as unlimited. On-demand self-service allows unilateral provisioning but does not inherently include automatic scaling. Resource pooling is about multi-tenant sharing of provider resources, and measured service refers to metering and billing, not dynamic capacity expansion.
Ask Bash
What is rapid elasticity in cloud computing?
How does rapid elasticity differ from on-demand self-service?
What role does resource pooling play in a cloud environment?
Your organization is standardizing on a single data classification policy (Public, Internal, Confidential, Restricted) before migrating workloads to AWS, Azure, and GCP. Planned controls, such as automatic encryption, data loss prevention, and lifecycle rules, will trigger from metadata tags that carry the classification value on every object or datastore. Which planning decision will most help prevent gaps in those controls as data moves between the three cloud platforms?
Permit project teams to define additional custom classification levels so they can refine the four-level scheme as needed.
Create separate tag schemes for each provider and translate the labels through an API proxy when data is replicated.
Adopt a uniform, provider-agnostic set of classification tags that uses the same names and format in every cloud account and subscription.
Tag only personally identifiable information (PII) as sensitive and leave all other data untagged to simplify tagging workflows.
Answer Description
To ensure that technical controls activate reliably in each cloud, the same label must appear on every object no matter where it is stored. Defining one authoritative classification taxonomy and requiring all teams to use an identical tag key-value syntax across AWS, Azure, and GCP eliminates the need for per-provider translations and removes the risk that data will lose its meaning when copied or migrated. Relying on provider-specific labels or letting users invent their own introduces inconsistency; focusing only on PII overlooks other sensitive data classes and does not address the enforcement consistency problem.
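A minimal sketch of pre-deployment tag validation is shown below; the tag key and allowed values reflect the four-level scheme in the scenario, while everything else is illustrative.

```python
# Small sketch of enforcing one provider-agnostic tag scheme before a resource
# is created in any cloud. The tag key and allowed values mirror the single
# corporate taxonomy; the sample inputs are invented.
CLASSIFICATION_KEY = "data-classification"
ALLOWED_VALUES = {"public", "internal", "confidential", "restricted"}

def validate_tags(tags: dict) -> list[str]:
    """Return a list of problems; an empty list means the tags are compliant."""
    problems = []
    value = tags.get(CLASSIFICATION_KEY)
    if value is None:
        problems.append(f"missing required tag '{CLASSIFICATION_KEY}'")
    elif value.lower() not in ALLOWED_VALUES:
        problems.append(f"unknown classification value '{value}'")
    return problems

print(validate_tags({"data-classification": "Confidential"}))  # [] -> compliant
print(validate_tags({"sensitivity": "C3"}))  # provider-specific scheme is rejected
```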
Ask Bash
Why is a uniform, provider-agnostic set of classification tags important in a multi-cloud environment?
What problems arise from using provider-specific or user-defined classification tags?
Why is it insufficient to tag only PII as sensitive in this scenario?
Your organization runs several public IaaS virtual machines that process regulated data. The security team is worried that a malicious tenant hosted on the same physical server could exploit a hypervisor weakness to escape its guest instance and gain access to your workloads. Which cloud-specific threat category best describes this concern?
Exploitation of shared technology vulnerabilities resulting in tenant isolation failure
Vendor lock-in that limits workload portability between providers
Phishing attacks against the cloud management console
Data remanence caused by insufficient media sanitization
Answer Description
The scenario involves an attacker in one tenant environment leveraging a hypervisor or other shared infrastructure weakness to break the logical separation that normally isolates customers in a multi-tenant cloud. This is commonly classified as an exploitation of shared technology vulnerabilities or isolation failure. Phishing targets user credentials rather than hypervisor flaws; data remanence deals with residual data on retired media; vendor lock-in is a business risk, not a technical cross-tenant threat.
Ask Bash
What is a hypervisor in cloud computing?
How does tenant isolation work in a multi-tenant cloud environment?
What is exploitation of shared technology vulnerabilities?
While performing a risk assessment on a public IaaS provider, you discover that customer virtual machines are frequently live-migrated between hosts for load balancing. The migration traffic travels across an unsegmented management network and is not encrypted. Which risk should you flag as the most significant to confidential data handled by a tenant workload?
Complicated guest operating system patch schedules caused by host reallocation
Exposure of in-memory tenant data to interception during migration traffic
Temporary performance degradation on the VM due to increased hypervisor overhead
Violation of per-CPU software licensing as the VM lands on differently licensed hosts
Answer Description
During live migration, the entire contents of a VM's memory and CPU state are streamed across the management network. If that traffic is sent in clear text on an unsegmented network, a malicious insider or attacker with access to the network can capture sensitive data such as encryption keys or personally identifiable information resident in RAM. Performance impact, software licensing, and guest OS patching are operational considerations, but they do not directly threaten the confidentiality of data in transit during migration, making data exposure the primary risk in this scenario.
Ask Bash
What is live migration in the context of IaaS providers?
Why is encryption important for live migration traffic?
What is an unsegmented management network, and why is it a risk?
Your organization is evaluating cloud providers. Developers insist they must be able to create, modify, and delete virtual machines through a web portal or API at any time without opening support tickets. Which NIST-defined cloud characteristic must the provider explicitly demonstrate to satisfy this requirement?
On-demand self-service
Measured service
Resource pooling
Rapid elasticity
Answer Description
The scenario describes the customer provisioning, altering, and de-provisioning resources unilaterally through a self-service interface. NIST identifies this as on-demand self-service. Rapid elasticity refers to the automatic scaling of capacity, resource pooling concerns abstracting and sharing provider resources among multiple customers, and measured service relates to metering and chargeback; all are important, but none guarantees user-initiated provisioning without provider interaction.
Ask Bash
What is 'on-demand self-service' in cloud computing?
How does 'on-demand self-service' differ from 'rapid elasticity'?
Why is 'on-demand self-service' important for developers?
During a project briefing, the CIO notes that the cloud provider will draw CPU, memory and storage from a shared hardware platform and dynamically allocate those resources to any tenant that needs them, while shielding customers from the exact physical location of their workloads. According to the NIST definition of cloud computing, which essential characteristic is the CIO describing?
Rapid elasticity
Resource pooling
On-demand self-service
Broad network access
Answer Description
The described scenario matches the NIST essential characteristic called "resource pooling." Resource pooling refers to the provider's use of a multi-tenant model to serve multiple customers with dynamically assigned physical and virtual resources, with location independence so customers cannot tell (and often do not control) the exact hardware in use. Rapid elasticity focuses on quick scaling, broad network access emphasizes ubiquitous connectivity, and on-demand self-service involves customers provisioning resources without human interaction. None of those specifically require resources to be abstracted from their physical location and shared among tenants; that capability is unique to resource pooling.
Ask Bash
What does 'resource pooling' mean in cloud computing?
What is the difference between resource pooling and rapid elasticity?
Why is the physical location of workloads hidden in resource pooling?
Your organization is a SaaS provider hosting its application on a fleet of Linux-based virtual machines in a public cloud. A critical vulnerability in the OS kernel has just been disclosed and a vendor patch is available. To follow sound cloud security hygiene and minimize configuration drift, which action should the provider take first?
Update and test the hardened golden image in a staging environment, then redeploy instances from this new baseline.
Apply a network egress block on the affected VMs and plan to revisit patching during the next regular maintenance window.
Push the patch to every production VM immediately, skipping testing to reduce exposure.
E-mail customers advising them to apply the patch because guest OS maintenance is their responsibility.
Answer Description
A SaaS provider owns responsibility for the entire application stack, including the guest operating system. Good hygiene requires that patches be incorporated into the standard build so every new or rebuilt instance starts from a known-good state. The recommended first step is therefore to update the hardened golden image, verify it in a staging environment, and then redeploy or rebuild production instances from that patched baseline. Patching live production systems without testing risks stability issues, simply notifying customers shifts responsibility incorrectly, and relying on network blocks leaves the vulnerability un-remediated.
Ask Bash
What is a hardened golden image?
Why is testing patches in a staging environment important?
Why does a SaaS provider have responsibility for the guest OS?
Your organization collects security logs from cloud-hosted virtual machines and must keep them for potential litigation. The security architect needs to ensure that any individual log file can later be shown to be (1) exactly the same bits that were gathered at collection time and (2) unquestionably linked to the administrator who performed the collection. Which approach best meets both chain-of-custody and non-repudiation requirements for each log file?
Digitally sign each log file with the organization's root CA private key and record the signature hash on a blockchain ledger.
Generate a SHA-256 hash of the log at collection, then place the hash, collection timestamp, and collector's certificate inside a digitally signed manifest kept with the file.
Write logs directly to a storage bucket configured with write-once-read-many (WORM) retention and governance-mode legal hold.
Encrypt each log file with AES-256 and store the encryption key in the cloud provider's key-management service.
Answer Description
Chain of custody demands a verifiable record that the evidence has not been altered from the moment it is collected. Calculating a cryptographic hash (such as SHA-256) at the time of collection establishes an integrity reference. Storing that hash inside a time-stamped manifest that is itself digitally signed with the collector's X.509 private key creates an auditable record tying the evidence to a specific, authenticated identity. The digital signature delivers non-repudiation because the signer cannot later deny performing the action, and any change to the manifest or the log alters the signature validation. Simply encrypting data, enabling WORM storage, or writing to a blockchain may help with confidentiality or tamper evidence, but without a collector-bound digital signature and an original hash, they do not fully satisfy both integrity across the chain of custody and non-repudiation.
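The sketch below walks through that workflow with Python's hashlib and the third-party cryptography package; Ed25519 keys stand in for the collector's X.509 credentials, and the file name and collector identity are invented for illustration.

```python
# Minimal sketch of the evidence workflow: hash the log at collection time,
# build a manifest, and digitally sign it. Ed25519 stands in for the
# collector's X.509-based signature; file and identity values are illustrative.
import hashlib, json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def collect(log_path: str, collector_id: str, private_key: Ed25519PrivateKey):
    with open(log_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()   # integrity reference
    manifest = json.dumps({
        "file": log_path,
        "sha256": digest,
        "collected_at": time.time(),
        "collector": collector_id,
    }, sort_keys=True).encode()
    signature = private_key.sign(manifest)              # binds the collector to the record
    return manifest, signature

key = Ed25519PrivateKey.generate()
with open("app.log", "wb") as f:
    f.write(b"2024-08-01T00:00:00Z login failed for admin\n")
manifest, sig = collect("app.log", "ops-analyst-01", key)
key.public_key().verify(sig, manifest)   # raises InvalidSignature if anything changed
print("manifest verified:", json.loads(manifest)["sha256"][:16], "...")
```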
Ask Bash
What is the purpose of a SHA-256 hash in relation to chain of custody and data integrity?
How does a digitally signed manifest assist with non-repudiation?
What are the advantages of storing logs on a blockchain compared to a digitally signed manifest?
A security architect must choose a data loss prevention (DLP) deployment option that allows the organization to continuously inspect files already stored in sanctioned SaaS applications such as Microsoft 365 and Box. The solution must not require tunneling user traffic through an on-premises proxy or installing new endpoint agents. Which approach best meets these requirements?
Deploy endpoint DLP agents on all user devices to monitor file activity.
Insert an SMTP relay with DLP capabilities in front of the corporate mail server.
Route traffic through an on-premises secure web gateway using the ICAP protocol for DLP inspection.
Use an API-based cloud DLP/CASB connector to the SaaS tenant.
Answer Description
API-based integration with the cloud application (often delivered by a CASB or cloud-resident DLP engine) authenticates to the SaaS provider and scans the tenant's existing content at rest. Because inspection occurs directly via the provider's APIs, no network redirection or host agent is needed.
- A secure web gateway or ICAP proxy can only see data in motion that passes through the proxy; it cannot reach content already stored in the cloud.
- Endpoint DLP agents inspect data in use or in motion on the host but do not have native access to files that were uploaded earlier from another device.
- An SMTP DLP gateway inspects email traffic only and offers no visibility into files stored in collaboration platforms. Therefore, API-based DLP is the only option that satisfies continuous inspection of at-rest SaaS data without altering network paths or endpoints.
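A toy sketch of the idea: content retrieved at rest through the provider's API (represented here by a plain list of file records) is inspected centrally, with no proxy or agent involved; the records and the single SSN regex are illustrative only.

```python
# Conceptual sketch of API-based DLP: files already stored in the SaaS tenant
# are enumerated via the provider's API (simulated by a list of records here)
# and scanned for sensitive patterns without touching the network path.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

files_from_saas_api = [
    {"name": "benefits-faq.docx",  "content": "Open enrollment starts in May."},
    {"name": "payroll-export.csv", "content": "Jane Doe,123-45-6789,Finance"},
]

for f in files_from_saas_api:
    if SSN_PATTERN.search(f["content"]):
        print(f'Policy violation in {f["name"]}: apply quarantine/label action')
```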
Ask Bash
What is an API-based cloud DLP/CASB connector?
Why can't a secure web gateway using the ICAP protocol fulfill this requirement?
What limitations do endpoint DLP agents have for cloud data inspection?
During a redesign of an e-commerce application hosted in a public IaaS cloud, the web tier will be deployed on dozens of auto-scaled virtual machines. For performance, each VM writes temporary session cache files to local disk, but the data has no value once the VM is terminated. To reduce cost and limit residual-data exposure, which cloud storage type should you specify for those cache volumes?
Long-term object storage class designed for infrequent access and archival
Raw block storage mapped directly to the host's physical disk for persistent use
Ephemeral instance storage that is automatically deleted when the VM is stopped
Network file share backed by durable distributed storage
Answer Description
Ephemeral (instance-attached) storage is provisioned on physical disks that are directly attached to the hypervisor host. Cloud providers automatically wipe this storage when the associated virtual machine is stopped, terminated, or migrated, eliminating residual-data concerns and avoiding charges once the instance is gone, making it ideal for short-lived, non-persistent data such as session caches.
Long-term object storage classes are designed for durable retention and charge for both capacity and retrieval; using them for transient cache files would increase cost and leave data remnants unless explicitly deleted. Raw block storage mapped directly to physical devices is intended for workloads that need persistent low-level disk access (for example, databases) and normally remains allocated until manually detached, so cached data could persist inadvertently. A network file share on durable distributed storage is likewise persistent, incurs additional latency, and would retain the session data beyond the VM's life. Therefore, ephemeral storage best satisfies the requirements of low cost and automatic data destruction.
Ask Bash
What is ephemeral instance storage?
How does ephemeral storage compare to other cloud storage types?
Why is ephemeral storage ideal for session cache files?
A public IaaS provider uses KVM to host multitenant workloads. A critical hypervisor privilege-escalation (VM-escape) flaw that abuses direct device passthrough handling has just been disclosed. While vendor patches are still being validated, which immediate action will most directly reduce the likelihood that a malicious tenant can break out of its guest and reach the host or neighboring tenants?
Enable memory page deduplication so identical memory pages are shared across guest VMs.
Store every tenant's encryption keys inside the same virtual machine that uses them to avoid network exposure.
Disable all PCI, USB, and other device passthrough so guests use only standard virtual devices.
Place each tenant in a separate virtual network and enforce restrictive security group rules.
Answer Description
Disabling all forms of PCI, USB, or other device passthrough removes the vulnerable code path that the newly disclosed flaw exploits, forcing each guest to use only emulated or paravirtualized devices that remain under the hypervisor's complete control. This action immediately reduces the probability of guest-to-host escape without waiting for patch deployment. Network segmentation, key placement, and memory deduplication do not address the hypervisor interface that the exploit targets, so they provide little or no protection against the initial breakout.
Ask Bash
What is a hypervisor privilege-escalation (VM-escape) flaw?
What does device passthrough mean in virtualization?
How does disabling device passthrough protect against hypervisor flaws?
Your organization is building a microservice that will run in a Kubernetes cluster and intends to use a popular open-source reverse-proxy image pulled from a public registry. To satisfy the company policy that mandates deployment of only validated open-source software, which action best demonstrates that the image has been properly validated before it is promoted to the production registry?
Pull the image only from its official repository on Docker Hub, trusting that the maintainers keep it secure and up to date.
Deploy the image in an isolated namespace first and rely on runtime behavioral monitoring to spot suspicious activity.
Fork the image's source code into an internal Git repository and disable automatic updates so the code base remains unchanged.
Scan the container image with an SCA tool to create an SBOM and address any reported CVEs before copying it into the enterprise registry.
Answer Description
Validating open-source software goes beyond simply trusting the origin or isolating the runtime. The security team needs verifiable evidence that the exact bits being deployed are known, scanned, and tracked. Generating a software bill of materials (SBOM) with a Software Composition Analysis (SCA) tool exposes all third-party components inside the image and maps them to known CVEs, enabling remediation or documented risk acceptance. Pulling from an "official" repository or forking the code without scanning provides no assurance about hidden vulnerabilities. Relying only on runtime monitoring detects issues after deployment, not before, and does not meet the definition of pre-deployment validation.
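The sketch below shows the kind of check that follows SBOM generation: components are compared against a vulnerability feed before promotion; the SBOM structure, component versions, and CVE identifier are simplified placeholders rather than real SPDX/CycloneDX output or a real feed.

```python
# Toy sketch of the validation step after an SCA tool produces an SBOM: walk
# the listed components and flag any with known CVEs before the image is
# promoted. Component versions and the CVE identifier are placeholders.
sbom = {
    "image": "reverse-proxy:1.25.3",
    "components": [
        {"name": "openssl", "version": "3.0.8"},
        {"name": "zlib",    "version": "1.2.13"},
    ],
}

known_vulnerabilities = {  # hypothetical feed keyed by (name, version)
    ("openssl", "3.0.8"): ["CVE-XXXX-YYYY"],
}

findings = []
for component in sbom["components"]:
    key = (component["name"], component["version"])
    for cve in known_vulnerabilities.get(key, []):
        findings.append(f'{component["name"]} {component["version"]}: {cve}')

if findings:
    print("Block promotion until remediated or risk-accepted:")
    print("\n".join(findings))
else:
    print("No known CVEs; image may be copied to the enterprise registry.")
```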
Ask Bash
What is an SBOM?
What is an SCA tool and what does it do?
Why is runtime behavioral monitoring not enough for software validation?
A healthcare provider is migrating electronic health record data that includes patient Social Security numbers to a multi-tenant SaaS platform. Regulations state the cloud provider must never be able to view the real SSNs, yet the application must still perform exact-match searches on that field and the organization needs the ability to restore the original values during legal discovery. Which data-protection technique best satisfies these requirements?
Irreversible hashing of the SSN with SHA-256 and a unique salt
Static data masking applied to the SSN before upload
Format-preserving encryption of the SSN using AES-FF1 with client-side key management
Tokenization of the SSN with a centrally managed on-premises token vault
Answer Description
Tokenization replaces the sensitive value with a surrogate (token) and stores the original value in a separate, secure token vault that the cloud provider cannot access. Because the same input will always return the same token (deterministic tokenization), the SaaS application can perform equality searches on the tokenized SSN while the real SSN remains hidden. When necessary, the organization can reverse the process by querying the vault to retrieve the original value.
Hashing with a salt is intentionally one-way, so the original SSN cannot be recovered, violating the legal discovery requirement. Static data masking permanently alters or removes sensitive characters, likewise preventing restoration. Format-preserving encryption is reversible, but without giving the provider access to the encryption key the application could not perform direct equality searches on ciphertext, and exposing the key to the SaaS operator would violate the requirement that the provider never see the real data. Therefore, tokenization with an on-premises mapping vault is the most appropriate choice.
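Here is a minimal sketch of deterministic tokenization with a separate vault; the in-memory dictionary stands in for the on-premises token store, and the token format is arbitrary.

```python
# Simplified sketch of deterministic tokenization with a separate vault. The
# vault is an in-memory dict here; in practice it is the on-premises token
# store that the SaaS provider never sees.
import secrets

class TokenVault:
    def __init__(self):
        self._value_to_token = {}
        self._token_to_value = {}

    def tokenize(self, value: str) -> str:
        # Deterministic: the same SSN always yields the same token, so the
        # SaaS application can still perform exact-match searches on tokens.
        if value not in self._value_to_token:
            token = "TOK-" + secrets.token_hex(8)
            self._value_to_token[value] = token
            self._token_to_value[token] = value
        return self._value_to_token[value]

    def detokenize(self, token: str) -> str:
        # Reversal is only possible with vault access (e.g., legal discovery).
        return self._token_to_value[token]

vault = TokenVault()
t1 = vault.tokenize("123-45-6789")
t2 = vault.tokenize("123-45-6789")
print(t1 == t2)                 # True - equality search still works
print(vault.detokenize(t1))     # original SSN recoverable by the data owner
```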
Ask Bash
What is tokenization and how does it protect sensitive data?
How does deterministic tokenization enable equality searches?
Why is tokenization preferred over encryption for this use case?
A SaaS provider receives an email from a customer asking that five new employee accounts be added to the tenant. The provider's operator signs in to the provider-side management console, creates the accounts, assigns them the correct role, and then informs the customer that the service is ready for use. Under the ISO/IEC cloud reference architecture, which cloud computing activity is the operator performing when creating and configuring those user accounts?
Cloud service provisioning
Cloud service usage
Configure
Cloud service support
Answer Description
ISO/IEC 17788 and ISO/IEC 17789 categorize provider tasks that change the parameters of an already-provisioned cloud-service instance (such as creating or deleting user accounts, assigning roles, modifying quotas, or adjusting policy settings) under the Configure activity. Cloud service usage is carried out by the customer when actually using the application, support involves helping the customer resolve incidents or answer questions, and provisioning refers to the automated creation of the initial service instance. Therefore, the operator's actions constitute the Configure activity.
Ask Bash
What is ISO/IEC 17788 and ISO/IEC 17789?
What does Configure activity mean in the ISO/IEC cloud reference architecture?
How does Configure activity differ from provisioning, usage, and support?