
ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test


ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Information

What is the CSSLP Certification

The Certified Secure Software Lifecycle Professional (CSSLP) from ISC2 validates that a software professional can integrate security best practices into every phase of the development life cycle. While many security credentials focus on infrastructure or operations, CSSLP zeroes in on building security in from the first requirements workshop through retirement of an application. Holding the certification signals to employers and customers that you can help reduce vulnerabilities, meet compliance mandates, and ultimately ship more resilient software.

How the Exam Is Structured

The current CSSLP exam is a computer-based test containing 125 multiple-choice questions delivered over a three-hour session. A scaled score of 700 out of 1,000 is required to pass. Content is distributed across eight domains that mirror the secure software development life cycle: 1) Secure Software Concepts, 2) Secure Software Requirements, 3) Secure Software Architecture & Design, 4) Secure Software Implementation, 5) Secure Software Testing, 6) Secure Software Lifecycle Management, 7) Secure Software Deployment, Operations & Maintenance, and 8) Secure Software Supply Chain. Because any topic in these domains is fair game, candidates need both breadth and depth of knowledge across process models, threat modeling, secure coding, DevSecOps pipelines, and supply-chain risk management.

The Power of Practice Exams

One of the most effective ways to close a knowledge gap and build exam-day confidence is to take high-quality practice exams. Timed drills acclimate you to the three-hour pacing and help you learn how long you can spend on each question before moving on. Equally important, comprehensive explanations (not just answer keys) reveal why a particular choice is correct, which deepens conceptual understanding and highlights recurring exam patterns. Aim to review every explanation—even the questions you answer correctly—to reinforce core principles and discover alternate ways a concept can be tested. Track scores over multiple attempts; trending upward is a reliable indicator that your study plan is working.

Preparation Tips

Begin your study schedule at least eight to twelve weeks out, mapping the official ISC2 exam outline to specific learning resources such as the ISC2 CSSLP CBK, OWASP documentation, and language-specific secure-coding references. After you’ve covered each domain, fold in practice exams and use their analytics to guide targeted review sessions. In the final two weeks, simulate the exam environment: mute notifications, sit for a full three-hour block, and practice reading every question twice before locking in an answer. Coupled with real-world experience and a disciplined study routine, these strategies position you to walk into the testing center on your first attempt and walk out with the CSSLP credential.


Question 1 of 20

During a code review of an analytics platform, you discover that every data analyst logs in through a single database account with read permission to all schemas. Which change would most directly apply the need-to-know aspect of least privilege for these analysts?

  • Issue each analyst a personal account restricted to only the specific schemas they require and set the grants to expire after the project milestone.

  • Move the production database to a read-only replica that all analysts can query, keeping the shared account.

  • Enable transparent data encryption on every tablespace while continuing to use the shared account for access.

  • Require analysts to justify access in the ticketing system but still authenticate through the common account.

Question 2 of 20

Your SaaS platform runs a stateless REST API on container clusters in a single public cloud availability zone. Management asks for greater resiliency so that customer requests continue with minimal disruption if the zone becomes unavailable. Which architectural change best satisfies this requirement while aligning with high-availability design principles?

  • Schedule nightly database and filesystem backups to object storage in a different region for disaster recovery.

  • Enable horizontal pod auto-scaling inside the existing zone to add containers when CPU usage peaks.

  • Place the API behind a global CDN to cache responses and reduce latency for end users.

  • Deploy identical API clusters in a second availability zone and load-balance traffic across both zones using health checks.

Question 3 of 20

During a build pipeline, you need to confirm that a 50 KB JSON configuration file checked into source control has not been tampered with before it is packaged into the container image. Which approach provides the most reliable, automated proof of the file's integrity?

  • Encrypt the file with AES-256 and verify that decryption succeeds before packaging.

  • Store the file on a RAID 1 volume to guarantee bit-level consistency.

  • Compute a SHA-256 digest of the file during each run and compare it to the previously stored baseline hash.

  • Compress the file with gzip and compare the resulting archive size to yesterday's build.
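The digest-comparison pattern at the heart of this question is easy to demonstrate. Below is a minimal Python sketch, assuming a pipeline step that reads the file and compares its SHA-256 digest against a previously stored baseline (the function names and baseline value are illustrative, not from any real build system):

```python
import hashlib
import hmac

def sha256_digest(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_baseline(path: str, baseline_hex: str) -> bool:
    """Return True only if the file's digest matches the stored baseline.

    hmac.compare_digest performs a constant-time comparison.
    """
    return hmac.compare_digest(sha256_digest(path), baseline_hex)
```

A build step would fail the pipeline whenever `verify_baseline` returns False, blocking a tampered file from ever reaching the container image.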

Question 4 of 20

During a secure-update project, you must ensure that desktop clients can verify both the origin and the untampered state of each new executable they download from the company's server. Which control is the most effective way to meet this requirement?

  • Apply code obfuscation to make the binary harder to analyze

  • Digitally sign each release with a trusted code-signing certificate and have clients verify the signature before execution

  • Publish a separate SHA-256 checksum that users can compare after downloading

  • Distribute the files over HTTPS to prevent interception during download

Question 5 of 20

Your organization hosts a public REST API on a single virtual machine in one cloud availability zone. Load testing shows CPU saturation during peak usage, and planned reboots cause several minutes of outage. To align with the availability principle of redundancy, which architectural change will most effectively increase service uptime?

  • Deploy identical API instances in multiple availability zones behind an auto-scaling load balancer.

  • Enable host-based firewall rules that rate-limit all incoming requests above a defined threshold.

  • Upgrade the existing virtual machine to a larger compute class with more CPU and memory.

  • Schedule daily encrypted snapshots of the virtual machine to remote object storage.

Question 6 of 20

During a forensic investigation of an unauthorized database dump, the response team discovers that every application node keeps its audit records in plain-text files on the local disk and developers can alter or delete those files at will. Which control, if it had been implemented, would have most directly preserved accountability for the actions that led to the breach?

  • Scheduling weekly differential backups of the application servers

  • Requiring multi-factor authentication for all privileged user accounts

  • Encrypting local audit files with the application's TLS certificate

  • Centralized, write-once log collection stored on a server where only security administrators have append-only rights

Question 7 of 20

Your team is refactoring a monolithic payment application into small containerized microservices deployed in separate subnets across two cloud regions. From a defense-in-depth perspective, which benefit of this distributed design most directly reduces risk if the checkout microservice is breached?

  • Blue/green deployments guarantee uninterrupted availability during version rollouts.

  • Internal traffic can stay unencrypted, eliminating the overhead of TLS inside the cluster.

  • Horizontal auto-scaling lets the platform absorb sudden spikes in legitimate or malicious traffic without manual action.

  • Isolation of each function limits lateral movement through service-level identities and network policies.

Question 8 of 20

A Linux microservice must accept HTTP connections on port 80, then parse user-supplied files. To honor the runtime least-privilege principle, which implementation approach is most appropriate?

  • Run the entire service as root but restrict outbound traffic with iptables rules.

  • Start as root solely to bind to port 80, then immediately setuid to a non-privileged service account before handling any requests.

  • Launch the service as an unprivileged user and use sudo each time it needs to write its log file.

  • Execute the microservice as root inside a Docker container, relying on container isolation for protection.
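The bind-then-drop pattern referenced in this question can be sketched in Python. This is an illustrative sketch, assuming the process starts privileged (as root or with CAP_NET_BIND_SERVICE); the port and account IDs are placeholders:

```python
import os
import socket

def bind_then_drop(port: int, uid: int, gid: int) -> socket.socket:
    """Bind the listening socket while still privileged, then permanently
    shed those privileges before any user-supplied data is parsed."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))   # ports < 1024 need root/CAP_NET_BIND_SERVICE
    srv.listen(16)
    if os.getuid() == 0:
        os.setgroups([])          # clear supplementary groups while still root
    os.setgid(gid)                # group first: impossible after setuid()
    os.setuid(uid)                # irreversibly drop to the service account
    return srv
```

After this returns, the process cannot regain root even if the file parser it runs next is compromised, which is exactly the window the principle is meant to close.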

Question 9 of 20

During a security requirements workshop for a European e-commerce application that will store customer details and cardholder data, a developer asks how GDPR and PCI DSS differ in their authority. Which statement correctly identifies the governing source for each set of requirements?

  • Both GDPR and PCI DSS are voluntary frameworks with no contractual or legal penalties for non-compliance.

  • PCI DSS governs all personal data across Europe, and GDPR applies only to cardholder data handled worldwide.

  • GDPR is an industry guideline without legal force; PCI DSS is US federal law enforced by regulators.

  • GDPR is binding EU legislation, while PCI DSS is an industry standard defined by payment card brands.

Question 10 of 20

During a production release, your organization wants to enforce segregation of duties through multi-party control. Which of the following practices BEST meets this goal?

  • Developers are blocked from viewing production logs unless they open a support ticket.

  • The CI/CD pipeline automatically deploys code to production once automated tests succeed.

  • A single release manager uses their personal hardware security token to sign and deploy the build.

  • Two engineers must independently authenticate to reveal separate portions of the production signing key before code can be signed.
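The split-key idea in the last option (no single person holds the whole signing secret) can be illustrated with simple XOR secret splitting. This is a conceptual sketch, not a production key-management scheme:

```python
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """Split a secret into two shares; each share alone is uniform noise."""
    share_a = secrets.token_bytes(len(key))
    share_b = bytes(x ^ y for x, y in zip(key, share_a))
    return share_a, share_b

def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
    """Both holders must present their shares to reconstruct the key."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))
```

Real deployments use an HSM or a threshold scheme such as Shamir's secret sharing, but the control objective is the same: signing requires more than one person.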

Question 11 of 20

An auditor reviewing your software release process notes that compiled installers are distributed to customers via an HTTPS-protected portal. Customers often move the files onto isolated production networks before installation. Which security principle is still insufficiently protected, and what control would best mitigate the gap?

  • Encrypt the binaries with AES-256 before storing them on the portal to strengthen confidentiality.

  • Digitally sign each release package with the organization's private code-signing certificate to preserve integrity.

  • Require mutual TLS with client certificates for all customer downloads to enhance authentication.

  • Replicate the download portal across multiple geographic regions using a CDN to improve availability.

Question 12 of 20

Your e-commerce platform runs two application servers in an active-passive cluster behind a health-checking load balancer. During a resilience test you hard-stop the active node; traffic is quickly redirected but customers are forced to log in again and open carts are lost. Which change would most effectively increase service availability during such failovers?

  • Increase the load balancer's health-check interval to reduce the likelihood of unnecessary failovers.

  • Move the passive node to a different network segment to add geographic diversity.

  • Replace the load balancer with round-robin DNS to distribute connections across both servers.

  • Enable stateful session replication (or a shared session store) so both nodes maintain identical user sessions in real time.

Question 13 of 20

Your development team publishes nightly builds of a CLI tool and a separate text file containing the SHA-256 checksum for each binary. Before installation, users are told to calculate the checksum locally and compare it to the published value. Which security objective is primarily being addressed?

  • Non-repudiation by providing verifiable proof of the publisher's identity

  • Availability of the binary by enabling redundant download sources

  • Confidentiality of the binary by ensuring it cannot be read in transit

  • Integrity of the binary by detecting any unauthorized modification

Question 14 of 20

During threat modeling, your team identifies that the order-processing microservice depends on an external OAuth token validator. To satisfy the fail-secure design principle, how should the microservice behave if the validator becomes unreachable at run time?

  • Reject every incoming request until the validator is reachable again, returning an authorization failure.

  • Accept requests that present tokens found in a five-minute cache while logging a warning.

  • Queue the requests and complete them after connectivity is restored, skipping a second token check.

  • Temporarily bypass authentication but record detailed audit logs for later review.
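The fail-secure pattern this question probes is short to express in code: treat any failure of the security mechanism itself as a denial. A minimal Python sketch, assuming a hypothetical `validate` callback that contacts the external validator:

```python
def authorize_request(token: str, validate) -> bool:
    """Fail secure: if the token validator errors or is unreachable,
    deny the request rather than waving it through."""
    try:
        return bool(validate(token))
    except Exception:
        # Network timeout, DNS failure, validator crash: all deny.
        return False
```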

Question 15 of 20

During a quarterly security audit of your cloud IAM, you discover that several service accounts have accumulated legacy permissions unrelated to their current functions. Which corrective measure most directly addresses the problem of excessive entitlements?

  • Enable server-side encryption of all storage buckets and rotate the encryption keys annually.

  • Replicate the source-code repository to a second region to improve disaster recovery capabilities.

  • Run scheduled access-certification campaigns that require managers to review and explicitly approve or revoke every entitlement for each account.

  • Enforce VPN and multifactor authentication for all service accounts before they can access the environment.

Question 16 of 20

Your team is designing an API for an electronic health record system hosted in a cloud environment. All users authenticate via SAML SSO. The security policy states that any licensed physician may read any patient record, but only the record's attending physician may modify it. Which authorization model best satisfies this requirement at the API layer?

  • Role-Based Access Control (RBAC) with static physician and nurse roles

  • Attribute-Based Access Control (ABAC) policies evaluated by the API gateway

  • Mandatory Access Control (MAC) that labels each patient record with a fixed sensitivity level

  • Discretionary Access Control (DAC) allowing physicians to maintain access control lists on their patients' records
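The attribute-based rule described in the scenario (any licensed physician may read; only the attending physician may modify) is straightforward to express as a policy function evaluated per request. A sketch with made-up attribute names:

```python
def is_permitted(action: str, subject: dict, record: dict) -> bool:
    """Evaluate the policy from subject and resource attributes."""
    licensed_physician = (subject.get("role") == "physician"
                          and subject.get("licensed") is True)
    if action == "read":
        return licensed_physician
    if action == "modify":
        # Modification additionally requires being the attending physician.
        return licensed_physician and subject.get("id") == record.get("attending_id")
    return False  # default deny for unknown actions
```

Because the decision hinges on run-time attributes of both the subject and the record rather than on a static role alone, this style of rule cannot be captured by fixed roles by themselves.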

Question 17 of 20

During a threat modeling workshop, the business owner states that after a critical outage the application must be fully operational again within two hours, and no more than ten minutes of transactional data may be lost. Which pair of non-functional continuity metrics should the security architect document to capture these requirements?

  • Maximum Tolerable Downtime (MTD) and Service Level Objective (SLO)

  • Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR)

  • Mean Time To Failure (MTTF) and Mean Time To Detect (MTTD)

  • Recovery Time Objective (RTO) and Recovery Point Objective (RPO)

Question 18 of 20

Your SaaS platform runs a stateless REST API on two cloud VMs behind a load balancer. Marketing drives unpredictable traffic surges that overload the servers and cause 503 errors. The company uses pay-as-you-go billing and wants minimal operations effort. Which architecture change most improves availability through scalability?

  • Replace the two VMs with one larger VM that has double the CPU and memory

  • Create an auto-scaling group that launches extra VM instances when average CPU exceeds a defined threshold

  • Enable strict rate limiting on the load balancer to drop requests above a set threshold

  • Schedule nightly image backups of the existing VMs to an off-site repository

Question 19 of 20

While planning the audit subsystem for a new payment-processing microservice, the security architect must ensure investigators can later reconstruct a precise sequence of user actions for accountability purposes. Which control MOST directly supports this requirement?

  • Anonymize user identifiers in logs before forwarding them to the SIEM.

  • Compress and archive all logs to offline storage after 24 hours.

  • Purge high-volume debug logs daily to conserve local disk space.

  • Synchronize every host and container to a trusted time source and timestamp each log entry.

Question 20 of 20

A DevSecOps team must enable employees to access five internal microservices. To apply the economy of mechanism principle while also reducing password fatigue for users, which authentication approach is MOST appropriate?

  • Require users to maintain separate usernames and passwords for every microservice but enforce identical password complexity rules.

  • Integrate all services with a centralized SAML/OIDC-based single sign-on service provided by a well-vetted identity provider.

  • Store each user's credentials in an encrypted environment file on every host and have services read the file locally at startup.

  • Develop a distinct custom authentication library for each microservice, using different encryption algorithms to diversify defenses.