ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test
Use the form below to configure your ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Information
What is the CSSLP Certification
The Certified Secure Software Lifecycle Professional (CSSLP) from ISC2 validates that a software professional can integrate security best practices into every phase of the development life cycle. While many security credentials focus on infrastructure or operations, CSSLP zeroes in on building security in from the first requirements workshop through retirement of an application. Holding the certification signals to employers and customers that you can help reduce vulnerabilities, meet compliance mandates, and ultimately ship more resilient software.
How the Exam Is Structured
The current CSSLP exam is a computer-based test containing 125 multiple-choice questions delivered over a three-hour session. A scaled score of 700 out of 1,000 is required to pass. Content is distributed across eight domains that mirror the secure software development life cycle: 1) Secure Software Concepts, 2) Secure Software Lifecycle Management, 3) Secure Software Requirements, 4) Secure Software Architecture and Design, 5) Secure Software Implementation, 6) Secure Software Testing, 7) Secure Software Deployment, Operations, and Maintenance, and 8) Secure Software Supply Chain. Because any topic in these domains is fair game, candidates need both breadth and depth of knowledge across process models, threat modeling, secure coding, DevSecOps pipelines, and supply-chain risk management.
The Power of Practice Exams
One of the most effective ways to close a knowledge gap and build exam-day confidence is to take high-quality practice exams. Timed drills acclimate you to the three-hour pacing and help you learn how long you can spend on each question before moving on. Equally important, comprehensive explanations (not just answer keys) reveal why a particular choice is correct, which deepens conceptual understanding and highlights recurring exam patterns. Aim to review every explanation, even for the questions you answer correctly, to reinforce core principles and discover alternate ways a concept can be tested. Track scores over multiple attempts; trending upward is a reliable indicator that your study plan is working.
Preparation Tips
Begin your study schedule at least eight to twelve weeks out, mapping the official ISC2 exam outline to specific learning resources such as the ISC2 CSSLP CBK, OWASP documentation, and language-specific secure-coding references. After you've covered each domain, fold in practice exams and use their analytics to guide targeted review sessions. In the final two weeks, simulate the exam environment: mute notifications, sit for a full three-hour block, and practice reading every question twice before locking in an answer. Coupled with real-world experience and a disciplined study routine, these strategies position you to walk into the testing center, and out with the CSSLP credential, on your first attempt.

Free ISC2 Certified Secure Software Lifecycle Professional (CSSLP) Practice Test
- 20 Questions
- Unlimited time
- Secure Software Concepts
- Secure Software Lifecycle Management
- Secure Software Requirements
- Secure Software Architecture and Design
- Secure Software Implementation
- Secure Software Testing
- Secure Software Deployment, Operations, Maintenance
- Secure Software Supply Chain
Free Preview
This test is a free preview, no account required.
Subscribe to unlock all content, keep track of your scores, and access AI features!
During a code-review of an analytics platform, you discover that every data analyst logs in through a single database account with read permission to all schemas. Which change would most directly apply the need-to-know aspect of least privilege for these analysts?
Issue each analyst a personal account restricted to only the specific schemas they require and set the grants to expire after the project milestone.
Move the production database to a read-only replica that all analysts can query, keeping the shared account.
Enable transparent data encryption on every tablespace while continuing to use the shared account for access.
Require analysts to justify access in the ticketing system but still authenticate through the common account.
Answer Description
Need-to-know limits access to the exact information a person requires and no more. Issuing each analyst a unique account that is restricted to only the schemas they actively support enforces that principle; it prevents them from viewing data outside their job scope and permits easy revocation when their role or project ends. Simply moving data to a replica, adding encryption, or requiring justification without changing the overly broad shared credential all leave universal access in place, so they do not satisfy need-to-know.
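As a minimal sketch of what the correct option looks like in practice (assuming a PostgreSQL database and the psycopg2 driver; the role, schema, and connection string are illustrative), each analyst's personal account would receive grants scoped to a single schema:

```python
# Illustrative sketch: grant one analyst read-only access to a single schema.
# Assumes PostgreSQL and psycopg2; role, schema, and DSN are hypothetical.
import psycopg2
from psycopg2 import sql

def grant_schema_read(dsn: str, analyst_role: str, schema: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql.SQL("GRANT USAGE ON SCHEMA {} TO {}").format(
            sql.Identifier(schema), sql.Identifier(analyst_role)))
        cur.execute(sql.SQL("GRANT SELECT ON ALL TABLES IN SCHEMA {} TO {}").format(
            sql.Identifier(schema), sql.Identifier(analyst_role)))

# Example: grant_schema_read("dbname=analytics", "analyst_jdoe", "marketing")
```

Revoking the same grants when the project milestone passes keeps entitlements aligned with need-to-know.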
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the principle of least privilege?
Why is logging in with shared accounts a security risk?
How do schema-level permissions enhance data security?
Your SaaS platform runs a stateless REST API on container clusters in a single public cloud availability zone. Management asks for greater resiliency so that customer requests continue with minimal disruption if the zone becomes unavailable. Which architectural change best satisfies this requirement while aligning with high-availability design principles?
Schedule nightly database and filesystem backups to object storage in a different region for disaster recovery.
Enable horizontal pod auto-scaling inside the existing zone to add containers when CPU usage peaks.
Place the API behind a global CDN to cache responses and reduce latency for end users.
Deploy identical API clusters in a second availability zone and load-balance traffic across both zones using health checks.
Answer Description
Running identical, health-checked instances of the API in at least two availability zones and distributing traffic across them creates an active-active topology. Because requests can be served by either zone at any moment, the loss of one zone is masked by automatic failover at the load balancer, keeping service interruption to only the brief time needed to detect the outage. Horizontal auto-scaling in one zone mitigates load spikes but not zone failure. Nightly backups help with disaster recovery, yet restoring from them can take hours and does not keep the service running. A CDN may cache static content, but dynamic API calls still rely on the origin and will fail if the sole zone is down.
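The health checks in the correct option rely on each instance exposing a lightweight probe endpoint. A minimal sketch, using only Python's standard library (the path and port are illustrative):

```python
# Minimal /healthz endpoint a cross-zone load balancer can probe before
# routing traffic to this instance. Path and port are illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```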
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an availability zone in cloud computing?
What does it mean for a system to be stateless, and why is it important for REST APIs?
How does a health check contribute to high availability in cloud architectures?
During a build pipeline, you need to confirm that a 50 KB JSON configuration file checked into source control has not been tampered with before it is packaged into the container image. Which approach provides the most reliable, automated proof of the file's integrity?
Encrypt the file with AES-256 and verify that decryption succeeds before packaging.
Store the file on a RAID 1 volume to guarantee bit-level consistency.
Compute a SHA-256 digest of the file during each run and compare it to the previously stored baseline hash.
Compress the file with gzip and compare the resulting archive size to yesterday's build.
Answer Description
A cryptographic hash such as SHA-256 produces a fixed-length digest that changes unpredictably if even one bit of the file is altered. Storing a baseline digest and recomputing it during each build allows an automated comparison that will immediately reveal unauthorized modification. Encrypting the file only hides its contents and does not detect changes; successful decryption simply proves the key was correct. RAID 1 protects against disk failure, not deliberate or accidental file edits. Comparing compressed file sizes is unreliable because different content can produce the same size, and gzip does not provide integrity assurance.
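A minimal sketch of that pipeline step, assuming Python is available in the build environment (the file names are illustrative):

```python
# Recompute the file's SHA-256 digest and compare it to the stored baseline.
# The build step fails if the configuration file has changed in any way.
import hashlib, sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    baseline = open("config.json.sha256").read().strip()
    if sha256_of("config.json") != baseline:
        sys.exit("Integrity check failed: config.json does not match baseline")
```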
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is SHA-256 and why is it used for file integrity?
Why is encryption like AES-256 not appropriate for detecting file tampering?
What are the limitations of RAID 1 and gzip for file integrity verification?
During a secure-update project, you must ensure that desktop clients can verify both the origin and the untampered state of each new executable they download from the company's server. Which control is the most effective way to meet this requirement?
Apply code obfuscation to make the binary harder to analyze
Digitally sign each release with a trusted code-signing certificate and have clients verify the signature before execution
Publish a separate SHA-256 checksum that users can compare after downloading
Distribute the files over HTTPS to prevent interception during download
Answer Description
Code signing applies a digital signature, created with the publisher's private key, to the hash of an executable. When clients download the file, their systems use the corresponding public certificate to validate the signature. If the file has been altered or the signer is untrusted, verification fails, preventing execution. Transport encryption (e.g., HTTPS) protects data only while in transit, hashes published on a web page do not prove who created the code, and obfuscation merely complicates reverse engineering without assuring integrity or authenticity.
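For illustration only, here is a sketch of the client-side check using the pyca/cryptography package with an RSA key and a detached signature file; real desktop platforms typically rely on built-in code-signing tooling (for example, Authenticode on Windows) rather than hand-rolled verification:

```python
# Hypothetical sketch: verify a detached RSA signature over a release binary.
# File names, the key format, and the signing scheme are illustrative.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_release(binary_path: str, sig_path: str, pub_key_pem: str) -> bool:
    with open(pub_key_pem, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(binary_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        # Fails if the binary was modified or signed by a different key.
        public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```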
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is code signing and how does it ensure the integrity of an executable?
How is verifying a SHA-256 checksum different from code signing?
Why doesn’t HTTPS alone guarantee the authenticity of executable files?
Your organization hosts a public REST API on a single virtual machine in one cloud availability zone. Load testing shows CPU saturation during peak usage, and planned reboots cause several minutes of outage. To align with the availability principle of redundancy, which architectural change will most effectively increase service uptime?
Deploy identical API instances in multiple availability zones behind an auto-scaling load balancer.
Enable host-based firewall rules that rate-limit all incoming requests above a defined threshold.
Upgrade the existing virtual machine to a larger compute class with more CPU and memory.
Schedule daily encrypted snapshots of the virtual machine to remote object storage.
Answer Description
Placing the API behind a load balancer and running identical instances in at least two availability zones introduces both horizontal scaling and geographic redundancy. If one instance fails or a zone becomes unavailable, traffic is automatically routed to healthy nodes, maintaining uptime and eliminating the single point of failure. Simply upgrading the VM adds capacity but still leaves one host and one zone vulnerable. Daily snapshots protect data but do not keep the service running when the VM is down. Host-based rate limiting may reduce abuse but does not address component failure or saturation, so overall availability remains largely unchanged.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a load balancer, and how does it increase redundancy?
What are availability zones in cloud environments?
How does auto-scaling improve service availability?
During a forensic investigation of an unauthorized database dump, the response team discovers that every application node keeps its audit records in plain-text files on the local disk and developers can alter or delete those files at will. Which control, if it had been implemented, would have most directly preserved accountability for the actions that led to the breach?
Scheduling weekly differential backups of the application servers
Requiring multi-factor authentication for all privileged user accounts
Encrypting local audit files with the application's TLS certificate
Centralized, write-once log collection stored on a server where only security administrators have append-only rights
Answer Description
Accountability relies on the ability to attribute each action to a specific identity and to prove that the recorded evidence has not been altered. Storing logs locally, where the same users who generated the events can modify or erase them, breaks that chain of evidence. A centrally managed, write-once (immutable) logging solution with tightly restricted administrative access protects the integrity and availability of audit records, allowing investigators to trace events back to responsible parties. Encrypting audit files adds confidentiality but does not stop authorized users from deleting or rewriting them. Weekly server backups and multi-factor authentication are valuable practices, yet neither specifically ensures that log evidence remains tamper-proof and attributable throughout the system's life cycle.
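Shipping events off the host as they occur is the application's half of that control; the collector enforces append-only, write-once retention. A minimal forwarding sketch using Python's standard logging module (the collector address is hypothetical):

```python
# Ship audit events to a central collector as they occur; the collector (not
# the application host) enforces append-only, write-once retention.
import logging
from logging.handlers import SysLogHandler

audit = logging.getLogger("app.audit")
audit.setLevel(logging.INFO)
audit.addHandler(SysLogHandler(address=("logs.example.internal", 514)))

audit.info("user=analyst42 action=export table=payments rows=1200")
```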
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is centralized, write-once log collection essential for preserving accountability?
What does 'append-only rights' mean and how do they prevent log tampering?
How does encrypting audit logs differ from centralized logging for accountability?
Your team is refactoring a monolithic payment application into small containerized microservices deployed in separate subnets across two cloud regions. From a defense in depth perspective, which benefit of this distributed design most directly reduces risk if the checkout microservice is breached?
Blue/green deployments guarantee uninterrupted availability during version rollouts.
Internal traffic can stay unencrypted, eliminating the overhead of TLS inside the cluster.
Horizontal auto-scaling lets the platform absorb sudden spikes in legitimate or malicious traffic without manual action.
Isolation of each function limits lateral movement through service-level identities and network policies.
Answer Description
Breaking a monolith into independently deployed microservices allows each component to run with its own identity, network segment, and access policies. When one service is compromised, those boundaries limit an attacker's ability to pivot to other data or functions, thereby shrinking the blast radius and adding another layer to the overall security stack. The other choices describe operational characteristics or trade-offs (skipping internal TLS, blue/green deployments, and auto-scaling), but none of them provides an additional security layer that contains a successful attack.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'defense in depth' mean in cybersecurity?
What are microservices, and how are they different from a monolithic architecture?
Why is limiting lateral movement important in cybersecurity?
A Linux microservice must accept HTTP connections on port 80, then parse user-supplied files. To honor the runtime least-privilege principle, which implementation approach is most appropriate?
Run the entire service as root but restrict outbound traffic with iptables rules.
Start as root solely to bind to port 80, then immediately setuid to a non-privileged service account before handling any requests.
Launch the service as an unprivileged user and use sudo each time it needs to write its log file.
Execute the microservice as root inside a Docker container, relying on container isolation for protection.
Answer Description
Binding to a port below 1024 requires root (or the CAP_NET_BIND_SERVICE capability). Once the privileged action is complete, continuing execution as root increases the blast radius of any flaw in the file-parsing logic. Dropping privileges immediately after binding, by calling setuid() to switch to a dedicated, unprivileged service account, limits what the process can do if it is compromised. Firewalls, containers, or intermittent sudo do not remove the broader privileges held by the running process and therefore do not satisfy the requirement as effectively.
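A minimal sketch of the bind-then-drop sequence in Python (the service account name is illustrative):

```python
# Sketch: bind to a privileged port as root, then drop to an unprivileged
# account before handling any requests. The "svc-parser" account is assumed.
import os, pwd, socket

def bind_then_drop(user: str = "svc-parser", port: int = 80) -> socket.socket:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))      # requires root or CAP_NET_BIND_SERVICE
    srv.listen(128)

    pw = pwd.getpwnam(user)
    os.setgroups([])                 # shed supplementary groups first
    os.setgid(pw.pw_gid)             # drop group before user, or setgid fails
    os.setuid(pw.pw_uid)             # irreversible switch to the service account
    return srv
```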
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the least-privilege principle in security?
What does setuid() do in Linux?
Why do you need root privileges to bind to port 80 in Linux?
During a security requirements workshop for a European e-commerce application that will store customer details and cardholder data, a developer asks how GDPR and PCI DSS differ in their authority. Which statement correctly identifies the governing source for each set of requirements?
Both GDPR and PCI DSS are voluntary frameworks with no contractual or legal penalties for non-compliance.
PCI DSS governs all personal data across Europe, and GDPR applies only to cardholder data handled worldwide.
GDPR is an industry guideline without legal force; PCI DSS is US federal law enforced by regulators.
GDPR is binding EU legislation, while PCI DSS is an industry standard defined by payment card brands.
Answer Description
GDPR is Regulation (EU) 2016/679, an enforceable law passed by the European Union and applicable to any organization processing the personal data of EU residents. PCI DSS, by contrast, is an industry standard created and contractually enforced by the major payment card brands through the PCI Security Standards Council. Non-compliance with GDPR can lead to regulatory fines, whereas failure to meet PCI DSS can result in loss of card-processing privileges and other contractual penalties. The other statements either reverse these roles, mislabel both frameworks as voluntary, or misstate their scope.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is GDPR and how is it enforced?
What are the key components of PCI DSS?
How do GDPR and PCI DSS differ in scope?
During a production release, your organization wants to enforce segregation of duties through multi-party control. Which of the following practices BEST meets this goal?
Developers are blocked from viewing production logs unless they open a support ticket.
The CI/CD pipeline automatically deploys code to production once automated tests succeed.
A single release manager uses their personal hardware security token to sign and deploy the build.
Two engineers must independently authenticate to reveal separate portions of the production signing key before code can be signed.
Answer Description
Multi-party control requires more than one individual to authorize or perform a critical action. Requiring two engineers to supply separate authentications that each reveal part of the signing key enforces dual control (split knowledge), meaning no single person can sign and release code alone. A sole release manager with a hardware token, automated deployment without human approval, or restricting access to production logs do not involve multiple people jointly authorizing the critical signing action and therefore do not satisfy multi-party control for segregation of duties.
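As a toy illustration of split knowledge (production systems normally rely on an HSM's M-of-N key ceremony rather than hand-rolled splitting), a signing key can be divided so that neither engineer's share reveals anything on its own:

```python
# Illustrative two-party split of a signing key using XOR secret splitting.
# Neither share alone reveals the key; both engineers must supply theirs.
import os

def split_key(key: bytes) -> tuple[bytes, bytes]:
    share_a = os.urandom(len(key))                        # engineer A's share
    share_b = bytes(k ^ a for k, a in zip(key, share_a))  # engineer B's share
    return share_a, share_b

def recombine(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))
```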
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is multi-party control, and why is it important in production releases?
What is dual control or split knowledge in security practices?
How does enforcing segregation of duties protect the software development lifecycle?
An auditor reviewing your software release process notes that compiled installers are distributed to customers via an HTTPS-protected portal. Customers often move the files onto isolated production networks before installation. Which security principle is still insufficiently protected, and what control would best mitigate the gap?
Encrypt the binaries with AES-256 before storing them on the portal to strengthen confidentiality.
Digitally sign each release package with the organization's private code-signing certificate to preserve integrity.
Require mutual TLS with client certificates for all customer downloads to enhance authentication.
Replicate the download portal across multiple geographic regions using a CDN to improve availability.
Answer Description
HTTPS already provides confidentiality and integrity while the files are in transit, and authenticated access to the portal establishes who initiated the download. Once the binaries leave the portal, however, customers have no cryptographic proof that they remain unaltered or truly originated from the publisher. That open issue concerns the principle of integrity (with authenticity and non-repudiation as collateral benefits). Applying a digital code-signing certificate to each release embeds a signature created with the publisher's private key. Customers can later validate the signature with the corresponding public key, even on an offline or air-gapped network, to confirm the software has not been modified. Strengthening portal authentication, storing the files encrypted, or adding redundancy helps other principles (confidentiality, availability) but does not allow customers to detect post-download tampering.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a code-signing certificate, and how does it ensure integrity?
How can customers verify the authenticity of code-signed software on air-gapped or offline networks?
What are the benefits of using HTTPS for file delivery, and why is it insufficient for integrity?
Your e-commerce platform runs two application servers in an active-passive cluster behind a health-checking load balancer. During a resilience test you hard-stop the active node; traffic is quickly redirected but customers are forced to log in again and open carts are lost. Which change would most effectively increase service availability during such failovers?
Increase the load balancer's health-check interval to reduce the likelihood of unnecessary failovers.
Move the passive node to a different network segment to add geographic diversity.
Replace the load balancer with round-robin DNS to distribute connections across both servers.
Enable stateful session replication (or a shared session store) so both nodes maintain identical user sessions in real time.
Answer Description
The disruption occurs because user and transaction state exists only in the memory of the failed node. Enabling stateful session replication (or a shared session store) copies each user's session data to the standby server in real time. When the primary node goes offline, the remaining cluster member already holds current session information, so users continue without interruption. Lengthening health-check intervals would delay detection, round-robin DNS lacks failure awareness, and relocating the passive node improves fault isolation but does not preserve in-memory session state. Only session replication directly eliminates the observed logouts and data loss, thereby enhancing high availability through seamless node takeover.
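A minimal sketch of the shared-session-store approach, assuming Redis and the redis-py client (the host name, key format, and TTL are illustrative):

```python
# Sketch: keep session state in a shared store so any node can serve a user
# after failover. Host name, key format, and TTL are illustrative.
import json, uuid
import redis

r = redis.Redis(host="session-store.internal", port=6379)

def save_session(data: dict, ttl_seconds: int = 1800) -> str:
    sid = str(uuid.uuid4())
    r.setex(f"session:{sid}", ttl_seconds, json.dumps(data))
    return sid

def load_session(sid: str) -> dict | None:
    raw = r.get(f"session:{sid}")
    return json.loads(raw) if raw else None
```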
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is stateful session replication?
Why does increasing the load balancer's health-check interval not solve the issue?
How does a shared session store work compared to replication?
Your development team publishes nightly builds of a CLI tool and a separate text file containing the SHA-256 checksum for each binary. Before installation, users are told to calculate the checksum locally and compare it to the published value. Which security objective is primarily being addressed?
Non-repudiation by providing verifiable proof of the publisher's identity
Availability of the binary by enabling redundant download sources
Confidentiality of the binary by ensuring it cannot be read in transit
Integrity of the binary by detecting any unauthorized modification
Answer Description
A cryptographic hash such as SHA-256 produces a unique, fixed-length digest of the binary. Any bit-level change, whether accidental corruption or malicious tampering, will create a different digest, so comparing the locally calculated value to the one supplied by the publisher detects alteration. This protects the integrity of the software. Hashes alone do not hide the contents (confidentiality), prove who published the file (non-repudiation or authenticity), or make the file more accessible (availability).
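From the user's side, the published-checksum comparison might look like this minimal sketch (file names are illustrative):

```python
# Verify a downloaded binary against its published SHA-256 checksum file.
import hashlib, hmac

def matches_published(binary_path: str, checksum_path: str) -> bool:
    with open(binary_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    published = open(checksum_path).read().split()[0].strip()
    return hmac.compare_digest(digest, published)
```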
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a cryptographic hash function like SHA-256?
How does checksum verification protect software integrity?
Why is SHA-256 used instead of simpler checksum methods?
During threat modeling, your team identifies that the order-processing microservice depends on an external OAuth token validator. To satisfy the fail secure design principle, how should the microservice behave if the validator becomes unreachable at run time?
Reject every incoming request until the validator is reachable again, returning an authorization failure.
Accept requests that present tokens found in a five-minute cache while logging a warning.
Queue the requests and complete them after connectivity is restored, skipping a second token check.
Temporarily bypass authentication but record detailed audit logs for later review.
Answer Description
Fail secure (fail-safe) means the system should default to the most secure state when a required component fails. Rejecting every request that cannot be positively authenticated ensures no unauthorized transactions occur. Allowing cached tokens, disabling authentication, or processing queued requests without revalidation all leave paths for compromise and violate fail secure requirements.
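A minimal sketch of the fail-secure behavior, assuming the requests library and a hypothetical token-introspection endpoint:

```python
# Sketch: fail secure when the external token validator is unreachable.
# The introspection endpoint URL is illustrative.
import requests

VALIDATOR_URL = "https://auth.example.internal/introspect"  # assumed endpoint

def is_token_valid(token: str) -> bool:
    try:
        resp = requests.post(VALIDATOR_URL, data={"token": token}, timeout=2)
        return resp.ok and resp.json().get("active", False)
    except requests.RequestException:
        # Validator unreachable: default to deny (fail secure), never to allow.
        return False
```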
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'fail secure' design principle mean?
How does OAuth function in authentication processes?
Why is caching tokens risky in fail secure scenarios?
During a quarterly security audit of your cloud IAM, you discover that several service accounts have accumulated legacy permissions unrelated to their current functions. Which corrective measure most directly addresses the problem of excessive entitlements?
Enable server-side encryption of all storage buckets and rotate the encryption keys annually.
Replicate the source-code repository to a second region to improve disaster recovery capabilities.
Run scheduled access-certification campaigns that require managers to review and explicitly approve or revoke every entitlement for each account.
Enforce VPN and multifactor authentication for all service accounts before they can access the environment.
Answer Description
Excessive or outdated permissions are an entitlement management issue. Periodic access-certification (attestation) campaigns force resource owners or managers to review every permission granted to each identity and revoke those no longer needed, keeping entitlements aligned with job duties. Multifactor logins, encryption, and repository replication all improve security or availability in other areas, but they do not reduce unnecessary privileges that have already been granted.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an access-certification campaign?
Why are excessive entitlements considered a security risk?
How do managers decide which entitlements to revoke during a review?
Your team is designing an API for an electronic health record system hosted in a cloud environment. All users authenticate via SAML SSO. The security policy states that any licensed physician may read any patient record, but only the record's attending physician may modify it. Which authorization model best satisfies this requirement at the API layer?
Role-Based Access Control (RBAC) with static physician and nurse roles
Attribute-Based Access Control (ABAC) policies evaluated by the API gateway
Mandatory Access Control (MAC) that labels each patient record with a fixed sensitivity level
Discretionary Access Control (DAC) allowing physicians to maintain access control lists on their patients' records
Answer Description
Attribute-Based Access Control evaluates attributes about the subject (e.g., role=physician, userID), the object (e.g., attendingPhysicianID on the record), and the requested action (read or write). A policy can therefore permit read access to all users whose role attribute equals physician while restricting write access to those whose userID matches the record's attendingPhysicianID. Traditional role-based access control cannot easily express this per-record condition without creating an explosion of roles, discretionary access control would rely on individual users to manage ACLs, and mandatory access control uses fixed classification labels that do not reflect dynamic doctor-patient assignments. Hence, ABAC is the most appropriate choice.
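A minimal sketch of such an ABAC decision, as it might be evaluated at the gateway before the handler runs (the attribute names are illustrative):

```python
# Minimal ABAC-style policy check evaluated before the API handler runs.
# Attribute names (role, user_id, attending_physician_id) are illustrative.
def is_authorized(subject: dict, record: dict, action: str) -> bool:
    if subject.get("role") != "physician":
        return False
    if action == "read":
        return True          # any licensed physician may read any record
    if action == "write":
        # only the record's attending physician may modify it
        return subject.get("user_id") == record.get("attending_physician_id")
    return False
```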
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Attribute-Based Access Control (ABAC)?
How does SAML SSO integrate with ABAC in API designs?
Why is ABAC better than RBAC for this healthcare scenario?
During a threat modeling workshop, the business owner states that after a critical outage the application must be fully operational again within two hours, and no more than ten minutes of transactional data may be lost. Which pair of non-functional continuity metrics should the security architect document to capture these requirements?
Maximum Tolerable Downtime (MTD) and Service Level Objective (SLO)
Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR)
Mean Time To Failure (MTTF) and Mean Time To Detect (MTTD)
Recovery Time Objective (RTO) and Recovery Point Objective (RPO)
Answer Description
The maximum acceptable time to restore service is captured by the Recovery Time Objective (RTO), while the maximum allowable period of data loss is captured by the Recovery Point Objective (RPO). Both metrics belong in the non-functional continuity section of the software security requirements. MTBF and MTTR describe reliability and repair metrics, MTD with SLO mixes disaster tolerance with performance targets, and MTTF with MTTD address failure occurrence and detection time. None of those pairs simultaneously conveys the allowed downtime and data-loss window required by the business owner.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Recovery Time Objective (RTO)?
What is the Recovery Point Objective (RPO)?
How are RTO and RPO different from MTBF and MTTR?
Your SaaS platform runs a stateless REST API on two cloud VMs behind a load balancer. Marketing drives unpredictable traffic surges that overload the servers and cause 503 errors. The company uses pay-as-you-go billing and wants minimal operations effort. Which architecture change most improves availability through scalability?
Replace the two VMs with one larger VM that has double the CPU and memory
Create an auto-scaling group that launches extra VM instances when average CPU exceeds a defined threshold
Enable strict rate limiting on the load balancer to drop requests above a set threshold
Schedule nightly image backups of the existing VMs to an off-site repository
Answer Description
Automatically adding or removing additional server instances based on real-time load is a horizontal scaling technique. When traffic spikes, new instances are created so capacity grows with demand, keeping the service available; when demand drops, instances are terminated, controlling cost and operations effort. Backups enhance recoverability but do not address peak load. Rate limiting protects the servers but intentionally rejects excess requests, reducing availability. Moving to a larger single VM is vertical scaling and still leaves a single point of failure; capacity is fixed once it reaches that VM's limits.
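The scaling decision itself is made by the cloud provider's auto-scaler, but its logic amounts to something like this toy sketch (thresholds and instance limits are illustrative):

```python
# Toy scaling decision: add an instance when average CPU exceeds a threshold,
# remove one when load falls well below it. The metrics source is assumed.
def desired_instance_count(current: int, avg_cpu: float,
                           scale_out_at: float = 70.0,
                           scale_in_at: float = 30.0,
                           minimum: int = 2, maximum: int = 10) -> int:
    if avg_cpu > scale_out_at and current < maximum:
        return current + 1
    if avg_cpu < scale_in_at and current > minimum:
        return current - 1
    return current
```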
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is horizontal scaling in cloud architecture?
How does auto-scaling work in a pay-as-you-go cloud environment?
Why is vertical scaling less effective at handling traffic surges?
While planning the audit subsystem for a new payment-processing microservice, the security architect must ensure investigators can later reconstruct a precise sequence of user actions for accountability purposes. Which control MOST directly supports this requirement?
Anonymize user identifiers in logs before forwarding them to the SIEM.
Compress and archive all logs to offline storage after 24 hours.
Purge high-volume debug logs daily to conserve local disk space.
Synchronize every host and container to a trusted time source and timestamp each log entry.
Answer Description
Accurate, synchronized timestamps let investigators place events from different components on a single, trusted timeline. NIST SP 800-92 notes that without trustworthy time information, log analysis cannot determine the order or relationship of actions, undermining accountability. Compressing or deleting logs affects retention, not chronology. Masking user IDs removes attribution data and actually hinders investigations.
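A minimal sketch of the application side of this control, forcing UTC ISO-8601 timestamps with Python's standard logging module (host clocks still need NTP or chrony synchronization, which this code does not provide):

```python
# Emit log records with UTC, ISO-8601 timestamps so events from many hosts
# line up on one timeline once the hosts themselves are time-synchronized.
import logging, time

logging.Formatter.converter = time.gmtime  # render asctime in UTC
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s.%(msecs)03dZ %(name)s %(levelname)s %(message)s",
    datefmt="%Y-%m-%dT%H:%M:%S",
)
logging.getLogger("payments.audit").info("refund issued for order 1234")
```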
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is synchronizing hosts and containers with a trusted time source important for logging?
What is NIST SP 800-92, and how does it relate to log analysis?
How does anonymizing user identifiers in logs hinder accountability and investigations?
A DevSecOps team must enable employees to access five internal microservices. To apply the economy of mechanism principle while also reducing password fatigue for users, which authentication approach is MOST appropriate?
Require users to maintain separate usernames and passwords for every microservice but enforce identical password complexity rules.
Integrate all services with a centralized SAML/OIDC-based single sign-on service provided by a well-vetted identity provider.
Store each user's credentials in an encrypted environment file on every host and have services read the file locally at startup.
Develop a distinct custom authentication library for each microservice, using different encryption algorithms to diversify defenses.
Answer Description
Economy of mechanism favors the simplest security design that still meets requirements. Reusing a single, battle-tested identity provider to supply SAML or OIDC tokens lets each microservice rely on the same straightforward mechanism, avoids duplicating complex authentication code, and gives users one credential set (SSO). Writing separate custom modules, storing credentials in local files, or forcing unique logins for every service all add unnecessary components or credentials, increasing attack surface and violating the simplicity goal.
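With a shared identity provider, every microservice performs the same small validation step. A minimal sketch assuming the PyJWT library and an RS256-signed OIDC access token (the issuer, audience, and key source are illustrative):

```python
# Validate an OIDC access token issued by the shared identity provider.
# Assumes PyJWT; issuer, audience, and the public key source are illustrative.
import jwt

def validate_token(token: str, public_key_pem: str) -> dict:
    return jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS256"],
        audience="orders-service",
        issuer="https://sso.example.internal",
    )  # raises jwt.InvalidTokenError if expired, tampered, or mis-scoped
```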
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is SAML and OIDC?
How does Single Sign-On (SSO) reduce password fatigue?
Why is the economy of mechanism principle important in security design?
Wow!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.