ISC2 CISSP Practice Test
Certified Information Systems Security Professional
Use the form below to configure your ISC2 CISSP Practice Test. The practice test can be configured to only include certain exam objectives and domains. You can choose between 5-100 questions and set a time limit.

ISC2 CISSP Information
The (ISC)² Certified Information Systems Security Professional (CISSP) exam is one of the most widely recognized credentials in the information security field. It covers an extensive body of knowledge related to cybersecurity, including eight domains: Security and Risk Management, Asset Security, Security Architecture and Engineering, Communication and Network Security, Identity and Access Management, Security Assessment and Testing, Security Operations, and Software Development Security. This broad scope is designed to validate a candidate’s depth and breadth of knowledge in protecting organizations from increasingly complex cyber threats.
Achieving a CISSP certification signals a strong understanding of industry best practices and the ability to design, implement, and manage a comprehensive cybersecurity program. As a result, the exam is often regarded as challenging, requiring both practical experience and intensive study of each domain’s key principles. Many cybersecurity professionals pursue the CISSP to demonstrate their expertise, enhance their credibility, and open doors to higher-level roles such as Security Manager, Security Consultant, or Chief Information Security Officer.
Free ISC2 CISSP Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Security and Risk Management, Asset Security, Security Architecture and Engineering, Communication and Network Security, Identity and Access Management (IAM), Security Assessment and Testing, Security Operations, Software Development Security
A security team needs to evaluate potential security flaws in its newly deployed web application before making it available to customers. Which of the following approaches would be the BEST first step in identifying potential vulnerabilities?
- Conduct a full-scale penetration test with a red team
- Implement a web application firewall
- Perform automated vulnerability scanning against the application
- Review the application's access control matrix
Answer Description
Automated vulnerability scanning is the best first step because it provides a systematic, comprehensive baseline assessment of potential security flaws with minimal disruption to the application. It efficiently identifies common vulnerabilities such as SQL injection, cross-site scripting (XSS), and misconfigurations before proceeding to more resource-intensive and targeted testing methods. The scan results will help prioritize further testing efforts and remediation activities. While the other options are valuable security practices, they either come later in the testing process or address different aspects of security management that wouldn't serve as the most efficient first step for identifying vulnerabilities in a newly deployed web application.
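To make the idea concrete, the toy Python sketch below (using the `requests` library and a hypothetical staging URL) performs a few of the quick, non-intrusive checks that a real scanner automates at far greater breadth and depth. It is an illustration of the concept only, not a substitute for a purpose-built scanning tool.

```python
import requests

# Headers whose absence is a common, easily detected misconfiguration.
SECURITY_HEADERS = ["Content-Security-Policy", "Strict-Transport-Security", "X-Content-Type-Options"]

def baseline_scan(base_url: str) -> dict:
    """Run a handful of lightweight baseline checks against a web application."""
    findings = {}
    response = requests.get(base_url, timeout=10)
    findings["missing_security_headers"] = [h for h in SECURITY_HEADERS if h not in response.headers]
    findings["server_banner_exposed"] = "Server" in response.headers
    # Reflected-input probe: if the marker is echoed back unencoded, output encoding
    # (a basic XSS defense) may be missing and deserves closer inspection.
    marker = "<qa-probe-1337>"
    probe = requests.get(base_url, params={"q": marker}, timeout=10)
    findings["reflects_unencoded_input"] = marker in probe.text
    return findings

print(baseline_scan("https://staging.example.com"))  # hypothetical pre-production URL
```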
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI-generated content may display inaccurate information, so always double-check anything important.
What is automated vulnerability scanning?
What common vulnerabilities can automated vulnerability scanning detect?
Why is it important to conduct an automated vulnerability scan before other security assessments?
A company is implementing an identity integration solution to connect their internal directory services with multiple third-party SaaS applications. The security team requires that all authentication traffic between their systems and external service providers must remain within their corporate network boundary. Which approach would BEST meet this requirement?
- Implementing a credential caching system
- Deploying a cloud-based integration service
- Implementing a local identity proxy
- Configuring a token forwarding mechanism
Answer Description
The correct answer is implementing a local identity proxy. A local identity proxy (sometimes called an identity broker or federation server) is installed within the corporate network and serves as an intermediary between the company's internal identity provider (directory services) and external service providers. This approach allows authentication traffic to be contained within the organization's network boundary because the proxy handles all external communications while maintaining internal connections to the identity store.
The token forwarding mechanism option is incorrect because security tokens are typically sent directly from the identity provider to the service provider, which means authentication traffic would cross the network boundary.
The cloud-based integration service would actually move authentication traffic outside the company's network boundary, which directly contradicts the requirement.
Implementing a credential caching system by itself doesn't necessarily keep authentication traffic within the network boundary - it may reduce the frequency of authentication events but doesn't control where that traffic flows when authentication does occur.
Ask Bash
What is a local identity proxy?
How does an identity proxy handle authentication traffic?
Why is it important to keep authentication traffic within the corporate network?
A data center manager is evaluating fire suppression systems for a newly constructed server room housing critical infrastructure. The primary concern is protecting expensive electronic equipment while ensuring rapid fire suppression with minimal cleanup. Which fire suppression system would be the BEST choice for this environment?
- Dry chemical system using sodium bicarbonate
- Carbon dioxide (CO2) flooding system
- Clean agent system using FM-200 or NOVEC 1230
- Traditional water sprinkler system with pre-action capability
Answer Description
The correct answer is a clean agent system using FM-200 or NOVEC 1230. Clean agent systems are specifically designed for protecting sensitive electronic equipment and valuable assets in data centers and server rooms. They extinguish fires by interrupting the combustion process at the chemical level without leaving residue that could damage electronics. They're safe for human exposure at design concentrations, and they don't conduct electricity, which makes them ideal for electrical fires in data centers.
The other options are less suitable:
- Water sprinkler systems can cause significant water damage to electronic equipment and may lead to electrical hazards.
- CO2 systems are effective but can be lethal to humans if discharged in occupied spaces, requiring extensive safety measures and evacuation protocols.
- Dry chemical systems leave a residue that can damage sensitive electronic components and require extensive cleanup after discharge.
Ask Bash
What are clean agent systems, and how do they work?
Why are traditional water sprinkler systems not suitable for server rooms?
What are the advantages of using clean agents over CO2 systems?
A large multinational corporation is implementing a secure email system that requires messages to be digitally signed. The CISO wants to ensure the system provides strong non-repudiation capabilities. Which of the following best describes how digital signatures provide non-repudiation in this scenario?
- The digital signature is created using the sender's private key, which is under their control, making it difficult to deny sending the message
- The digital signature adds a trusted timestamp to each message that is validated by multiple third parties
- The digital signature encrypts the message content so that it can be decrypted by the intended recipient
- The digital signature requires a certificate authority to validate each transaction in real-time before delivery
Answer Description
Digital signatures provide non-repudiation because they use the sender's private key, which should be known only to the sender. Since the private key is under the sender's control, they cannot credibly deny having created the signature, thus establishing accountability. Recipients verify the signature using the sender's public key, confirming the message came from the claimed sender.
The incorrect options misrepresent how digital signatures work. They don't inherently include timestamps, don't encrypt the message (they encrypt a hash of the message), and don't require real-time CA validation for each transaction.
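As an illustration of the mechanism described above, here is a minimal sketch using Python's third-party `cryptography` package: the sender signs a hash of the message with the private key, and anyone holding the matching public key can verify it. Key sizes, padding choices, and key management are simplified for brevity.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The private key must remain under the sender's sole control for non-repudiation to hold.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Approve wire transfer #4471"

# Signing: a SHA-256 hash of the message is signed with the sender's private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Verification: any recipient with the sender's public key can check the signature.
# verify() raises InvalidSignature if the message or signature was altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Signature verified: only the private-key holder could have produced it")
```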
Ask Bash
What is a digital signature and how does it work?
What does non-repudiation mean in cybersecurity?
What is the difference between a private key and a public key?
Under the General Data Protection Regulation (GDPR), which of the following rights allows covered individuals to request complete removal of their personal data from an organization's systems?
- Right to data portability
- Right to access
- Right to be forgotten
- Right to information
Answer Description
The 'right to be forgotten' (officially the 'right to erasure' under Article 17 of the GDPR) gives individuals the power to request the deletion of their personal data under certain circumstances. This right enables data subjects in the EU to request that organizations erase their personal data when there is no compelling reason for its continued processing. The right is not absolute and has exceptions, such as when the data is needed for legal compliance, public health purposes, or establishing legal claims. The other options represent different data protection concepts but do not specifically address the complete removal of personal data from systems.
Ask Bash
What are the circumstances under which the right to be forgotten can be exercised?
What are some exceptions to the right to be forgotten?
How does the right to data portability differ from the right to be forgotten?
A healthcare organization wants to implement an access control system that can make decisions based on the patient's relationship to the healthcare provider, time of day, location of access attempt, and sensitivity of the medical records. Which access control model would BEST meet these requirements?
- Mandatory Access Control (MAC)
- Role-based Access Control (RBAC)
- Discretionary Access Control (DAC)
- Attribute-based Access Control (ABAC)
Answer Description
Attribute-based Access Control (ABAC) is the correct answer because it evaluates access requests based on attributes of subjects (users), objects (resources), actions, and environmental conditions. In this healthcare scenario, ABAC can use multiple attributes like the relationship between provider and patient, time of access, location, and data sensitivity level to make dynamic access decisions.
Role-based Access Control (RBAC) would be insufficient as it primarily makes access decisions based on pre-defined roles and doesn't easily accommodate environmental conditions like time and location. Discretionary Access Control (DAC) relies on the resource owner to grant access rights and lacks the fine-grained control needed for multiple attributes. Mandatory Access Control (MAC) uses security labels and clearance levels in a rigid hierarchy, which doesn't allow for the contextual, relationship-based decisions required in this healthcare scenario.
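A minimal, hypothetical sketch of an ABAC decision point in Python is shown below. The attribute names (assigned patients, clearance level, location, and so on) are invented for illustration; a production policy engine would express these rules in a policy language rather than hard-coded logic.

```python
from datetime import time

def evaluate_access(subject: dict, resource: dict, environment: dict) -> bool:
    """Grant access only when every attribute-based condition is satisfied."""
    treating_relationship = resource["patient_id"] in subject["assigned_patients"]
    business_hours = time(7, 0) <= environment["time_of_day"] <= time(19, 0)
    on_site = environment["location"] == "hospital_network"
    cleared = subject["clearance_level"] >= resource["sensitivity_level"]
    return all([treating_relationship, business_hours, on_site, cleared])

decision = evaluate_access(
    subject={"assigned_patients": {"P-1001"}, "clearance_level": 2},
    resource={"patient_id": "P-1001", "sensitivity_level": 2},
    environment={"time_of_day": time(14, 30), "location": "hospital_network"},
)
print("Access granted" if decision else "Access denied")
```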
Ask Bash
What are the key features of Attribute-based Access Control (ABAC)?
How does ABAC differ from Role-based Access Control (RBAC)?
What are the limitations of other access control models like DAC and MAC in this context?
A security architect at a large enterprise is reviewing the network traffic patterns in their newly deployed private cloud environment. They notice that most of the traffic is occurring between application servers in the same data center. Which type of traffic flow is being observed, and what security approach would be most appropriate for protecting this traffic pattern?
- East-West traffic; implement micro-segmentation between application servers
- North-South traffic; implement Network Access Control (NAC) systems
- North-South traffic; strengthen perimeter firewalls at the data center edge
- East-West traffic; deploy additional VPN concentrators
Answer Description
The scenario describes East-West traffic, which refers to lateral movement between servers within the same data center. Unlike North-South traffic (which flows between the data center and external networks), East-West traffic requires different security approaches.
Micro-segmentation is most appropriate because it provides fine-grained security controls between workloads within the data center, limiting lateral movement if a server is compromised. Traditional perimeter defenses primarily protect North-South traffic but don't adequately secure server-to-server communications. Micro-segmentation creates security zones around workloads and controls communication between them, aligning with zero-trust principles.
Perimeter firewalls focus on North-South traffic, VPN concentrators secure remote access connections, and NAC systems control endpoint authentication rather than server-to-server communications.
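Conceptually, micro-segmentation enforces a default-deny allow-list for workload-to-workload flows. The hypothetical Python sketch below illustrates the policy model only; in practice the rules would be enforced by hypervisor, host, or cloud-native controls rather than application code.

```python
# Explicitly permitted East-West flows: (source workload, destination workload, destination port).
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),
    ("app-tier", "db-tier", 5432),
}

def is_flow_permitted(source: str, destination: str, port: int) -> bool:
    """Default-deny check between workloads inside the same data center."""
    return (source, destination, port) in ALLOWED_FLOWS

print(is_flow_permitted("web-tier", "app-tier", 8443))  # True  - explicitly allowed
print(is_flow_permitted("web-tier", "db-tier", 5432))   # False - lateral movement blocked
```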
Ask Bash
What is East-West traffic?
What is micro-segmentation and why is it important?
How does micro-segmentation align with zero-trust principles?
The JIT access model grants users permissions only when they are required for specific tasks, with those permissions revoked immediately after use.
- True
- False
Answer Description
Just-in-time (JIT) access minimizes the risk of unauthorized access by limiting how long access privileges remain active. It ensures that users hold permissions only for the duration needed to perform their tasks, significantly lowering exposure to potential threats. By contrast, traditional access models that grant standing permissions carry greater risk if credentials are compromised, because they allow access for far longer than necessary.
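The hypothetical Python sketch below illustrates the idea of time-boxed grants: a permission is issued for a short window and treated as revoked the moment the window closes. Real JIT and privileged access management products add approval workflows, auditing, and automated de-provisioning on top of this.

```python
from datetime import datetime, timedelta, timezone

# In-memory store of active grants: (user, permission) -> expiry time. Illustrative only.
active_grants: dict[tuple[str, str], datetime] = {}

def grant_jit_access(user: str, permission: str, minutes: int = 30) -> None:
    """Issue a permission that automatically expires after a short window."""
    active_grants[(user, permission)] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def has_access(user: str, permission: str) -> bool:
    """Honor the grant only while the window is open; revoke it as soon as it expires."""
    expiry = active_grants.get((user, permission))
    if expiry is None:
        return False
    if datetime.now(timezone.utc) >= expiry:
        del active_grants[(user, permission)]  # immediate revocation once the task window closes
        return False
    return True

grant_jit_access("jsmith", "prod-db:admin", minutes=15)
print(has_access("jsmith", "prod-db:admin"))  # True only during the 15-minute window
```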
Ask Bash
What does JIT stand for, and why is it important?
What are some real-world applications of the JIT access model?
How does JIT access compare to traditional access control models?
During a safety training session, an employee expresses discomfort about unknown individuals frequently seen in the workplace. What should management prioritize as the first course of action in response to this concern?
- Review and enhance security protocols concerning unknown individuals on the premises.
- Send out a reminder for employees to report any suspicious behavior without specific guidance.
- Organize a meeting to gather employee feedback while delaying action on the issue.
- Install additional security cameras to monitor the area without addressing personnel protocols.
Answer Description
The best initial step is to review and enhance existing security protocols. Focusing on clear procedures for identifying and managing unknown individuals can help create a safer environment. Although alternative actions might seem relevant, they do not directly address the urgent need to ensure employee safety through established guidelines.
Ask Bash
What are security protocols and why are they important?
How can organizations assess and enhance existing security protocols?
What should employees do if they see unknown individuals in the workplace?
A global financial institution has implemented a comprehensive disaster recovery plan and wants to validate their recovery procedures under real-world conditions before hurricane season begins. The CISO has requested a test that will provide maximum confidence in the organization's ability to recover critical systems. Which testing approach should be implemented?
- Full interruption test
- Parallel test
- Simulation test
- Tabletop exercise
Answer Description
The correct answer is a full interruption test. A full interruption test (also known as a cutover test) is the most comprehensive and realistic form of disaster recovery testing. In this approach, primary systems are completely shut down and operations are transferred to the recovery site, simulating an actual disaster scenario. While this provides the most accurate assessment of recovery capabilities, it also carries the highest risk and potential business impact, which is why it's typically conducted during planned downtime with extensive preparation.
A simulation test does not actually shut down production systems but rather tests recovery procedures in a simulated environment. A parallel test involves running systems at both the primary and recovery sites simultaneously to compare results. A tabletop exercise is discussion-based and doesn't involve actual system recovery operations. None of these alternatives provides the comprehensive validation of recovery procedures under realistic conditions that the CISO is seeking.
Ask Bash
What are the key differences between a full interruption test and a simulation test?
Why is conducting a full interruption test considered high risk?
What are the main benefits of performing a full interruption test for disaster recovery?
A financial services company is experiencing issues with their web application where users are complaining that they have to re-authenticate multiple times during their workflow. The security team wants to implement a solution that maintains security while improving the user experience. Which session management approach would be MOST appropriate?
- Storing user credentials in browser cookies for automatic re-authentication
- Using IP address tracking to maintain user sessions
- Implementing session tokens that are valid until the user logs out
- Implementing session tokens with longer timeout values
Answer Description
The correct answer is implementing session tokens with longer timeout values. Session tokens provide a secure way to maintain a user's authenticated state across multiple requests without requiring re-authentication for each interaction. By lengthening the timeout to an appropriate value (neither too short nor too long), the organization balances security with usability.
The other options have significant issues:
- Session tokens that remain valid until the user logs out would create a security vulnerability by maintaining authentication for excessive periods, particularly when users never explicitly log out
- Storing credentials in browser cookies would expose authentication information in an insecure manner
- IP-based session tracking is problematic because many users might share the same IP address (especially with NAT) or a legitimate user's IP might change during a session (mobile users)
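The sketch below shows one common way to implement this in Python, assuming a server-side session store with both an idle (sliding) timeout and an absolute timeout; the specific values are placeholders that each organization would tune to its own risk tolerance.

```python
import secrets
from datetime import datetime, timedelta, timezone

IDLE_TIMEOUT = timedelta(minutes=30)   # require re-authentication after 30 minutes of inactivity
ABSOLUTE_TIMEOUT = timedelta(hours=8)  # hard cap on session lifetime regardless of activity

sessions: dict[str, dict] = {}  # server-side store: token -> session metadata

def create_session(user_id: str) -> str:
    """Issue an unguessable session token after successful authentication."""
    token = secrets.token_urlsafe(32)
    now = datetime.now(timezone.utc)
    sessions[token] = {"user": user_id, "created": now, "last_seen": now}
    return token

def validate_session(token: str) -> bool:
    """Accept the token only while both the idle and absolute timeouts are satisfied."""
    session = sessions.get(token)
    if session is None:
        return False
    now = datetime.now(timezone.utc)
    if now - session["created"] > ABSOLUTE_TIMEOUT or now - session["last_seen"] > IDLE_TIMEOUT:
        del sessions[token]          # expire the session server-side
        return False
    session["last_seen"] = now       # sliding idle window keeps active users logged in
    return True
```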
Ask Bash
What are session tokens and how do they work?
What are timeout values in session management?
Why is storing user credentials in browser cookies considered insecure?
What is the key purpose of assessing the disaster recovery outcomes after an incident?
- To ensure staff are trained on key systems
- To identify areas for improvement in recovery processes
- To restore systems to their original state
- To determine which systems need redundancy
Answer Description
Assessing disaster recovery outcomes after an incident is essential to identify areas for improvement and to ensure that procedures are effective and efficient for future incidents. This assessment allows organizations to understand what worked well and what did not, informing better practices and enhancing overall resilience. The other options do not focus on evaluation, which is critical for ongoing improvement.
Ask Bash
Why is it important to identify areas for improvement in disaster recovery processes?
What methods are commonly used to assess disaster recovery outcomes?
How can organizations ensure their disaster recovery plans remain effective over time?
A global financial company is reviewing its data retention policies. The Chief Information Security Officer wants to ensure the organization is implementing retention periods that minimize both legal risk and storage costs. Which of the following approaches represents the BEST strategy for data retention policy development?
- Retain data for a seven-year period where necessary to simplify compliance management
- Delete data promptly after transaction completion to minimize storage and security costs
- Retain data as long as needed to ensure availability for future business intelligence and legal discovery
- Develop a data classification scheme with retention periods based on legal requirements, business needs, and industry regulations
Answer Description
The correct approach is to develop a data classification scheme with retention periods based on legal requirements, business needs, and industry regulations. This creates a tailored framework that properly categorizes data and applies appropriate retention periods to each category. This balanced approach minimizes both legal risk (by ensuring compliance with retention requirements) and storage costs (by not keeping unnecessary data longer than required).
The seven-year retention period ignores varying requirements across data types and jurisdictions. Deleting data promptly after transactions fails to meet record-keeping requirements and loses business intelligence value. Retaining data as long as needed is too vague and potentially violates data minimization principles in regulations like GDPR while increasing storage costs and complicating information governance.
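A classification-driven schedule can be represented very simply; the Python sketch below uses invented categories and periods purely for illustration, since the real periods must come from counsel, regulators, and business owners for each data type and jurisdiction.

```python
from datetime import timedelta

# Hypothetical mapping of data classes to retention periods.
RETENTION_SCHEDULE = {
    "customer_transaction_records": timedelta(days=7 * 365),
    "marketing_analytics": timedelta(days=2 * 365),
    "web_server_logs": timedelta(days=180),
    "job_applicant_data": timedelta(days=365),
}

def is_due_for_disposal(data_class: str, age: timedelta) -> bool:
    """Flag records whose retention period has elapsed so they can be defensibly disposed of.

    Unknown classes default to 'never due', a fail-safe until they are classified.
    """
    return age > RETENTION_SCHEDULE.get(data_class, timedelta.max)

print(is_due_for_disposal("web_server_logs", timedelta(days=200)))  # True - past the 180-day period
```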
Ask Bash
What is a data classification scheme?
Why are retention periods important for compliance?
What are the implications of not following a data retention policy?
A security incident has occurred at your organization involving unauthorized access to sensitive customer data. As the lead security investigator, you have collected evidence from various systems and are now preparing your final investigation report. Which of the following elements is MOST important to include in your documentation?
- Screenshots of system logs without timestamps
- Recommendations for disciplinary actions against employees
- Personal opinions about who should be held responsible
- Chain of custody documentation for collected evidence
Answer Description
The chain of custody documentation is the most important element to include in the investigation report. Chain of custody provides a chronological paper trail that shows how evidence was collected, analyzed, transferred, and preserved. This documentation is crucial for maintaining the integrity and admissibility of evidence in potential legal proceedings. Without proper chain of custody documentation, evidence may be deemed inadmissible in court due to questions about its integrity.
The other options, while important in various contexts, are not as critical as chain of custody documentation:
- Personal opinions about culpability may introduce bias into the investigation report and should be avoided in favor of factual findings.
- Screenshots without timestamps lack verification of when they were obtained and could be challenged.
- Recommendations for disciplinary actions are typically not part of an investigation report but would be addressed separately by management or HR based on the findings.
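A chain-of-custody record is fundamentally a timestamped, tamper-evident log of who handled each item and when. The Python sketch below is a simplified illustration (file paths and names are hypothetical); real cases rely on standardized custody forms and forensic case-management tools.

```python
import hashlib
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Hash the evidence file so any later alteration can be detected."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

custody_log: list[dict] = []

def record_transfer(item_id: str, path: str, released_by: str, received_by: str, purpose: str) -> None:
    """Append one custody entry: what was transferred, by whom, to whom, why, and when."""
    custody_log.append({
        "item_id": item_id,
        "sha256": sha256_of(path),
        "released_by": released_by,
        "received_by": received_by,
        "purpose": purpose,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    })

record_transfer("EVID-042", "/evidence/workstation07.img", "A. Analyst", "B. Examiner", "Forensic analysis")
```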
Ask Bash
What is chain of custody documentation, and why is it important?
What could happen if chain of custody is not properly documented?
How should evidence handling and documentation be maintained during an investigation?
A cybersecurity incident response team at a financial institution has discovered a compromised employee workstation. Digital evidence needs to be collected for potential legal action. What should be the FIRST step in collecting digital evidence from the compromised workstation?
- Document visible running processes and applications
- Begin examining files directly on the live system
- Create a forensic image of the storage media
- Shut down the system promptly to preserve evidence
Answer Description
The correct approach is to create a forensic image (bit-by-bit copy) of the storage media before beginning any analysis. This preserves the original state of the evidence and ensures that the original evidence isn't altered during examination. A forensic image captures all data on the device, including deleted files and slack space. This approach maintains the chain of custody and ensures that the evidence remains admissible in court. Direct examination of the live system risks altering timestamps, file metadata, and other crucial forensic artifacts. Similarly, shutting down the system could lose volatile memory (RAM) data which might contain valuable evidence about the attack. Documenting the system state is important but should follow proper imaging procedures, not precede them.
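After acquisition, examiners typically prove the image is a faithful copy by hashing both the source media and the image and comparing the values. The Python sketch below illustrates that verification step only (device and file paths are hypothetical); the acquisition itself would be performed with a write-blocker and a dedicated imaging tool.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the data in chunks so very large disk images can be hashed without exhausting memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(4 * 1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

source_hash = sha256_of("/dev/sdb")                     # original media, read through a write-blocker
image_hash = sha256_of("/evidence/workstation07.img")   # bit-for-bit forensic image

print("Image verified" if source_hash == image_hash else "Hash mismatch - repeat the acquisition")
```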
Ask Bash
What is a forensic image and why is it important in digital evidence collection?
What are the risks of examining files directly on a live system?
What is meant by the 'chain of custody' and why is it important?