ISC2 CISSP Practice Test
Certified Information Systems Security Professional
Use the form below to configure your ISC2 CISSP Practice Test. The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

ISC2 CISSP Information
The (ISC)² Certified Information Systems Security Professional (CISSP) exam is one of the most widely recognized credentials in the information security field. It covers an extensive body of knowledge related to cybersecurity, including eight domains: Security and Risk Management, Asset Security, Security Architecture and Engineering, Communication and Network Security, Identity and Access Management, Security Assessment and Testing, Security Operations, and Software Development Security. This broad scope is designed to validate a candidate’s depth and breadth of knowledge in protecting organizations from increasingly complex cyber threats.
Achieving a CISSP certification signals a strong understanding of industry best practices and the ability to design, implement, and manage a comprehensive cybersecurity program. As a result, the exam is often regarded as challenging, requiring both practical experience and intensive study of each domain’s key principles. Many cybersecurity professionals pursue the CISSP to demonstrate their expertise, enhance their credibility, and open doors to higher-level roles such as Security Manager, Security Consultant, or Chief Information Security Officer.

Free ISC2 CISSP Practice Test
- 20 Questions
- Unlimited
- Security and Risk Management, Asset Security, Security Architecture and Engineering, Communication and Network Security, Identity and Access Management (IAM), Security Assessment and Testing, Security Operations, Software Development Security
A security architect at a financial services company is designing the access control mechanism for a new collaborative research platform. The platform must allow data analysts, who create proprietary research documents, to independently manage permissions and share their work with specific colleagues on their project team. Which access control model BEST supports this requirement of owner-managed permissions?
Role-Based Access Control (RBAC)
Mandatory Access Control (MAC)
Rule-Based Access Control
Discretionary Access Control (DAC)
Answer Description
Discretionary Access Control (DAC) is the correct model because it gives the owner of a resource the discretion to grant or deny access to other subjects. In the given scenario, the data analysts are the owners of the documents they create and need to manage permissions themselves, which is the defining characteristic of DAC. Mandatory Access Control (MAC) is incorrect because access is determined by a central authority based on security labels, not the owner's choice. Role-Based Access Control (RBAC) is also incorrect, as permissions are assigned based on a user's role within the organization, which may not align with the specific sharing needs for a given project. Rule-Based Access Control is likewise incorrect because access decisions are driven by system-enforced rules (for example, time-of-day or firewall-style rules) rather than by the resource owner.
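To make the distinction concrete, here is a minimal, hypothetical sketch of owner-managed permissions in the DAC style. It is illustrative only; the class and user names are assumptions, not part of any real product.

```python
# Minimal DAC-style sketch: the document owner decides who may read or share.
# All names (Document, analyst_a, analyst_b) are illustrative assumptions.

class Document:
    def __init__(self, owner, content):
        self.owner = owner
        self.content = content
        # The owner always holds full rights; everyone else starts with none.
        self.acl = {owner: {"read", "write", "share"}}

    def grant(self, requester, user, rights):
        # Only the owner may change permissions -- the defining trait of DAC.
        if requester != self.owner:
            raise PermissionError("Only the owner can grant access")
        self.acl.setdefault(user, set()).update(rights)

    def read(self, user):
        if "read" not in self.acl.get(user, set()):
            raise PermissionError(f"{user} has no read access")
        return self.content


doc = Document(owner="analyst_a", content="Proprietary research findings")
doc.grant("analyst_a", "analyst_b", {"read"})   # the owner shares with a teammate
print(doc.read("analyst_b"))                    # allowed
```

Under MAC, by contrast, the grant decision would be made by a central policy based on security labels, not by analyst_a.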
An organization wants to implement a solution that will verify endpoint security posture before granting network access. The solution should check for up-to-date antivirus, patch levels, and host firewall status before allowing devices to connect to the corporate network. What is the BEST technology to address this requirement?
Network Access Control (NAC)
Network segmentation
Virtual Private Network (VPN)
Intrusion Detection and Prevention System (IDS/IPS)
Answer Description
Network Access Control (NAC) is the correct solution because it specifically addresses the requirement to verify endpoint security posture before allowing network access. NAC solutions perform health checks on connecting devices to ensure they meet security requirements such as having up-to-date antivirus, proper patch levels, and enabled host firewalls. Once verified, NAC systems can dynamically assign appropriate access policies.
VPN would provide secure remote access but lacks the endpoint health verification capabilities described in the requirement. While IDS/IPS can detect and prevent attacks, they don't typically verify endpoint security posture before granting network access. Network segmentation is a valuable security strategy but doesn't inherently include the capability to check endpoint security status before allowing network connections.
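The core idea of a posture check can be sketched in a few lines. The attribute names and policy below are illustrative assumptions, not the schema of any real NAC product.

```python
# Hypothetical NAC-style posture evaluation (simplified): a connecting device
# reports its health attributes and the policy engine decides whether to admit
# it to the corporate network or quarantine it for remediation.

REQUIRED_POSTURE = {
    "antivirus_current": True,
    "patches_current": True,
    "host_firewall_enabled": True,
}

def evaluate_posture(device_report: dict) -> str:
    failures = [k for k, v in REQUIRED_POSTURE.items() if device_report.get(k) != v]
    if not failures:
        return "ALLOW: full corporate network access"
    return f"QUARANTINE: remediation VLAN (failed checks: {', '.join(failures)})"

print(evaluate_posture({"antivirus_current": True,
                        "patches_current": False,
                        "host_firewall_enabled": True}))
```

In practice the admission decision is typically enforced at the switch or wireless controller (often via 802.1X), with the failing device placed in a restricted VLAN until it is remediated.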
What is the primary security concern unique to distributed systems compared to centralized systems?
Increased attack surface due to multiple processing nodes
Authentication of users to single sign-on systems
Data encryption at rest requirements
Password complexity management
Answer Description
The correct answer is increased attack surface due to multiple processing nodes. Distributed systems by definition spread processing and data across multiple physical or virtual nodes, which inherently increases the attack surface compared to centralized systems. Each node in a distributed system represents a potential entry point for attackers, and securing all communication paths between nodes becomes more complex. This expanded perimeter requires additional security controls and coordination across the system. The other answers represent security concerns that may exist in both distributed and centralized systems, but they are not uniquely characteristic of distributed systems.
During a routine penetration test, your security team discovers a previously unknown zero-day vulnerability in a widely used enterprise software platform deployed throughout your organization. The flaw permits unauthenticated remote code execution on affected servers. Although the team has created a temporary mitigation, it has not yet been rolled out to every system. Which disclosure strategy BEST adheres to responsible and ethical practices?
Publish technical details of the vulnerability on security blogs and social media to warn users of the software
Notify the vendor privately with technical details and allow them time to develop a patch before public disclosure
Apply a mitigation to your systems and keep the vulnerability information within your organization
Report the vulnerability to regulatory authorities and then contact the vendor
Answer Description
The correct answer is to notify the vendor privately with technical details and allow them time to develop a patch before public disclosure. This follows responsible disclosure principles that balance the need to protect users while giving vendors an opportunity to address the vulnerability. The ethical disclosure process typically includes privately notifying the vendor with technical details, giving them time to develop and test a fix (often 30-90 days depending on severity), and coordinating public disclosure after a patch is available. This approach minimizes risk to all users of the affected software while ensuring the vulnerability gets addressed.
The other options are problematic: Publishing technical details immediately before a patch exists puts all users at risk. Keeping the vulnerability secret and only applying your mitigation leaves other organizations vulnerable. Reporting to regulatory authorities and then contacting the vendor may delay remediation and does not follow standard coordinated disclosure workflows.
A security professional is advising executives who frequently travel internationally with sensitive company data. Which of the following represents the BEST travel security practice regarding their laptops?
Back up all data to cloud storage before departure
Use clean or loaner devices with minimal data required for the trip
Install advanced encryption on personal devices before traveling
Register devices with local embassies at the destination
Answer Description
The best practice when traveling internationally with sensitive data is to use clean or loaner devices that contain minimal data required for the trip. This approach minimizes the risk exposure if the device is lost, stolen, or compromised through customs inspection or other means. It's a fundamental travel security practice that protects both the sensitive data and the organization's network. Using loaner devices means that even if the device is compromised, the impact is limited since it doesn't contain unnecessary sensitive information and isn't the employee's primary work device that regularly connects to the corporate network.
During a corporate security incident investigation, a security analyst needs to create an exact duplicate of a suspect's hard drive for forensic analysis. Which of the following approaches is the BEST choice for maintaining evidence admissibility?
Copying visible files to an external drive for analysis
Creating a bit-by-bit image using write blockers
Taking screenshots of active processes and file directories
Running a virus scan on the original drive to identify malware
Answer Description
The correct answer is creating a bit-by-bit image using write blockers. When performing digital forensics, maintaining evidence integrity is paramount. A bit-by-bit image (also called a forensic image or bitstream copy) creates an exact duplicate of the original media at the binary level, including deleted files, slack space, and unallocated space. Write blockers are hardware or software tools that prevent any modifications to the original evidence during the imaging process, ensuring that the original data remains unchanged and maintaining the chain of custody. This approach preserves the integrity of the evidence and follows proper forensic procedures.
Copying only the visible files misses deleted files, slack space, and unallocated space, so it does not produce a complete or defensible duplicate. Taking screenshots provides only visual evidence of visible data and doesn't capture hidden or deleted data. Running a virus scan on the original drive would potentially modify file access times and could destroy evidence, violating a fundamental principle of digital forensics: do not alter the original evidence. Hashing the original and the image confirms that they are identical; it is part of the verification process rather than the imaging method itself.
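The verification step after imaging can be illustrated with a short sketch. The device and image paths are placeholders; the imaging itself would be done behind a hardware write blocker with a forensic imaging tool, and hashing only confirms that the image matches the source.

```python
# Sketch of post-imaging hash verification (paths are placeholder assumptions).
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("/dev/sdb")           # suspect drive, accessed via a write blocker
image = sha256_of("/evidence/case42.dd")   # bit-by-bit forensic image
print("Match" if original == image else "MISMATCH - image cannot be relied on as evidence")
```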
A financial services company is developing a new mobile application. The security team has proposed that every user action, including non-transactional views like checking a balance, must be re-authenticated with both a password and a time-based one-time password (TOTP). The product manager argues that this excessive friction will lead users to abandon the app or find insecure workarounds. Which statement BEST represents the security principle the product manager is advocating for in this scenario?
Security mechanisms should be visibly present to discourage attackers
Security mechanisms should be complex enough to demonstrate thorough protection
Security mechanisms should be designed primarily to make users feel protected
Security mechanisms should be transparent enough that they don't unnecessarily impede legitimate users
Answer Description
The correct answer is that security mechanisms should be transparent enough that they don't unnecessarily impede legitimate users. The principle of balancing security with usability recognizes that security controls that are overly burdensome to legitimate users will likely be circumvented, potentially creating new vulnerabilities. Effective security should maintain protection while being as transparent as possible to authorized users. This reflects the classic design principle of psychological acceptability, which holds that security mechanisms should not make protected resources noticeably harder for authorized users to access, and it complements economy of mechanism, which favors simple designs.
Security mechanisms should be visibly present to discourage attackers is incorrect because while security visibility may have deterrent value in some contexts, it often conflicts with usability goals. Highly visible security mechanisms can create friction for legitimate users and don't necessarily improve actual security protection.
Security mechanisms should be designed primarily to make users feel protected is incorrect because the perception of security is less important than actual security effectiveness. Security mechanisms should provide real protection rather than just creating a feeling of safety, which could give users a false sense of security.
Security mechanisms should be complex enough to demonstrate thorough protection is incorrect because complexity typically contradicts good security design principles like economy of mechanism. Complex security mechanisms are generally less user-friendly, more difficult to implement correctly, and more likely to contain vulnerabilities. Effective security should be as simple as possible while achieving protection goals.
An organization is planning to migrate their application infrastructure to a public cloud provider using a Virtual Private Cloud (VPC) architecture. The security team wants to ensure proper network segmentation and isolation between different application tiers. Which VPC design feature would BEST satisfy this requirement?
VPN gateways with encrypted tunnels
Edge locations with distribution policies
Subnets with associated network ACLs and security groups
Transit gateways with route tables
Answer Description
Subnets with associated network ACLs and security groups provide the most comprehensive segmentation solution. Subnets create logically isolated network segments within a VPC, while network ACLs act as stateless firewalls controlling traffic at the subnet level. Security groups function as stateful firewalls at the instance level. Together, they implement defense-in-depth by creating logical boundaries between application tiers.
VPN gateways connect on-premises networks to VPCs but do not address internal segmentation. Transit gateways connect multiple VPCs but lack fine-grained segmentation capabilities. Edge locations are for content distribution, not network segmentation.
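A minimal sketch of this tiered layout, using the AWS SDK for Python (boto3), is shown below. It assumes credentials and a region are already configured, and the CIDRs, ports, and names are illustrative; in a real design, network ACLs would also be created and associated with each subnet for stateless, subnet-level filtering.

```python
# Minimal sketch of tier segmentation inside a VPC with boto3
# (assumes AWS credentials/region are configured; values are illustrative).
import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
app_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
db_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/24")["Subnet"]["SubnetId"]

app_sg = ec2.create_security_group(
    GroupName="app-tier", Description="App tier", VpcId=vpc_id)["GroupId"]
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="DB tier", VpcId=vpc_id)["GroupId"]

# Stateful, instance-level rule: only members of the app tier's security group
# may reach the database tier on port 3306.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": app_sg}],
    }],
)
```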
During the recovery phase of a major data breach incident, the security team has restored critical systems from backups and verified data integrity. What is the BEST next step to take before returning systems to production?
Apply security configurations and patches or updates that were missing before the incident
Restore user access to systems and data
Update the incident status in the tracking system
Document recovery actions taken in the incident report
Answer Description
The correct answer is to apply security configurations and patches or updates that were missing before the incident. This step is crucial in the recovery process because returning systems to production without addressing the original vulnerability would likely result in a recurring breach.
While documenting the recovery actions taken is important, it can be completed after systems are secure and operational. Updating the incident status would be premature without ensuring the vulnerability is addressed. Restoring user access at this stage could potentially re-expose the system to threats if the original vulnerability hasn't been patched. The application of security patches addresses the root cause of the incident and helps prevent similar incidents in the future, making it the most critical next step in the recovery process.
During a digital forensic investigation of a suspected intellectual property theft, which investigative technique establishes a documented process that tracks who has handled evidence from the moment of collection through analysis and final presentation?
Data carving
Chain of custody
Hash verification
Timeline analysis
Answer Description
Chain of custody is the correct answer because it refers to the documented chronological history of the handling, analysis, and preservation of evidence from the time it is obtained until it is presented in court. This documentation ensures that evidence remains authentic and unaltered throughout the investigative process. The chain of custody process is essential for maintaining evidence admissibility in legal proceedings, as it establishes who had possession of evidence at all times and what actions were performed on it. Other investigative techniques like data carving, timeline analysis, and hash verification are important forensic methods but do not specifically refer to documenting the control and handling history of evidence.
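In practice the chain of custody is simply a rigorously maintained log. The sketch below shows what one entry might contain; the field names and values are assumptions for illustration, not a prescribed format.

```python
# Illustrative chain-of-custody log: every transfer or action on an evidence
# item is recorded with who handled it, what was done, when, and why.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CustodyEvent:
    item_id: str
    handler: str
    action: str      # e.g. "collected", "imaged", "analyzed", "stored"
    purpose: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

custody_log = [
    CustodyEvent("HDD-2024-001", "J. Smith", "collected", "Seized from suspect workstation"),
    CustodyEvent("HDD-2024-001", "A. Lee", "imaged", "Created bit-by-bit forensic image"),
]

for event in custody_log:
    print(f"{event.timestamp.isoformat()} {event.item_id} "
          f"{event.handler}: {event.action} ({event.purpose})")
```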
A security architect at a large financial services company is designing a new system for high-value transactions. To mitigate the risk of internal fraud, the design mandates that no single employee can initiate, approve, and finalize a transfer. Instead, these actions must be assigned to different individuals based on their defined job functions. Which security principle is most directly and fundamentally addressed by this design requirement?
Segregation of Duties
Defense in Depth
Least Privilege
Role-based Access Control
Answer Description
Segregation of Duties (SoD) is a security principle that divides critical functions among different individuals to prevent fraud, errors, and abuse by ensuring that no single person has complete control over a transaction or process. By requiring multiple people to be involved in sensitive transactions like initiating, approving, and finalizing a transfer, SoD creates a system of checks and balances, making it significantly more difficult for any single person to commit fraud without collusion.
The other options are incorrect because:
- Defense in depth involves implementing multiple layers of security controls, and while SoD can be one of those layers, it is not the overarching principle of layering itself.
- Least privilege relates to providing the minimum necessary access rights for a user to perform their job functions. While related, it doesn't specifically address the requirement of splitting a single process among multiple people.
- Role-based access control (RBAC) is a method of implementing access control based on job functions. It is a common and effective way to enforce SoD, but SoD is the fundamental principle being applied, not the implementation mechanism.
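A segregation-of-duties rule can be enforced programmatically with a simple check that the same person does not hold more than one of the critical roles in a workflow. The sketch below is a toy illustration; the function and field names are assumptions, not any real banking API.

```python
# Toy enforcement of segregation of duties on a high-value transfer workflow.
def finalize_transfer(transfer: dict) -> None:
    initiator = transfer["initiated_by"]
    approver = transfer["approved_by"]
    finalizer = transfer["finalized_by"]
    # SoD rule: initiation, approval, and finalization must be three different people.
    if len({initiator, approver, finalizer}) < 3:
        raise PermissionError("Segregation of duties violated: the same person "
                              "holds more than one role in this transaction")
    print(f"Transfer of {transfer['amount']} released")

finalize_transfer({"amount": 250_000, "initiated_by": "alice",
                   "approved_by": "bob", "finalized_by": "carol"})   # passes the SoD check
```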
A financial services company is designing the physical security for its new data center, which will house sensitive customer data and critical servers. The chief security officer's primary goal is to prevent unauthorized physical access by implementing the most effective and resilient strategy. Which of the following approaches should the security architect recommend?
An advanced IP-based surveillance system with AI-powered threat detection to monitor all areas of the data center.
Biometric iris scanners at the data center's main entrance as the sole mechanism for entry.
A 24/7 on-site security guard force responsible for visually verifying all individuals entering and exiting the facility.
A defense-in-depth strategy combining mantraps, multifactor authentication (card + PIN) at the data hall entrance, and individually locked server cages.
Answer Description
The most effective and resilient strategy is a defense-in-depth approach that layers multiple, different controls. Combining mantraps, multifactor authentication, and locked server cages creates redundancy and mitigates the risk of a single control's failure. Relying solely on biometric scanners creates a single point of failure and does not protect assets within the data hall. A guard force alone is subject to human error and social engineering. A surveillance system is a detective and deterrent control, but it does not physically prevent access, which is the primary goal.
A global enterprise is developing a strategy to secure its diverse information repositories containing varying levels of sensitive content. Which of the following approaches would be BEST for controlling access to their information assets?
Requiring management authorization for information retrieval
Applying uniform encryption across corporate assets
Implementing network segmentation for information repositories
Implementing tiered protection controls based on information sensitivity levels
Answer Description
The correct answer is implementing tiered protection controls based on information sensitivity levels. This approach ensures that security measures are proportional to the value and sensitivity of different types of information. By categorizing information (such as Public, Internal, Confidential, and Restricted) and then applying appropriate controls to each tier, an organization can balance security requirements with operational needs. More sensitive information receives stronger protections while less sensitive information remains accessible with appropriate but less stringent controls.
Applying uniform encryption across corporate assets would create unnecessary overhead for less sensitive information and potentially impact system performance and usability. Requiring management authorization for information retrieval would create significant operational bottlenecks and isn't practical for day-to-day operations. Implementing network segmentation addresses the network architecture but doesn't specifically target the information assets themselves based on their varying sensitivity levels.
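A tiered scheme is essentially a mapping from classification level to a baseline set of controls. The tiers and controls below are example assumptions, not a standard, but they show how protection scales with sensitivity.

```python
# Illustrative mapping of classification tiers to baseline protection controls.
CONTROL_TIERS = {
    "Public":       {"encryption_at_rest": False, "access_review": "annual",    "mfa_required": False},
    "Internal":     {"encryption_at_rest": True,  "access_review": "annual",    "mfa_required": False},
    "Confidential": {"encryption_at_rest": True,  "access_review": "quarterly", "mfa_required": True},
    "Restricted":   {"encryption_at_rest": True,  "access_review": "monthly",   "mfa_required": True},
}

def controls_for(classification: str) -> dict:
    """Return the baseline controls required for a given classification tier."""
    return CONTROL_TIERS[classification]

print(controls_for("Confidential"))
```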
An organization's security team has collected digital evidence during an investigation of a potential data breach. Which of the following is the BEST approach for storing this evidence to maintain its admissibility in court?
Implement a secure storage facility with access controls, documentation of evidence handling, and physical protection measures
Utilize remote cloud storage with encryption and authentication safeguards
Conduct integrity verification by updating file timestamps to confirm system operation
Create copies of digital evidence and distribute them to security team members for parallel analysis
Answer Description
The correct answer is to implement a secure storage facility with access controls, documentation of evidence handling, and physical protection measures. When storing digital evidence, maintaining the chain of custody (CoC) is paramount to ensure admissibility in court proceedings. This means documenting who has handled the evidence, when they accessed it, and for what purpose, while physical security measures such as controlled access to the storage facility and tamper-evident containers protect against unauthorized access or manipulation. Updating file timestamps alters the original evidence and undermines its integrity rather than verifying it. Creating copies for analysis is sound practice, but distributing them among team members without documented handling weakens the chain of custody and does not replace proper storage procedures. Remote cloud storage introduces potential CoC and jurisdictional challenges that could compromise admissibility.
During a merger, your company must transmit large archives of personally identifiable information (PII) from a secure on-premises data center to a partner's private cloud over the public Internet. Regulatory requirements mandate that confidentiality and integrity be preserved even if network traffic is intercepted. Which control provides the MOST effective protection for these files while they are in transit between the two environments?
Implement just-in-time (JIT) network access for the destination servers
Establish an IPsec VPN or TLS session to encrypt the data during transmission
Restrict transfer pathways to pre-approved subnets using network segmentation
Mask sensitive fields in the data sets before initiating the transfer
Answer Description
Encrypting the traffic with protocols such as TLS or an IPsec VPN protects confidentiality and integrity by ensuring only authenticated endpoints holding the correct keys can read or modify the data. Network segmentation and just-in-time access reduce the attack surface but do not safeguard packets from interception. Data masking hides values for test or analytics use but offers no cryptographic assurance during transfer. Therefore, encryption in transit is the most effective measure.
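As a small illustration, the standard-library sketch below uploads a file over a certificate-validated TLS connection. The hostname, path, and file name are placeholders; a real transfer would likely use a managed file-transfer service or an IPsec tunnel between the two environments.

```python
# Minimal sketch of protecting a file in transit with TLS (placeholder endpoint).
# TLS provides confidentiality (encryption), integrity, and server authentication
# via certificate validation.
import http.client
import ssl

context = ssl.create_default_context()          # validates the partner's certificate
conn = http.client.HTTPSConnection("partner.example.com", context=context)

with open("pii_archive.zip", "rb") as f:
    conn.request("PUT", "/transfers/pii_archive.zip", body=f,
                 headers={"Content-Type": "application/octet-stream"})

response = conn.getresponse()
print(response.status, response.reason)
```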
Which approach provides stronger security by default when controlling application execution on organizational systems?
Whitelisting
Hybrid listing
Blacklisting
Graylisting
Answer Description
Whitelisting provides stronger security by default because it follows a deny-by-default approach where only explicitly approved applications are permitted to run. This creates a more restrictive security posture than blacklisting, which allows all applications to run except those specifically prohibited. With whitelisting, unknown or unauthorized applications cannot execute at all, significantly reducing the attack surface. Blacklisting requires constant updates to catch new malicious applications and is reactive rather than proactive. Application control based on whitelisting is generally considered more secure but requires more administrative overhead to implement and maintain.
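The default-deny versus default-allow difference is easy to see in code. The hash values below are made-up placeholders used only to contrast the two decisions.

```python
# Deny-by-default allow-listing vs. default-allow deny-listing (illustrative hashes).
APPROVED_HASHES = {"3f2a...a91", "77bc...0d4"}   # explicitly approved binaries
BLOCKED_HASHES = {"deadbeef...01"}               # known-bad binaries

def allowlist_decision(file_hash: str) -> bool:
    # Whitelisting: anything not explicitly approved is denied.
    return file_hash in APPROVED_HASHES

def blocklist_decision(file_hash: str) -> bool:
    # Blacklisting: anything not explicitly blocked is allowed.
    return file_hash not in BLOCKED_HASHES

unknown = "0000...new-malware"
print(allowlist_decision(unknown))   # False - unknown code cannot run
print(blocklist_decision(unknown))   # True  - unknown code runs until a signature exists
```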
A security operations center (SOC) analyst receives a high-priority alert for an email with a suspicious, unknown executable file sent to a senior executive. To analyze the file's behavior and potential threat without jeopardizing the production network or the user's workstation, which of the following is the most appropriate initial action for the analyst to take?
Execute the file within an isolated virtual environment to observe its behavior.
Forward the executable to a third-party antivirus vendor for signature creation.
Run a full antivirus scan on the executive's workstation.
Immediately delete the email from the executive's inbox to prevent execution.
Answer Description
The correct action is to execute the file in a sandbox. A sandbox is an isolated, controlled environment where potentially malicious code can be run and analyzed without affecting the production system or network. This allows the analyst to observe the file's behavior, such as network connections, file modifications, or registry changes, to determine if it is malicious. Deleting the email removes the immediate threat but prevents analysis. Running a local AV scan may not detect an unknown or zero-day threat. Forwarding to a vendor is a valid step but not the best initial action for immediate internal analysis.
An organization is looking to enhance its security posture by improving the management of credentials for privileged accounts, such as domain administrators and root users. Which of the following approaches provides the most comprehensive security controls for this specific use case?
Enforcing multi-factor authentication (MFA) for all administrative account logons through a federated identity provider.
Mandating the use of long, complex passwords for all privileged accounts, with a policy requiring rotation every 90 days.
Implementing a Privileged Access Management (PAM) solution that includes credential vaulting, session monitoring, and automated password rotation.
Storing all privileged credentials in a dedicated enterprise password vault shared among authorized administrators.
Answer Description
A Privileged Access Management (PAM) solution provides the most comprehensive set of security controls for managing high-risk privileged accounts. PAM solutions go beyond simple credential storage by offering a suite of features including secure vaulting, automated password rotation to limit the lifespan of a credential, and session monitoring or recording to audit all privileged activities. While an enterprise password vault offers secure storage, it typically lacks the advanced monitoring and automated lifecycle management features of a full PAM system. Enforcing MFA is a critical control for authenticating administrators but does not manage the credential's lifecycle or monitor its use during a session. A strong password policy is a fundamental baseline but does not provide the active management, monitoring, and automated controls necessary to adequately protect privileged access.
A global financial institution is decommissioning an old data center containing legacy systems with sensitive customer financial data. The CISO has asked you to develop a secure disposal plan for these systems. Which approach would BEST ensure the institution meets its security and compliance obligations?
Transfer necessary data to new systems and securely destroy hardware components with physical destruction methods
Conduct a data classification review, then apply appropriate sanitization methods based on data sensitivity and storage media
Perform system backups as required then format storage devices
Outsource the disposal to a reputable third-party vendor that meets security and compliance standards
Answer Description
The correct answer is to conduct a data classification review, then apply appropriate sanitization methods based on data sensitivity. This approach follows security best practices for system retirement by first understanding what types of data exist on the systems (through classification), and then applying the appropriate data destruction techniques based on that classification. Different types of data require different levels of sanitization - some may require complete physical destruction while others might only need secure wiping. This methodical approach ensures compliance with regulations while protecting sensitive information.
The other options are incorrect because:
- Simply transferring data to new systems before physical destruction doesn't address proper data sanitization and may leave sensitive information vulnerable during transfer.
- Performing backups without classification doesn't address how to properly destroy the data according to its sensitivity level.
- Outsourcing to a vendor without specific security requirements puts the organization at risk of improper disposal practices that could lead to data breaches.
A security consultant discovers a critical vulnerability in a client's system during an assessment. After notifying the client, they learn the client plans to delay patching for 6 months due to business priorities, despite the significant risk. According to the ISC2 Code of Professional Ethics, what is the BEST action for the consultant to take?
Report the vulnerability to relevant regulatory authorities due to the client's decision to delay patching
Inform other security professionals about the vulnerability to determine the appropriate response
Implement patches without informing the client to safeguard against potential breaches
Document the risk, offer remediation recommendations, and have management acknowledge the risk
Answer Description
The correct answer is to document the risk, offer remediation recommendations, and have management acknowledge the risk. This approach aligns with the ISC2 Code of Professional Ethics, particularly the Canon of protecting society, the common good, and the infrastructure. While the consultant has an ethical obligation to ensure the client understands the risks, the consultant cannot force the client to implement fixes on a specific timeline. The consultant should document the risks and recommendations, obtain acknowledgment from management, and respect the client's business decisions. The other options either breach confidentiality (by disclosing to third parties or regulatory bodies), exceed the consultant's authority (by implementing patches without permission), or fail to fulfill the consultant's duty to properly inform the client of risks.