CompTIA Server+ Practice Test (SK0-005)
Use the form below to configure your CompTIA Server+ Practice Test (SK0-005). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

CompTIA Server+ SK0-005 Information
The CompTIA Server+ (SK0‑005) certification is tailored for IT professionals aiming to validate their proficiency in installing, managing, securing, and troubleshooting server systems across data center, on‑premises, and hybrid environments. Launched in May 2021, this mid‑level exam comprises up to 90 multiple‑choice and performance‑based questions, to be completed in 90 minutes, and requires a passing score of 750 on a 100–900 scale. Candidates are expected to have approximately two years of hands‑on experience in server environments and should possess foundational knowledge equivalent to CompTIA A+ certification.
The exam covers four core domains: Server Hardware Installation and Management (18%), Server Administration (30%), Security and Disaster Recovery (24%), and Troubleshooting (28%).
The hardware domain includes tasks like racking servers, managing power and network cabling, configuring RAID, and maintaining various drive types, from SSDs to hybrid systems.
The administration domain focuses on OS installation (GUI, core, virtualized, or scripted), network configuration, server roles and virtualization, scripting basics, asset documentation, backup of configurations, and licensing concepts.
Security and disaster recovery encompass server hardening techniques, physical and data security, identity and access management, backup strategies (full, incremental, snapshot), and recovery planning including hot, warm, cold, and cloud-based site setup.
The troubleshooting domain emphasizes systematic problem-solving across hardware, storage, OS and software, network connectivity, and security issues, involving techniques such as diagnostics, log analysis, reseating components, and resolving boot errors or DHCP/DNS issues.
Aspiring candidates should follow a structured preparation plan using official exam objectives to guide their study. Practical experience and familiarity with real-world scenarios—especially using hands-on labs, performance-based exercises, scripting tasks, RAID configuration, virtualization, and disaster recovery setups—can significantly enhance readiness. This targeted strategy helps ensure both technical competence and confidence when tackling the SK0-005 Server+ exam.
Free CompTIA Server+ SK0-005 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 20
- Time: Unlimited
- Included Topics: Server Hardware Installation and Management, Server Administration, Security and Disaster Recovery, Troubleshooting
During a post-incident review you discover that one senior administrator temporarily stopped event logging on several production servers, made unauthorized changes, and then re-enabled logging to hide the activity. Management wants to implement a control that prevents any single individual from both altering system settings and validating the related audit information. Which of the following actions BEST satisfies this goal?
Install host-based intrusion detection on each server and alert when the logging service is stopped.
Copy all server logs to write-once, read-many (WORM) storage so they cannot be altered after the fact.
Enforce 15-character complex passwords for all privileged accounts and require rotation every 90 days.
Assign log collection and analysis to a security operations group that has no server administration privileges, while system configuration remains with the existing administrators.
Answer Description
The issue is an insider threat enabled by one person holding conflicting duties. Applying separation of duties divides those responsibilities so that the team that manages and reviews audit logs is different from the team that administers the servers. Because the log reviewers have no rights to change configuration, and the server administrators have no rights to purge or alter logs, collusion would be required to repeat the attack. Copying logs to WORM media, enforcing complex passwords, or adding a host-based IDS all harden security, but none of those measures by themselves stops an administrator from both disabling logging and later certifying that systems are compliant; only a clear separation of duties does.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does separation of duties mean in IT security?
What is WORM storage, and why is it not sufficient in this scenario?
How does host-based intrusion detection work, and why is it not enough here?
During weekend maintenance a systems administrator notices that the organization's Windows Server 2022 file server reaches nearly 100 percent disk I/O and CPU utilization every night around 01:30. A full Microsoft Defender Antivirus scan starts at 01:00, and a Veeam Backup & Replication job that writes to a local disk repository begins at 01:15. The backup job now routinely overruns its maintenance window and reports failures. Compliance requirements mandate keeping the nightly full scan in place, and the backup schedule cannot be moved. Which antivirus configuration change is MOST likely to let the backup finish successfully while still protecting the host?
Raise the antivirus scan priority to High so it completes before the backup starts.
Schedule a PowerShell script to disable real-time protection for the duration of the backup job.
Change the nightly task to a cloud-delivered quick scan instead of a full local scan.
Add folder and process exclusions in Microsoft Defender for all Veeam Backup & Replication executables and the disk-to-disk repository.
Answer Description
Adding targeted folder and process exclusions for the backup application removes the heaviest scanning load from the antivirus engine only for the data paths that cause contention. Microsoft, Veritas and Veeam all document that scanning backup repositories or backup-application binaries can severely reduce throughput or cause job failures, and they recommend excluding those paths or executables to restore performance without disabling protection entirely.
Disabling real-time protection during the job sacrifices continuous malware defense and is unnecessary once exclusions are in place. Replacing the full scan with a quick or cloud-delivered scan would violate the stated compliance requirement to keep the nightly full scan. Increasing scan priority makes the antivirus consume even more CPU and I/O, further starving the backup process. Therefore, defining specific exclusions for the Veeam executables and repository folders is the best host-hardening adjustment in this scenario.
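As a rough illustration, exclusions of this kind can be added with the Microsoft Defender PowerShell cmdlets. The repository path and process name below are placeholders, not values given in the scenario; the vendor's documented exclusion list should be used in practice.

    # exclude the local backup repository path (example path, adjust to the actual repository)
    Add-MpPreference -ExclusionPath "D:\VeeamRepository"
    # exclude a backup process by executable name (illustrative; follow the vendor's documented list)
    Add-MpPreference -ExclusionProcess "VeeamAgent.exe"

Running Get-MpPreference afterward confirms which path and process exclusions are currently in effect.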
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are folder and process exclusions in Microsoft Defender, and why are they useful?
Why is disabling real-time protection during the backup job not recommended?
What issues arise from changing the antivirus scan priority to High during backup operations?
A systems administrator can ping a legacy Windows file server at 10.15.20.45 but cannot map its shared folder by using the server's NetBIOS name. The administrator wants to confirm whether another device is registering the same NetBIOS name by displaying the remote host's NetBIOS name table directly via its IP address. Which nbtstat switch should the administrator use with the IP address to retrieve that information?
-A
-n
-R
-a
Answer Description
The switch that displays the NetBIOS name table of a remote host when the host is identified by its IP address is -A. The -a switch performs the same task only when you specify the remote computer's NetBIOS host name. The -n switch lists the local computer's registered NetBIOS names, and -R purges the local NetBIOS name cache and reloads any #PRE entries from the LMHOSTS file; neither of these displays a remote host's name table.
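For example, the remote name table in this scenario could be pulled with:

    nbtstat -A 10.15.20.45

By contrast, nbtstat -a FILESERVER01 performs the same lookup when you know the remote computer's NetBIOS name (the name shown here is illustrative).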
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is NetBIOS and why is it important in networking?
What are the purposes of the various nbtstat switches?
How does NetBIOS name resolution differ from DNS?
A systems administrator is tasked with decommissioning a legacy file server. The administrator has already followed company policy to verify non-utilization by monitoring network traffic and confirming with application owners that the server is no longer accessed. According to best practices, what is the MOST appropriate next step for the administrator to take?
Update the asset management database to mark the server as 'decommissioned'.
Terminate any vendor maintenance contracts associated with the server hardware.
Submit a change management request for the decommissioning.
Physically disconnect the server's network and power cables.
Answer Description
The correct next step after verifying a server is no longer in use is to submit a formal request through the change management process. This ensures that the decommissioning is formally documented, approved by all stakeholders, and scheduled, which prevents unauthorized changes and service disruptions. Updating the asset management database occurs as part of or after the decommissioning is complete, not before formal approval. Physically disconnecting the server is a later action performed after receiving approval from the change management process. Terminating vendor maintenance contracts is also a step that follows the formal decision and approval to decommission the server.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of a change management request?
Why is it important to monitor network traffic before decommissioning a server?
Why is asset management updated after decommissioning and not before?
A systems administrator needs to install a new high-performance GPU, which is a PCIe x16 card, into a server to accelerate data analytics workloads. To achieve maximum throughput, the card must be installed in a slot that provides the full bandwidth. The administrator identifies the following available PCIe 3.0 expansion slots on the motherboard:
- Slot A: PCIe x16 (physical), x8 (electrical)
- Slot B: PCIe x16 (physical), x16 (electrical)
- Slot C: PCIe x8 (physical), x8 (electrical)
- Slot D: PCIe x4 (physical), x4 (electrical)
Which slot MUST the administrator use for the new GPU?
Slot D
Slot C
Slot A
Slot B
Answer Description
The correct answer is the slot designated as PCIe x16 (physical), x16 (electrical). For a PCIe card to operate at its maximum potential, it must be installed in an expansion slot that meets both its physical size and its required number of electrical lanes. A GPU with a PCIe x16 connector requires a physical x16 slot to fit, and it needs x16 electrical lanes to achieve its maximum data transfer bandwidth. The slot that is x16 physical but only x8 electrical would create a bottleneck, halving the potential performance of the card. The x8 and x4 slots are both physically too small for the x16 card and provide insufficient bandwidth.
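As a side note, on a Linux host the negotiated link width can be compared against the slot's capability with lspci; the device address below is only an example:

    # LnkCap shows the maximum width the device supports; LnkSta shows the width actually negotiated
    sudo lspci -vv -s 65:00.0 | grep -E "LnkCap|LnkSta"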
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'physical' vs. 'electrical' mean in PCIe slots?
Why does a GPU need x16 electrical lanes for maximum performance?
What are the differences between different PCIe versions (e.g., PCIe 3.0 vs. PCIe 4.0)?
A systems administrator is collaborating with business stakeholders to update the disaster recovery plan for a critical database server. The stakeholders have determined that in the event of a catastrophic failure, the business can withstand a maximum of 15 minutes of data loss. Which of the following metrics does this requirement define?
Recovery Time Objective (RTO)
Mean Time to Recover (MTTR)
Service Level Agreement (SLA)
Recovery Point Objective (RPO)
Answer Description
The correct answer is Recovery Point Objective (RPO). RPO defines the maximum acceptable amount of data loss, measured in time, that a business can tolerate. In this scenario, the 15-minute tolerance for data loss is the RPO.
- Recovery Time Objective (RTO) is incorrect because it specifies the maximum time allowed to recover the system and restore service after a failure, not the amount of data that can be lost.
- A Service Level Agreement (SLA) is a formal contract between a service provider and a customer that documents service expectations. While an SLA would likely contain the required RPO and RTO values, it is the agreement itself, not the specific metric for data loss tolerance.
- Mean Time to Recover (MTTR) is a historical metric representing the average time it has taken to recover from past failures. It is a measure of performance, not a forward-looking planning objective like RPO or RTO.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the key difference between RPO and RTO?
How does RPO affect disaster recovery planning?
How is RPO different from MTTR in disaster recovery metrics?
A systems administrator is scheduled to swap the existing 95 W 8-core Intel Xeon in a 2U virtualization host for a recently released 165 W 24-core model that uses the same LGA socket. A heat-sink kit rated for 225 W has already been ordered so thermal limits will be met. To reduce the risk that the server will fail to POST after the CPU is fitted, which action should the administrator perform before powering the system down for the hardware change?
Move the CMOS-reset jumper to clear NVRAM and force firmware defaults on the next power-on.
Apply the latest BIOS/UEFI update that includes microcode support for the new processor before shutting the server down.
Disable Hyper-Threading in the current firmware configuration so core counts match the outgoing CPU.
Lower the DDR memory speed to the minimum value supported by the platform prior to the change.
Answer Description
Newer processors often require updated microcode and power-management tables that reside in the system's BIOS/UEFI image. If the firmware predates the processor's release, the board may not recognize the silicon and will halt during POST with a processor-incompatibility message. Flashing the motherboard to the latest vendor-approved BIOS/UEFI revision adds the needed microcode, allowing the new CPU to initialize correctly. Clearing CMOS, under-clocking memory, or disabling Hyper-Threading do not add microcode support and therefore will not prevent an incompatibility boot failure.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is microcode support in BIOS/UEFI?
What is POST and why is it important?
Why is updating BIOS/UEFI risky, and how can it be done safely?
Your organization's business-continuity plan specifies a recovery time objective of four days and a 24-hour recovery point objective for a non-critical archival records system. Senior management insists on the lowest possible recurring cost for the alternate facility. Only power, climate control, and network connectivity need to be in place ahead of time; the IT staff is prepared to ship in hardware and restore data from nightly backups after a disaster occurs. Which type of disaster-recovery site BEST meets these requirements?
Active-active metropolitan cluster running live workloads in multiple datacenters
Cold site that supplies only basic utilities and floor space
Partially provisioned warm site with pre-installed servers but inactive data
Fully equipped hot site synchronized with production systems
Answer Description
A cold site is essentially an empty shell that provides only fundamental infrastructure (electricity, HVAC, network access). Because it contains no pre-installed servers, storage, or data, its recurring cost is the lowest of the common DR-site options. The trade-off is a longer recovery time while equipment is delivered, installed, and backups are restored, often several days, making it appropriate when an RTO of multiple days is acceptable.
A hot site, in contrast, is fully equipped and kept in sync with production workloads, offering near-immediate failover but at the highest operating cost. A warm site already hosts some hardware and possibly partial data replication, shortening recovery time but still costing more than a cold site. An active-active metropolitan cluster keeps live services running in two or more datacenters simultaneously; this provides the fastest recovery and virtually no downtime but is by far the most expensive to maintain. Therefore, the cold site is the only option that satisfies the strict cost constraint while meeting the stated four-day RTO.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between a cold site and a hot site?
What do RTO and RPO mean in disaster recovery?
Why is a cold site the best choice for this scenario?
During a memory upgrade, a technician adds four 64 GB DDR4 load-reduced DIMMs (LRDIMMs) to a dual-socket rack server that already contains eight 32 GB DDR4 registered DIMMs (RDIMMs) operating at the same speed. When the server is powered on, it halts during POST with a fatal memory-initialization error and will not boot. According to common vendor memory-population rules, which corrective action will allow the server to start while letting the larger modules remain in service?
Replace all RDIMMs with LRDIMMs so every populated slot uses the same DIMM type and speed.
Install the LRDIMMs only in the channels assigned to CPU 0 and leave the RDIMMs in the channels for CPU 1.
Reduce the memory clock speed in BIOS to the lowest speed supported by both DIMM types.
Move the LRDIMMs to the priority (blue) slots and leave the RDIMMs in the secondary (black) slots within each channel.
Answer Description
Most server vendors state that RDIMMs and LRDIMMs are electrically incompatible and cannot coexist in the same system. Mixing the two types causes memory-training failures, so the system stops at POST. The fix is to use only one DIMM technology across all populated slots. Removing the existing RDIMMs and repopulating every channel exclusively with LRDIMMs of identical type and speed eliminates the incompatibility and allows the server to complete POST. Placing different DIMM types in separate CPUs or slots, lowering the clock speed, or changing slot order does not resolve the fundamental electrical mismatch, so those actions will not restore normal boot operation.
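Before ordering or mixing modules, the installed DIMM technology can usually be confirmed from the operating system. As one hedged example on Linux (output fields vary by platform and vendor):

    # "Type Detail" typically reports Registered or Load Reduced for each populated slot
    sudo dmidecode -t memory | grep -E "Locator|Size|Type Detail"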
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why can't RDIMMs and LRDIMMs be used together in the same server?
What is POST, and why does it halt due to memory issues?
What are common vendor memory-population rules for servers?
A systems administrator is tasked with deploying 50 new virtual web servers. Each server must have the same operating system, patch level, and a pre-installed web server application. The administrator's primary goals are to ensure configuration consistency and minimize deployment time. Which of the following installation methods BEST meets these requirements?
Template deployment
Unattended installation
Bare metal installation
P2V conversion
Answer Description
The correct answer is template deployment. A virtual machine (VM) template is a master copy of a virtual machine that includes a pre-installed operating system, applications, and all necessary configurations and patches. Deploying new VMs from a template is the most efficient method for creating multiple, identical servers because it simply clones the master image, which is significantly faster than performing a full OS installation for each machine. This method ensures consistency and drastically reduces deployment time, which directly addresses the scenario's requirements.
- An unattended installation uses a script or answer file to automate the steps of a traditional OS installation. While this provides consistency, it is slower than template deployment because it must run through the entire installation and patching process for each of the 50 servers individually.
- A Physical to Virtual (P2V) conversion is a process used to migrate an existing physical server into a virtual machine. It is not a method for deploying multiple new servers.
- A bare metal installation involves installing an operating system directly onto a physical server's hardware. This is incorrect because the scenario specifies the deployment of virtual servers.
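As one hedged example, in a VMware environment managed with PowerCLI (a toolset the scenario does not specify), deploying the 50 web servers from a template might look like the sketch below; the template name, host, and naming convention are assumptions:

    # deploy web01..web50 by cloning an existing template
    $tpl = Get-Template -Name "Web-Template"
    1..50 | ForEach-Object {
        New-VM -Name ("web{0:D2}" -f $_) -Template $tpl -VMHost "esxi01.example.local"
    }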
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a VM template?
Why is template deployment faster than unattended installation?
What is the difference between P2V conversion and template deployment?
During post-deployment checks on a Windows Server 2019 file server, you discover that a brand-new 4 TB SATA disk shows a single NTFS primary partition that is exactly 2 TB. About 1.8 TB of capacity appears as Unallocated, and the Extend Volume option is unavailable even though SMART and controller diagnostics report no faults. Which action will correct the underlying partition error and let the server use the disk's full capacity?
Back up the data, delete the current partition table, initialize the disk as GPT, recreate the volume, and then restore the data.
Replace the disk because sectors beyond the 2 TB boundary are physically defective.
Update the drive firmware so the disk reports 4 KB native sectors instead of 512-byte emulation.
Reformat the existing 2 TB partition with a larger (64 KB) NTFS cluster size to correct the bitmap and then extend the volume.
Answer Description
The administrator accepted Disk Management's default Master Boot Record (MBR) style when the disk was first initialized. MBR can address only 2 199 023 255 552 bytes (≈ 2.2 TB) because its 32-bit LBA field tops out at 4 294 967 295 sectors. Everything beyond that limit shows up as unallocated and cannot be added to an existing MBR volume. Re-initializing the disk with the GUID Partition Table (GPT) removes the 2 TB ceiling. Because Windows cannot convert a non-system MBR disk to GPT in-place, the safest remediation is to back up the data, delete the partition table, initialize the disk as GPT, recreate the volume(s), and restore the data. Changing NTFS cluster size or sector format does not bypass the 32-bit addressing limit, and there is no indication that the hardware is defective, so those options would not resolve the problem.
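A minimal sketch of the remediation in Windows PowerShell, assuming the data disk is disk 1 and a verified backup exists (the disk number is an assumption):

    # WARNING: destroys all data on the selected disk - verify the backup first
    Clear-Disk -Number 1 -RemoveData
    Initialize-Disk -Number 1 -PartitionStyle GPT
    New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS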
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why does MBR have a 2 TB limit?
What advantages does GPT have over MBR?
Why can't Windows convert an MBR disk to GPT without data loss?
A company's security policy states that archived web-server logs must be deleted after 180 days. The logs are stored on a Linux server that already rotates /var/log/httpd/access.log every day. Which single logrotate directive will ensure that any rotated log files older than the 180-day limit are automatically removed?
Add "maxage 180" to the logrotate configuration
Add the "delaycompress" directive to the rotation stanza
Add "rotate 180" to the logrotate configuration
Add "size 180M" to the logrotate configuration
Answer Description
The maxage directive enforces time-based retention. Setting "maxage 180" tells logrotate to discard rotated archives that are more than 180 days old, regardless of how many files exist. The rotate directive limits the number of archives kept, not their age. The size directive triggers rotation when a file reaches a given size but does not control retention, and delaycompress only postpones compression of the most recent archive; it has no effect on how long archives are kept.
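A minimal sketch of a stanza that combines the existing daily rotation with the 180-day retention limit; the path matches the scenario, but the other directives are illustrative:

    /var/log/httpd/access.log {
        daily
        rotate 200
        compress
        missingok
        # remove rotated archives older than 180 days, regardless of the rotate count
        maxage 180
    }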
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the 'maxage' directive do in logrotate?
How is 'rotate' different from 'maxage' in logrotate?
What is the purpose of the 'delaycompress' directive in logrotate?
Your organization follows a backup policy that performs a full backup every Sunday at 01:00 and an incremental backup at 01:00 on every other day of the week. At 15:30 on Thursday a critical file server fails and must be restored to its most recent state. Which backup set(s) must be restored - and in what order - to complete the recovery?
The Sunday full backup, followed by the incremental backups from Monday, Tuesday, Wednesday, and Thursday in chronological order.
The Sunday full backup and the Thursday incremental backup.
The Sunday full backup, followed by the incremental backups from Monday, Tuesday, and Wednesday in chronological order.
Only the Thursday incremental backup.
Answer Description
An incremental backup saves only the data that has changed since the last backup of any type (full or incremental). To perform a complete restore from an incremental backup chain, you must start with the last full backup and then apply every subsequent incremental backup in the order they were created until the point of recovery. In this scenario, the server fails on Thursday afternoon. The backups available are the full backup from Sunday, and the incremental backups from Monday, Tuesday, Wednesday, and Thursday. Therefore, the correct restore procedure is to first restore the Sunday full backup, and then apply the Monday, Tuesday, Wednesday, and Thursday incremental backups in chronological order. Skipping any incremental backup in the sequence would result in an incomplete data set and data loss.
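The scenario does not name a backup product, but as a hedged illustration the same ordering applies when restoring GNU tar incremental archives: extract the full archive first, then each incremental in sequence (file names and target directory are assumptions).

    # restore the Sunday full backup, then apply each incremental in chronological order
    tar --extract --listed-incremental=/dev/null -f full-sun.tar -C /srv/restore
    for day in mon tue wed thu; do
        tar --extract --listed-incremental=/dev/null -f incr-$day.tar -C /srv/restore
    done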
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an incremental backup?
Why is the order of restoring backups important?
What are the advantages and disadvantages of using incremental backups?
A systems administrator must validate the organization's disaster-recovery plan by running a simulated failover of several virtual machines that continuously replicate to a warm site. Management requires that production workloads keep running, replication remain active, and no duplicate IP addresses appear on the corporate network during the test. Which action is required to meet these objectives?
Modify internal DNS and default-gateway records so users resolve services at the DR site.
Start the replica VMs on an isolated test network segment that is not routable to production.
Pause the replication job, then attach the replicas to the production VLAN at the DR site.
Shut down the primary VMs before powering on their replicas at the warm site.
Answer Description
A simulated (test) failover boots duplicate virtual machines at the recovery site so staff can verify that operating systems and applications start correctly. To avoid service disruption or IP conflicts with the still-running production VMs, the replicas must be connected to an isolated, non-routable network segment. Pausing replication defeats the goal of continuous protection, shutting down the primary VMs turns the exercise into a live failover, and updating DNS redirects users away from the primary site; each of those actions conflicts with management's requirement to keep production running and replication active during the test.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a simulated failover in disaster recovery testing?
What is a 'non-routable network segment,' and why is it used during a simulated failover?
Why is pausing replication or shutting down primary VMs not appropriate during a simulated failover?
A systems administrator applies the latest OS security patches to a production Linux server that hosts a custom financial application. Following the mandatory reboot, the application service fails to start. The administrator inspects the application's error logs and discovers messages stating, "error while loading shared libraries... cannot open shared object file: incompatible version". The server and the application were fully functional before the update. Which of the following is the MOST likely cause of this application failure?
The patch file was corrupted during download and did not install correctly.
The security patch updated a shared library to a version that is incompatible with the application.
The server has insufficient disk space, preventing the application from starting.
The application's service account permissions were reset during the patching process.
Answer Description
The correct answer is that the security patch updated a shared library to a version incompatible with the application. OS updates and patches often include updated versions of shared system libraries for security and functionality improvements. However, custom-developed applications can be dependent on specific versions of these libraries. When a library is updated to a version the application was not designed for, it can lead to dependency conflicts, causing the application to fail. The error message "incompatible version" in the logs points directly to this cause. This scenario is a classic example of a downstream failure caused by an update.
- Reset service account permissions would likely produce 'access denied' errors, not library version errors.
- A corrupted patch file would more likely cause the entire update process to fail rather than causing a specific post-reboot runtime error.
- Insufficient disk space would prevent new log entries or cause other file-write errors, which is not the issue indicated by the specific error message.
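To confirm a suspected mismatch, the application binary's runtime library dependencies can be inspected; the binary path below is hypothetical:

    # list the shared objects the application resolves at run time
    ldd /opt/finapp/bin/finapp

Comparing the resolved library versions against the packages changed by the patch helps identify which update broke the dependency.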
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a shared library in Linux?
How can dependency conflicts between applications and libraries be resolved?
Why are custom applications more vulnerable to shared library updates?
A systems administrator is tasked with upgrading an aging production database server to improve I/O performance for a new application. The plan is to install a new PCIe-based NVMe RAID controller and several high-capacity NVMe drives. To minimize project risk and ensure system stability, which of the following is the most critical first step for the administrator to take before purchasing the new components?
Verify that the combined power draw of the new components does not exceed the server's PSU capacity.
Update the server's BIOS/UEFI and all other firmware to the latest available versions.
Ensure the selected NVMe drives are certified as compatible by the RAID controller's manufacturer.
Consult the server vendor's Hardware Compatibility List (HCL).
Answer Description
The correct answer is to consult the server vendor's Hardware Compatibility List (HCL). The HCL is the authoritative source that lists all hardware components, such as RAID controllers and drives, that have been tested and certified to work with a specific server model. Verifying components against the HCL before purchase is the most critical step to prevent incompatibility issues, which could cause system instability or prevent the server from booting entirely. Updating firmware is a necessary step but should be done after confirming the hardware is compatible, as the HCL often specifies required firmware versions. Checking power consumption is part of the overall installation plan but is secondary to ensuring basic hardware compatibility. Relying on compatibility between the card and drives alone does not guarantee they will work with the specific server motherboard and its BIOS/UEFI.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is a Hardware Compatibility List (HCL)?
Why is updating firmware not the first priority in this scenario?
How does power consumption factor into hardware upgrades?
A systems administrator is building a direct-attach, high-availability storage solution that will be shared by two rack servers. Each server has its own 12 Gb/s SAS HBA, and the external JBOD enclosure routes two independent links from every drive bay, one link to each HBA. The design goal is for the disk set to remain online if an HBA, cable, or I/O module on either path fails, without using interposer boards or additional electronics. Which type of disk interface must the administrator order for the eight drive bays to satisfy these requirements?
NVMe U.2 solid-state drives over PCIe Gen3 x4
12 Gb/s SAS drives that use a single wide x4 port for bandwidth aggregation
SATA III (6 Gb/s) drives attached through a port multiplier
Dual-port 12 Gb/s SAS drives
Answer Description
Only dual-port SAS disks expose two completely independent target ports on the drive itself. In a dual-domain topology each port can be cabled to a different HBA, so the operating system or RAID controller can continue I/O through the surviving path if the other path fails. SATA drives are single-port devices; they would need an interposer or port-multiplier and still could not provide true redundant access. NVMe U.2 SSDs attach over PCIe rather than SAS and cannot connect to the mini-SAS HD backplane used in this design. A wide-port SAS implementation aggregates several PHYs for more bandwidth between controllers and expanders, but an individual drive still presents only one domain and therefore does not add path redundancy. Dual-port 12 Gb/s SAS drives are therefore the only option that meets every stated requirement.
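Once the dual-domain cabling is in place, a Linux host using dm-multipath can confirm the redundancy (device and path names vary by environment):

    # each disk should list two active paths, one through each HBA
    multipath -ll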
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'dual-port SAS' mean, and how does it enable redundancy?
What is the difference between a single-port and dual-port SAS drive?
Why can’t SATA or NVMe drives provide the same redundancy as dual-port SAS drives?
A financial institution requires a disaster recovery solution for its primary online transaction processing (OLTP) database. A key requirement is to ensure zero data loss (RPO of zero) in case of a site failure. The solution must guarantee that a transaction is written to both the primary and the secondary data centers before it is considered complete. Which of the following replication methods BEST meets this requirement?
Asynchronous replication
Synchronous replication
Bidirectional replication
Snapshot replication
Answer Description
The correct answer is synchronous replication. This method writes data to both the primary and secondary storage locations at the same time. A write I/O operation is not considered complete until an acknowledgment is received from both the primary and secondary sites. This ensures that the data at both sites is identical, which achieves a Recovery Point Objective (RPO) of zero, meaning no data is lost during a failover. This is critical for transactional systems where data integrity is paramount.
- Asynchronous replication is incorrect because it writes data to the primary storage first and then copies it to the secondary site after a delay. This creates a non-zero RPO, as some data may not have been replicated at the time of a failure.
- Snapshot replication is incorrect as it captures the state of data at specific points in time. This method is suitable for less critical systems where some data loss between snapshots is acceptable, but it does not meet the zero RPO requirement for a critical OLTP database.
- Bidirectional replication describes a topology where two sites can send and receive replicated data, allowing either to act as the primary. While useful for active-active scenarios, it describes the direction of data flow, not the timing mechanism that guarantees zero data loss. Bidirectional replication can be implemented either synchronously or asynchronously, but synchronous replication is the specific method that meets the zero RPO requirement.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is Recovery Point Objective (RPO), and why is it important?
How does synchronous replication achieve zero RPO?
What are common challenges or downsides of using synchronous replication?
During a quarterly security audit, the facilities team wants to know how often the biometric fingerprint readers at each data-center entrance mistakenly permit an individual who is not enrolled in the system. Which biometric performance metric best quantifies this specific risk?
False acceptance rate
Crossover error rate
False rejection rate
Failure to enroll rate
Answer Description
The metric that measures the probability that an unauthorized (impostor) subject is incorrectly granted access by a biometric system is the false acceptance rate. It directly tracks instances where a reader matches a presented template to a stored template that does not belong to the user, so it is the most relevant value for assessing how frequently attackers might slip through. The false rejection rate instead tracks how often legitimate users are denied. The crossover (or equal) error rate shows the point where false accepts and false rejects are equal, which is useful for tuning but not for measuring one class of error in isolation. The failure to enroll rate concerns problems during initial registration, not day-to-day access decisions.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the false acceptance rate (FAR) and why is it significant?
How does the false rejection rate (FRR) differ from the false acceptance rate (FAR)?
What is the crossover error rate (CER) and how is it used in biometric systems?
A company's backups complete successfully every night, but the servers have never been restored from those backups. An auditor instructs the systems administrator to implement regular testing intervals to validate recoverability without placing an excessive burden on staff or hardware. Which practice BEST meets this requirement?
Review the backup application logs every morning; if no errors are reported, assume the restore process will also succeed.
Perform a full restore of every backup job immediately after it completes to a dedicated sandbox environment.
Document a schedule to restore each critical workload at least monthly or quarterly and repeat the test after any significant system or data change.
Once per year, decrypt a randomly selected backup set to confirm the tape drive operates, then return the media to off-site storage.
Answer Description
Industry guidance recommends running restore tests on a predictable schedule (weekly, monthly, or quarterly) that reflects the criticality and change rate of each workload, and repeating the test whenever major system or data changes occur. This approach proves that data can be recovered while keeping the number of tests, and therefore the administrative overhead, manageable.
Documenting a schedule to restore each critical workload at least monthly or quarterly, and repeating the test after any significant change, follows that guidance, so it is the correct choice.
Decrypting a randomly selected backup set once per year only confirms that the tape drive operates; it does not confirm that systems or applications can actually be started from the backup.
Relying solely on backup application logs is insufficient because a successful log entry does not guarantee a usable restore; media corruption, application-level inconsistencies, or configuration drift can still prevent recovery.
Performing a full restore of every backup job immediately after it completes would provide strong assurance, but it is usually impractical because of the time, storage, and compute resources required for daily full-scale restores.
Therefore, scheduling periodic restore drills, more frequent for critical or recently changed systems, is the most appropriate strategy.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is it necessary to test server backups regularly?
What factors determine the frequency of backup restore tests?
What challenges might arise when restoring data from backups?
Cool beans!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.