CompTIA Server+ Practice Test (SK0-005)
Use the form below to configure your CompTIA Server+ Practice Test (SK0-005). The practice test can be configured to only include certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

CompTIA Server+ SK0-005 Information
The CompTIA Server+ (SK0‑005) certification is tailored for IT professionals aiming to validate their proficiency in installing, managing, securing, and troubleshooting server systems across data center, on‑premises, and hybrid environments. Launched in May 2021, this mid‑level exam comprises up to 90 multiple‑choice and performance‑based questions, to be completed in 90 minutes, and requires a passing score of 750 on a 100–900 scale. Candidates are expected to have approximately two years of hands‑on experience in server environments and should possess foundational knowledge equivalent to CompTIA A+ certification.
The exam covers four core domains: Server Hardware Installation and Management (18%), Server Administration (30%), Security and Disaster Recovery (24%), and Troubleshooting (28%).
The hardware domain includes tasks like racking servers, managing power and network cabling, configuring RAID, and maintaining various drive types, from SSDs to hybrid systems.
The administration domain focuses on OS installation (GUI, core, virtualized, or scripted), network configuration, server roles and virtualization, scripting basics, asset documentation, backup of configurations, and licensing concepts.
Security and disaster recovery encompass server hardening techniques, physical and data security, identity and access management, backup strategies (full, incremental, snapshot), and recovery planning including hot, warm, cold, and cloud-based site setup.
The troubleshooting domain emphasizes systematic problem-solving across hardware, storage, OS and software, network connectivity, and security issues, involving techniques such as diagnostics, log analysis, reseating components, and resolving boot errors or DHCP/DNS issues.
Aspiring candidates should follow a structured preparation plan using official exam objectives to guide their study. Practical experience and familiarity with real-world scenarios—especially using hands-on labs, performance-based exercises, scripting tasks, RAID configuration, virtualization, and disaster recovery setups—can significantly enhance readiness. This targeted strategy helps ensure both technical competence and confidence when tackling the SK0-005 Server+ exam.
Free CompTIA Server+ SK0-005 Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: Server Hardware Installation and Management, Server Administration, Security and Disaster Recovery, Troubleshooting
A server administrator has just updated the storage controller drivers on a Windows Server. Upon restarting, the server hangs on the Windows loading screen and fails to boot to the login prompt. The administrator has already attempted a simple reboot with the same result. Which of the following is the MOST appropriate next step to diagnose this issue?
Boot the server into Safe Mode.
Use the runas command to escalate privileges.
Run hardware diagnostics from the server's UEFI/BIOS.
Reload the OS from the last known-good backup.
Answer Description
The correct answer is to boot the server into Safe Mode. Safe Mode starts Windows with a minimal set of drivers and services, which is ideal for troubleshooting issues like the one described. Since the problem occurred immediately after a driver update, it is highly probable that the new driver is causing a conflict that prevents the OS from loading normally. Booting into Safe Mode would likely bypass the faulty driver, allowing the administrator to log in, access Device Manager, and roll back or uninstall the problematic driver.
Reloading the OS from a backup is a recovery step, not a diagnostic one. It should be considered only after troubleshooting attempts have failed, as it is a more drastic measure. Running hardware diagnostics is inappropriate because the scenario strongly indicates a software problem (a driver update) rather than a hardware failure. The runas command is used to execute a program with different user credentials while already logged into the OS and is irrelevant for a server that cannot boot.
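If the boot menu cannot be reached interactively, Safe Mode can also be forced from a recovery command prompt and the suspect driver removed after logging in. A minimal sketch is shown below (the driver package name oem42.inf is a placeholder, and pnputil options vary slightly by Windows version):

rem Boot the default OS entry into Safe Mode on the next restart
bcdedit /set {default} safeboot minimal
rem After logging in, list third-party driver packages to locate the newly installed storage driver
pnputil /enum-drivers
rem Remove the suspect driver package (placeholder name)
pnputil /delete-driver oem42.inf /uninstall
rem Clear the Safe Mode flag so the server boots normally again
bcdedit /deletevalue {default} safeboot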
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why is Safe Mode ideal for troubleshooting driver issues?
How can an administrator roll back a driver update in Safe Mode?
What steps can be taken if Safe Mode fails to resolve the issue?
What is Device Manager and how can it help fix driver issues?
What is a storage controller driver, and why is it important?
A systems administrator just finished rerouting power cables within a server rack to improve airflow. Upon rebooting a 2U server, the integrated RAID controller's firmware reports that all drives in a previously healthy RAID 5 array are now offline. The activity LEDs for all drives in the chassis are unlit. Which of the following is the MOST probable cause of this issue?
A recent OS patch caused a driver incompatibility.
RAID controller cache battery failure.
Loose backplane power or data connector.
Simultaneous failure of multiple physical drives.
Answer Description
The most probable cause is a loose power or data connector on the drive backplane. The recent physical maintenance involving cable rerouting makes a physical connectivity issue highly likely. Since all drives, which are connected via the backplane, went offline simultaneously and their activity lights are off, it points to a common failure point that provides both power and data connectivity to the entire set of drives. Reseating the backplane's main power and data (SAS/SATA) cables is the correct first step in troubleshooting.
- A RAID controller cache battery failure typically results in the write cache being disabled, leading to significant performance degradation, but it does not usually cause the entire array to go offline. The drives would still be detected by the controller.
- A simultaneous failure of multiple physical drives is statistically very improbable. Following troubleshooting methodology, a common point of failure should be investigated before assuming multiple independent failures.
- An OS patch failure would manifest after the hardware initialization phase. Since the issue is reported by the RAID controller's firmware before the operating system boots, an OS-level problem is not the root cause.
Ask Bash
What is a backplane in server hardware?
How does RAID 5 handle drive failures?
Why is simultaneous failure of multiple drives unlikely in RAID arrays?
What is a RAID controller and how does it manage storage systems?
A systems administrator is troubleshooting a connectivity issue on a multi-homed server, SRV-MULTI01. The server has two NICs: NIC1 is on the 192.168.50.0/24 subnet for user traffic, and NIC2 is on the 10.0.10.0/24 subnet for management traffic. The server can successfully communicate with all devices on the 192.168.50.0/24 subnet and can reach the internet for OS updates via its default gateway. However, it is unable to connect to a monitoring server located at the IP address 10.0.20.15. Other servers on the 192.168.50.0/24 subnet can reach the monitoring server without issue. The administrator has already confirmed that local firewall rules on SRV-MULTI01 are not blocking the traffic.
Which of the following is the MOST likely cause of this issue?
The server's operating system has an incorrect route table configuration.
The DNS server is failing to resolve the hostname of the monitoring server.
The DHCP server is assigning an incorrect default gateway to NIC2.
The switch port for NIC1 is configured with an incorrect VLAN tag.
Answer Description
The correct answer is that the server's operating system has an incorrect route table configuration. In a multi-homed server scenario, the OS must have a correctly configured routing table to direct traffic to networks not on directly connected subnets. Since the server can reach its local subnets and the internet (via the default gateway, likely on NIC1), but not a specific remote subnet (10.0.20.0/24), it indicates a missing or incorrect static route. The OS doesn't know to send traffic for 10.0.20.15 via the gateway on the management network (NIC2). An administrator would use a command like route print on Windows or ip route on Linux to view the table and add a persistent static route to resolve the issue.
- A VLAN misconfiguration would likely disrupt all communication on the affected NIC's subnet, but the scenario states that communication on the 192.168.50.0/24 subnet is working correctly.
- A DNS server failure is irrelevant because the administrator is attempting to connect via an IP address, which does not require DNS resolution.
- An incorrect default gateway assigned by DHCP is less likely to be the root cause. A multi-homed server typically has only one default gateway to avoid routing conflicts. The problem is the lack of a specific route to a remote network, not an issue with the default route for all outbound traffic.
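For illustration, assuming the next hop on the management subnet is 10.0.10.1 and the Linux interface is named eth1 (both placeholders for this scenario), the route table could be inspected and the missing route added as follows:

On Windows:
route print
route -p add 10.0.20.0 mask 255.255.255.0 10.0.10.1

On Linux:
ip route show
ip route add 10.0.20.0/24 via 10.0.10.1 dev eth1

On Linux, the ip route add entry is not persistent across reboots by itself; it would also need to be recorded in the distribution's network configuration.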
Ask Bash
What is a route table and how does it work on a multi-homed server?
How can you add a static route in Windows or Linux?
Why is it important to have only one default gateway in a multi-homed configuration?
What is a multi-homed server, and how does it work?
A systems administrator is troubleshooting a rackmount server that failed to restart properly following a planned power outage. Upon connecting a monitor, the administrator notes the system date has reset to its manufacturing default and the boot order is incorrect, causing a boot failure. Attempts to correct the settings in the UEFI/BIOS are lost after each power cycle. Which of the following is the MOST likely cause of these issues?
The CMOS battery has failed.
A recent firmware update has corrupted the BIOS.
The power supply unit (PSU) is faulty.
The RAID controller cache battery needs replacement.
Answer Description
The correct answer is that the CMOS battery has failed. The Complementary Metal-Oxide-Semiconductor (CMOS) battery provides constant power to the motherboard to retain system settings such as the system time, date, and hardware configuration (including boot order) stored in the UEFI/BIOS. The symptoms described, a reset system clock and the loss of boot order settings after a power cycle, are classic indicators of a failed CMOS battery.
A faulty PSU would likely prevent the server from powering on at all or cause random shutdowns, but it would not specifically erase saved BIOS settings. A failed RAID controller cache battery would affect the storage array's performance or integrity but would not impact the system's main clock or general BIOS settings. A corrupted BIOS might prevent the system from booting entirely or display different error messages, but the ability to enter setup and make changes that are subsequently lost points specifically to the component responsible for retaining those settings: the CMOS battery.
Ask Bash
What is the role of the CMOS battery in a server?
How is the CMOS battery different from the RAID controller cache battery?
What happens if a PSU failure occurs versus CMOS battery failure?
How can a systems administrator identify a failing CMOS battery?
What is the difference between BIOS/UEFI settings and RAID controller cache settings?
What steps are involved in replacing a CMOS battery?
An organization operates a virtualization host that contains two processor sockets, each currently populated with a 12-core CPU. The systems team plans to swap both processors for newer 32-core models but will not add any additional sockets or virtual machines. To ensure the operating-system licensing costs remain unchanged after the upgrade, which licensing metric should the team confirm is specified in the vendor contract?
Per-socket licensing that requires one license for each occupied CPU socket
Per-concurrent-user licensing that counts the maximum number of active sessions
Per-core licensing that requires a separate license for every physical CPU core
Per-virtual-machine (per-instance) licensing that charges for each running guest OS
Answer Description
Under a per-socket licensing model, the software vendor charges for each occupied CPU socket, regardless of how many cores are inside that processor. Because the socket count on the host stays at two, replacing 12-core CPUs with 32-core CPUs does not require extra licenses or fees.
By contrast, a per-core model would become more expensive when core counts rise, a per-virtual-machine model ties cost to the number of guest instances rather than hardware, and a per-concurrent-user model is based on active sessions, none of which address the hardware-upgrade scenario described.
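To make the math concrete: with per-socket licensing, this host needs two licenses (one per occupied socket) both before and after the upgrade. Under a per-core model, the licensed core count would jump from 24 (2 × 12) to 64 (2 × 32), which is why confirming the per-socket metric in the contract keeps costs unchanged.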
Ask Bash
What is per-socket licensing?
Why does replacing the CPU not affect per-socket licensing costs?
How is per-core licensing different from per-socket licensing?
What factors should be considered when choosing a licensing model?
A systems administrator applies a critical security update to a production Linux web server that is also a virtual machine. After the mandatory reboot, the main application service fails to start. System logs show "dependency error: incompatible shared library version" for a core library updated by the patch. The administrator confirmed a VM snapshot was successfully created immediately before applying the update. Which of the following is the BEST immediate action to take?
Attempt to manually downgrade only the specific shared library that is causing the incompatibility.
Create a symbolic link from the new library name to the name the application expects.
Immediately search for and install an updated version of the web application that is compatible with the new library.
Revert the VM to the pre-update snapshot to restore service, then analyze the patch in a development environment.
Answer Description
The correct action is to revert the virtual machine to the pre-update snapshot. When a patch or update causes a critical service failure in a production environment, the top priority is to restore service as quickly and safely as possible. Reverting to a known-good snapshot is the most reliable and fastest method to reverse the change that caused the problem. After restoring service, the administrator should then create a plan to test the failed patch in a non-production environment to diagnose the dependency issue without affecting live users. Attempting to manually downgrade a single library or find a new application version on a live production server is risky and time-consuming. Manually creating a symbolic link is an unstable workaround that can cause further system instability and should be avoided in production environments.
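As a minimal sketch of the recovery step, assuming the host uses KVM/libvirt and using placeholder names (web01 for the guest, pre-update for the snapshot):

virsh snapshot-list web01                          # confirm the pre-update snapshot exists
virsh snapshot-revert web01 pre-update --running   # roll the VM back to its pre-patch state and leave it running

The same revert operation is available from the management console of any major hypervisor.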
Ask Bash
What is a VM snapshot, and why is it important?
What are shared libraries, and why do dependency errors occur?
What is the difference between a production environment and a development environment?
Why is testing updates in a non-production environment critical?
Why is reverting to a snapshot preferred over manually fixing issues in production?
During an upgrade of rack servers, you replace separate Fibre Channel HBAs and Ethernet NICs with dual-port converged network adapters (CNAs). Each CNA is cabled to an access-layer switch that is intended to provide Fibre Channel over Ethernet (FCoE). You create virtual Fibre Channel (vFC) interfaces on the switch and bind them to the 10 GbE ports, but the vFC interfaces stay down and the operating system reports no Fibre Channel fabric. The Ethernet interfaces on the same ports are up and carrying IP traffic.
Which switch configuration change is MOST likely required before the CNAs will establish their FCoE links?
Disable jumbo frames by setting the interface MTU back to 1500 bytes.
Aggregate each CNA port into an LACP port-channel with the switch.
Enable Data Center Bridging with Priority Flow Control on the switch ports.
Activate Rapid Spanning Tree PortFast Edge on the access interfaces.
Answer Description
FCoE traffic must traverse a lossless Ethernet link. Losslessness is delivered by IEEE Data Center Bridging features, chiefly Priority Flow Control (PFC, 802.1Qbb), which are negotiated between the switch and the CNA with DCBX. When PFC is not enabled or the DCBX negotiation fails, the switch marks the vFC interface "FCoE down," so the CNA never completes the FCoE Initialization Protocol. Enabling Data Center Bridging with the appropriate PFC settings lets the switch and CNA agree on a no-drop traffic class; once the link is lossless, FIP succeeds and the vFC interface comes up. Adjusting MTU, forming an LACP port-channel, or toggling Spanning Tree features affects only standard Ethernet operation and does not satisfy FCoE's mandatory lossless requirement, leaving the vFC link down.
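A rough sketch of the kind of switch-side change involved, in Cisco NX-OS-style syntax (the interface number is a placeholder, and policy names and defaults vary by platform and software release):

feature fcoe
system qos
  service-policy type network-qos fcoe-default-nq-policy   ! apply the no-drop (lossless) class used for FCoE
interface Ethernet1/10
  priority-flow-control mode on                            ! enable PFC so DCBX can negotiate with the attached CNA

With the no-drop class applied and PFC enabled on the CNA-facing port, DCBX can negotiate the lossless class and the bound vFC interface should come up.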
Ask Bash
What is Data Center Bridging (DCB) and why is it critical for FCoE?
What is the purpose of Priority Flow Control (PFC) in an FCoE environment?
What role does the Data Center Bridging Exchange (DCBX) protocol play in this setup?
After installing Windows Server 2022 Datacenter on a new rack-mounted server, loading vendor RAID drivers, applying the latest updates, and installing monitoring agents, an administrator plans to use a standalone disk-to-disk hardware duplicator to create physical clones of the system drive for nine identical servers that will be deployed later today. Each target chassis has the same motherboard, storage controller, and firmware settings as the source. To avoid security identifier (SID) conflicts, activation errors, or other duplication problems when the clones first boot on the production network, what action should the administrator take on the reference server immediately before powering it down for cloning?
Disable Secure Boot in UEFI firmware so the duplicated drives will boot in legacy BIOS mode.
Convert the NVMe system disk from GPT to MBR so the boot record copies identically.
Execute Sysprep with the /generalize and /shutdown options to reseal the OS, then power the server off.
Install Microsoft Deployment Toolkit (MDT) and capture the server into a WIM image instead of using the disk duplicator.
Answer Description
Running Sysprep with the /generalize switch removes computer-specific data, including the machine SID, event-log history, and activation identifiers, so that every cloned instance generates unique values during its first boot (OOBE). This procedure is Microsoft's supported method for preparing a Windows installation that will be captured or physically duplicated. Converting the disk to MBR, disabling Secure Boot, or replacing the hardware duplicator with MDT imaging does not address duplicate SID or activation issues and therefore will not prevent the cloning problems described.
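On the reference server, the reseal step typically looks like the following when run from an elevated prompt (the /oobe switch is commonly added so the clones boot into the out-of-box experience):

rem Generalize the installation, send clones to OOBE on first boot, and power off for duplication
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown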
Ask Bash
What does the /generalize switch in Sysprep do?
Why is it important to avoid SID conflicts in a Windows environment?
How does Sysprep differ from imaging tools like MDT?
An enterprise subject to SOX compliance requires that any addition or removal of users from security-enabled groups on its Windows Server 2019 domain controllers be traceable for at least 90 days. The Security log is already forwarded to a SIEM with sufficient retention. Which local Windows audit policy should the systems administrator verify is enabled for Success (and preferably Failure) events so that these group-membership changes are recorded?
Audit Object Access
Audit Account Management
Audit Process Tracking
Audit Logon Events
Answer Description
Group-membership changes are written to the Security log only when the Audit Account Management policy (or its granular Advanced Audit sub-categories such as Security Group Management) is enabled. With this setting turned on, event IDs like 4728 (member added to a global group) and 4729 (member removed) are generated, allowing the SIEM to retain evidence for 90 days or longer.
Audit Logon Events records logon and logoff activity, not account or group modifications. Audit Process Tracking captures detailed process start and stop information, which is unrelated to group management. Audit Object Access logs access to files, registry keys, and other securable objects, but it does not capture the creation or modification of security groups. Therefore, enabling Audit Account Management is the correct way to meet the requirement.
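One way to verify and enable the setting, and then spot-check for the relevant events, is sketched below in PowerShell from an elevated prompt (the subcategory name assumes the Advanced Audit Policy configuration):

# Verify the effective setting, then enable Success and Failure auditing for group-membership changes
auditpol /get /subcategory:"Security Group Management"
auditpol /set /subcategory:"Security Group Management" /success:enable /failure:enable
# Spot-check that member-added/removed events (4728/4729) are reaching the Security log
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4728, 4729 } -MaxEvents 10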
Ask Bash
What is SOX compliance, and why is it important for IT systems?
What specific events are logged under Audit Account Management?
What is a SIEM, and how does it work with audit logs?
What are some examples of advanced sub-categories under 'Audit Account Management'?
How can you verify if the 'Audit Account Management' policy is enabled on a Windows Server?
What is the difference between Advanced Audit Policies and basic Audit Policies in Windows?
During a routine risk assessment, you discover that your organization's raised-floor data center uses overhead chilled-water pipes to feed in-row coolers. Facilities management wants an automatic alert the moment even a small amount of liquid escapes from those pipes, long before it can drip onto equipment or the floor. Which environmental control BEST satisfies this requirement?
Mount photoelectric smoke detectors inside the ceiling plenum above the pipe run.
Attach passive infrared (PIR) motion sensors to the pipe supports.
Install a rope-style leak-detection cable along the overhead chilled-water pipe.
Place differential air-pressure sensors between the hot and cold aisles.
Answer Description
A rope-style leak-detection cable is engineered to sense water anywhere along its conductive "rope" and is routinely installed beneath raised floors or secured to overhead piping in data centers. The sensor triggers as soon as moisture touches any point on the cable, providing the earliest possible warning.
Photoelectric smoke detectors look for airborne particulates generated by combustion and cannot detect liquid. Differential air-pressure sensors measure pressure differences between aisles or plenums to validate airflow and HVAC performance, not leaks. Passive infrared (PIR) motion sensors register changes in infrared energy caused by moving people or objects and likewise offer no capability to sense water. Therefore, only the rope-style leak-detection cable meets the stated need for immediate notification of a chilled-water leak.
Ask Bash
How does a rope-style leak-detection cable work?
Why are photoelectric smoke detectors unsuitable for detecting leaks?
What are differential air-pressure sensors used for in data centers?
What is the difference between a rope-style leak-detection cable and other environmental sensors?
Why is early detection of leaks critical in a raised-floor data center?
A Windows Server 2019 VM in the production cluster has begun blue-screening overnight. For security reasons you are not allowed to open an RDP session to the server, but the machine is online and reporting to Microsoft System Center Configuration Manager (SCCM). You need to grab the server's CCM log files (such as WUAHandler.log and ScanAgent.log) so you can review them on the site server without logging on to the VM. Which SCCM client-notification action should you trigger from the Configuration Manager console to accomplish this task?
Download computer policy
Enable verbose logging
Evaluate software update deployments
Collect Client Logs
Answer Description
The client-diagnostics action Collect Client Logs instructs the selected SCCM client to compress all of its CCM log files (up to 100 MB) and upload them to the management point, where they can be opened from Resource Explorer on the site server. This provides immediate access to the logs needed for troubleshooting without requiring interactive login.
- Enable verbose logging only changes the detail level of future log entries and does not retrieve existing logs.
- Evaluate software update deployments launches a compliance scan but does not copy log files to the site server.
- Download computer policy forces a policy refresh; again, no log files are transferred.
Therefore, Collect Client Logs is the correct action.
Ask Bash
What is Microsoft System Center Configuration Manager (SCCM)?
What are CCM log files, and what do they include?
How does the 'Collect Client Logs' action work in SCCM?
A systems administrator notices that several teams are still using an obsolete network-topology diagram after a firewall replacement was completed the previous night. Which of the following process changes would BEST prevent staff from referencing outdated documentation after future infrastructure modifications?
Place a creation-date watermark on every page of each diagram.
Hold a company-wide documentation review meeting every quarter.
Add a mandatory documentation-update task to the change-management closure checklist before an RFC can be completed.
Convert all topology diagrams to read-only PDFs on a shared drive.
Answer Description
Integrating a required "update affected documentation" step into the formal change-management workflow ensures that every Request for Change (RFC) is not closed until runbooks, diagrams, and other artifacts are revised and re-published. ITIL defines a service change as the addition, modification, or removal of a service component and its associated documentation, and lists documentation review/closure as part of the change process. PDF-locking, watermarks, or quarterly reviews can help, but they do not guarantee that documents are updated immediately after each change, so stale information can still circulate.
Ask Bash
What is ITIL in the context of change management?
What is an RFC, and why is it important in change management?
Why is immediate documentation updating critical after infrastructure changes?
While designing a branch-office server room that uses chilled-water in-row coolers and a 12-inch raised floor, you must recommend an environmental sensor to install underneath the floor tiles so the monitoring system can send an immediate alert if liquid escapes from a pipe. Which type of sensor is the BEST choice?
Three-axis vibration sensor
Differential air-pressure sensor
Passive infrared (PIR) motion sensor
Water-leak detection (rope or spot) sensor
Answer Description
Water-leak detection sensors (often implemented as rope or spot probes) complete an electrical circuit the moment conductive liquid touches the element, allowing the DCIM or BMS to raise an alarm before moisture reaches power or network equipment. Differential air-pressure sensors only measure airflow imbalance, PIR motion sensors detect the heat signature of people or animals, and vibration/accelerometer sensors look for seismic or mechanical shaking, none of which will detect the presence of water.
Ask Bash
How do water-leak detection sensors work?
What role does a DCIM or BMS play in monitoring environmental sensors?
What is the purpose of a raised floor in a server room with in-row coolers?
Why is a water-leak detection sensor better than other sensors for this scenario?
A systems administrator is tasked with deploying 25 identical physical servers. The requirements are strict: each server's OS installation must be fully automated, requiring no manual intervention, and must include the latest security patches and drivers from the outset to minimize post-installation configuration. Which of the following methods BEST meets all these requirements?
Manually install the OS on a master server, apply all updates, and then deploy a clone of the master server.
Create a slipstreamed installation media and use an answer file for an unattended installation.
Clone a fully patched virtual machine to each of the physical servers in a V2P conversion.
Perform a network installation of the standard OS, then apply patches using a post-installation script.
Answer Description
The correct answer is to create a slipstreamed installation media and use an answer file for an unattended installation. Slipstreaming is the process of integrating patches, service packs, and drivers directly into the operating system installation source files. This meets the requirement for all updates to be included "from the outset". An unattended installation uses an answer file (e.g., unattend.xml) to provide all the necessary configuration details, which fully automates the setup process and requires no manual intervention. Combining these two methods is the most efficient and accurate way to meet all the scenario's requirements.
- Performing a network installation and then running a post-installation script is incorrect because the patches are applied after the initial OS installation, not from the outset.
- Creating a "golden image" from a manually configured master server is a viable deployment strategy, but it is less flexible than using a slipstreamed source with an answer file. The slipstream/unattended method allows for easier updates to the source and modification of the answer file for different roles, without having to rebuild an entire disk image.
- Cloning a virtual machine to a physical server (V2P) is a complex migration process, not a standard deployment method. It often leads to significant driver and Hardware Abstraction Layer (HAL) issues, making it unsuitable for deploying new, identical servers.
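As a simplified sketch of the slipstream workflow with Windows tooling (all paths, the image index, and the update and driver file names are placeholders), updates and drivers are injected into the installation image offline, and setup is then driven by the answer file:

rem Mount the installation image, inject an update and drivers, then commit the changes
dism /Mount-Image /ImageFile:D:\sources\install.wim /Index:1 /MountDir:C:\mount
dism /Image:C:\mount /Add-Package /PackagePath:C:\updates\security-update.msu
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
dism /Unmount-Image /MountDir:C:\mount /Commit
rem Start the fully unattended installation with the prepared answer file
setup.exe /unattend:C:\deploy\unattend.xml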
Ask Bash
What is slipstreaming in the context of OS installation?
What is an answer file, and why is it used in unattended installations?
Why is cloning a virtual machine not suitable for deploying physical servers?
Why is the slipstreamed installation media with an answer file better than using a 'golden image'?
A systems administrator is installing a new 2U server with dual, redundant power supplies into a rack. The rack is equipped with two vertically-mounted Power Distribution Units (PDUs), one on each side. To ensure maximum availability and simplify future maintenance, what is the BEST method for managing the server's power cables?
Plug one power cable into each PDU, then tightly bundle both cables together down the center of the rack for the cleanest appearance.
Connect both power cables to the same PDU to ensure a common grounding path and bundle them neatly with plastic zip ties.
Route each power cable to a separate PDU, using opposite sides of the rack for each cable path, and secure them with hook-and-loop straps.
Use a cable labeling machine to mark each power cord, then plug both into the PDU with the most available outlets.
Answer Description
The correct answer describes the best practice for ensuring high availability and serviceability. Connecting each power supply to a separate PDU on opposite sides of the rack provides power source redundancy and physical path redundancy. If one PDU fails or requires maintenance, the server remains operational via the second PDU. Routing the cables on opposite sides prevents a single physical incident from severing both power connections. Using hook-and-loop straps is the preferred method for securing cables in a data center, as they are reusable, adjustable, and less likely to damage cables than plastic zip ties, which can be over-tightened.
Connecting both power cables to the same PDU creates a single point of failure, negating the benefit of having dual power supplies. Bundling the two redundant power cables together also creates a physical single point of failure; if the bundle is accidentally cut or unplugged, the server loses all power. While labeling cables is a good practice, it does not address the primary requirement of power redundancy and availability.
Ask Bash
Why is it important to route each power cable to a separate PDU on opposite sides of the rack?
What are the advantages of using hook-and-loop straps instead of plastic zip ties for securing cables?
What is the role of redundant power supplies, and why are they critical in servers?
Nice!
Looks like that's it! You can go back and review your answers or click the button below to grade your test.