CompTIA Server+ Practice Test (SK0-005)
Use the form below to configure your CompTIA Server+ Practice Test (SK0-005). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

CompTIA Server+ SK0-005 Information
The CompTIA Server+ (SK0‑005) certification is tailored for IT professionals aiming to validate their proficiency in installing, managing, securing, and troubleshooting server systems across data center, on‑premises, and hybrid environments. Launched in May 2021, this mid‑level exam comprises up to 90 multiple‑choice and performance‑based questions, to be completed in 90 minutes, and requires a passing score of 750 on a 100–900 scale. Candidates are expected to have approximately two years of hands‑on experience in server environments and should possess foundational knowledge equivalent to CompTIA A+ certification.
The exam covers four core domains: Server Hardware Installation and Management (18%), Server Administration (30%), Security and Disaster Recovery (24%), and Troubleshooting (28%).
The hardware domain includes tasks like racking servers, managing power and network cabling, configuring RAID, and maintaining various drive types, from SSDs to hybrid systems.
The administration domain focuses on OS installation (GUI, core, virtualized, or scripted), network configuration, server roles and virtualization, scripting basics, asset documentation, backup of configurations, and licensing concepts.
The security and disaster recovery domain encompasses server hardening techniques, physical and data security, identity and access management, backup strategies (full, incremental, snapshot), and recovery planning, including hot, warm, cold, and cloud-based site setups.
The troubleshooting domain emphasizes systematic problem-solving across hardware, storage, OS and software, network connectivity, and security issues, involving techniques such as diagnostics, log analysis, reseating components, and resolving boot errors or DHCP/DNS issues.
Aspiring candidates should follow a structured preparation plan using official exam objectives to guide their study. Practical experience and familiarity with real-world scenarios—especially using hands-on labs, performance-based exercises, scripting tasks, RAID configuration, virtualization, and disaster recovery setups—can significantly enhance readiness. This targeted strategy helps ensure both technical competence and confidence when tackling the SK0-005 Server+ exam.
Free CompTIA Server+ SK0-005 Practice Test
- Questions: 15
- Time: Unlimited
- Included Topics: Server Hardware Installation and Management, Server Administration, Security and Disaster Recovery, Troubleshooting
You are logged in as ordinary user "sysops" on a CentOS 9 test server. To reproduce an application crash you export a diagnostic variable:
$ export LD_PRELOAD=/opt/debug/libmalloc.so
The crash only occurs with this variable present, but the binary itself must be executed as root. The sudo policy is still using its default env_reset setting, which normally clears most user-defined variables.
Which single sudo command-line option should you include when you run the binary so that LD_PRELOAD is retained rather than stripped out?
-E
-H
-i
-k
Answer Description
The -E (or --preserve-env) flag tells sudo's security policy to keep the current environment variables, overriding the default env_reset behavior. This allows LD_PRELOAD and any other user-defined variables to be passed to the root process.
-i launches the target user's login shell and initializes a fresh environment, so LD_PRELOAD would be removed.
-H only resets the HOME variable to the target user's home directory and does nothing to preserve other variables.
-k (or --reset-timestamp) merely clears cached sudo credentials; it has no effect on which environment variables are passed to the command.
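For illustration, a reproduction run might look like the following (the binary path /opt/app/bin/crashd is hypothetical, and the sudoers policy must permit the caller to preserve the environment):
$ export LD_PRELOAD=/opt/debug/libmalloc.so
$ sudo -E /opt/app/bin/crashd    # -E/--preserve-env keeps the caller's variables for the root process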
A systems administrator is investigating a performance problem with a newly racked physical server. Although the server responds to ping, large file transfers crawl and application sessions frequently time out. The network switch port connected to the server is hard-coded to 100 Mbps, full-duplex. In the server's operating system the NIC statistics show a rapidly increasing number of CRC errors and collisions.
Which of the following is the MOST likely cause of the issue?
The network switch port is experiencing a hardware failure.
The server's NIC is set to auto-negotiate speed and duplex.
The server has been configured with incorrect IP address settings.
The server is connected with a faulty patch cable.
Answer Description
The server NIC is still configured for auto-negotiation. When one side of an Ethernet link is manually set to a fixed speed and duplex (100 Mb/s full) and the other side remains on auto-negotiation, the auto-negotiating device cannot determine the duplex mode. Per IEEE 802.3, it therefore defaults to half-duplex, creating a duplex mismatch. The half-duplex side detects many collisions, the full-duplex side records FCS/CRC errors, and bulk traffic slows dramatically even though basic connectivity (ping) still works.
Why the other choices are wrong:
- A faulty patch cable can cause CRC errors, but it does not normally generate high collision counts, the hallmark of a duplex mismatch.
- Incorrect IP settings would prevent or limit connectivity rather than merely slow large transfers while leaving Layer-2 error counters climbing.
- A switch-port hardware failure is more likely to drop the link entirely or flap the interface than to show consistent collision/CRC patterns.
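On a Linux server, one way to confirm and correct the mismatch is with ethtool (the interface name eth0 is an assumption):
$ ethtool eth0                                             # shows negotiated speed/duplex and whether auto-negotiation is on
$ sudo ethtool -s eth0 speed 100 duplex full autoneg off   # hard-code the NIC to match the switch port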
Your organization runs a file share on a rack-mount server that contains eight 2 TB 10,000 RPM SAS drives in a single RAID 6 virtual disk. The array's usable capacity is 12 TB, of which 80 percent is already occupied. Monitoring shows the data set is growing at a steady 4 percent per month. You must ensure the share has enough space to last the next 18 months while keeping the same RAID level and without exceeding the server's 12-drive backplane. During the next maintenance window, what is the minimum number of additional 2 TB disks you must install?
Install two additional 2 TB disks.
Install four additional 2 TB disks.
Install three additional 2 TB disks.
Replace all existing drives with larger-capacity disks and rebuild an eight-drive RAID 6 array.
Answer Description
The array currently holds 9.6 TB of data (12 TB × 0.80). Over 18 months that amount will grow by (1.04)^18 ≈ 2.03, reaching roughly 19.5 TB. A RAID 6 array provides usable capacity equal to (N − 2) × 2 TB, so each extra 2 TB drive adds exactly 2 TB of usable space. To raise usable capacity from 12 TB to at least 19.5 TB requires ⌈(19.5 − 12) / 2⌉ = 4 additional drives, bringing the array to 12 disks and 20 TB of usable space. Adding two or three drives would leave less than 19.5 TB, and a wholesale replacement with larger disks costs more than the minimum solution.
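As a quick sanity check of the arithmetic, the projection can be reproduced in a shell one-liner (figures taken from the question):
awk 'BEGIN {
    used = 12 * 0.80                  # 9.6 TB in use today
    proj = used * 1.04 ^ 18           # ~19.45 TB after 18 months at 4%/month
    need = proj - 12                  # usable capacity still required
    drives = int(need / 2)            # each extra 2 TB disk adds 2 TB usable in RAID 6
    if (drives * 2 < need) drives++   # round up to whole drives
    printf "projected: %.2f TB, extra drives: %d\n", proj, drives
}'
This prints "projected: 19.45 TB, extra drives: 4", confirming the answer.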
During a hardware refresh, a systems administrator populates the bottom 10 U and the top 8 U of a 42 U cabinet that sits in a raised-floor cold-aisle/hot-aisle layout. All installed 1 U servers draw air in from the front and exhaust out the rear. Soon after the systems come online, temperature probes on the front of the upper servers report inlet temperatures that are 8 °C higher than those recorded at the bottom of the rack, even though the CRAC supply temperature and airflow are within specification. Which rack-level action will BEST restore proper cooling for the affected servers?
Install blanking panels in the empty rack units between the two groups of servers.
Replace the perforated rear door of the cabinet with a solid door to keep exhaust air inside the rack.
Increase the CRAC fan speed to raise static pressure in the cold aisle.
Rotate the top servers so their rear panels face the cold aisle and their fronts face the hot aisle.
Answer Description
The open rack units located between the two groups of equipment allow hot exhaust air to curl back into the cold aisle and be drawn into the intakes of the upper servers. Installing blanking (filler) panels across those unused U-spaces seals the openings so cold supply air is forced through the equipment instead of bypassing it, eliminating recirculation and lowering inlet temperatures. Rotating servers so their rear panels face the cold aisle disrupts the hot-aisle/cold-aisle scheme and is specifically discouraged by industry guidelines. Swapping a perforated rear door for a solid door would trap heat rather than remove it, while increasing CRAC fan speed addresses room-level airflow (not the localized bypass caused by empty rack spaces) and wastes energy without fixing the root cause.
A systems administrator is deploying a new physical server that will function as a web server in the perimeter network (DMZ). The administrator manually configures the network interface with the following settings:
- IP Address: 10.100.100.50
- Subnet Mask: 255.255.255.0
- DNS Server: 8.8.8.8
The administrator confirms that the server can successfully communicate with other servers on the same 10.100.100.0/24 subnet. However, the server is unable to reach the internet to download necessary software updates. Which of the following configuration changes is MOST likely to resolve this issue?
Change the IP address, as it is an invalid APIPA address.
Assign the primary DNS server address to the internal corporate DNS.
Change the subnet mask to 255.255.0.0.
Configure the default gateway address.
Answer Description
The correct answer is to configure the default gateway address. A default gateway is a device, typically a router, that serves as an access point to other networks. When a server needs to send traffic to an IP address outside of its own local subnet, it sends that traffic to its configured default gateway. In this scenario, the server can communicate with devices on its local 10.100.100.0/24 subnet but cannot reach the internet, which is on an external network. This indicates that the server does not have a route to external networks, a problem solved by setting the default gateway address.
- Assigning a different DNS server would not solve the problem. While DNS is required to resolve domain names to IP addresses, the fundamental issue here is network reachability, not name resolution. The server cannot reach any external IP address, which is a routing problem.
- The IP address 10.100.100.50 is a valid private IP address as defined by RFC 1918 and is not an APIPA address. APIPA addresses are in the 169.254.0.0/16 range and are self-assigned when a device cannot contact a DHCP server.
- The subnet mask 255.255.255.0 is a correct and standard mask for a /24 network, which is appropriate for the IP address assigned. It correctly defines the local network boundary.
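On a Linux host, the fix is a one-line routing change; the gateway address 10.100.100.1 below is an assumption, since the question does not state the router's IP:
$ sudo ip route add default via 10.100.100.1 dev eth0
$ ip route show    # verify the default route is now present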
While investigating why an Internet-facing Linux web server is accepting unexpected connections, you need to quickly determine which TCP ports on the host are reachable from the DMZ and identify the application protocol running on each open port. Intrusive vulnerability or brute-force scripts must be avoided, but speed is more important than stealth. Which Nmap command best meets these requirements?
nmap -sU -sC 203.0.113.25
nmap -O -sS 203.0.113.25
nmap -sV -T4 203.0.113.25
nmap -Pn -p- 203.0.113.25
Answer Description
The goal is to detect open TCP ports and perform service-version identification without launching intrusive NSE scripts. The command that accomplishes this enables Nmap's built-in version-detection engine (-sV) and speeds the scan up with the aggressive timing template (-T4). Options that launch default scripts (-sC) or vulnerability probes are unwanted, and there is no requirement for OS fingerprinting, UDP probing, or scanning every port.
- nmap -sV -T4 203.0.113.25: Performs a TCP port scan, interrogates each open port to learn the protocol and banner, and finishes quickly because of the timing template. This satisfies all stated requirements.
- nmap -O -sS 203.0.113.25: Adds OS detection and a SYN scan but does not identify application protocols, so it misses the key requirement.
- nmap -sU -sC 203.0.113.25: Switches to UDP scanning and launches the default NSE script set, some of which are considered intrusive; it also omits the TCP ports you need to check.
- nmap -Pn -p- 203.0.113.25: Scans every port but neither detects services nor accelerates the scan, making it slower than necessary and less informative.
Therefore, nmap -sV -T4 203.0.113.25 is the command that best meets the stated requirements.
While troubleshooting why a Windows Server 2022 member server cannot open files stored at \\FS01\Profiles, you verify that DNS resolution and basic connectivity succeed. You suspect that an outdated persistent mapping, cached with incorrect credentials, is blocking access. Which single Command Prompt command lets you view the current mapping and remove it so that a fresh connection can be created?
mountvol \\FS01\Profiles /D
netstat -a | find "\\FS01\Profiles"
diskpart remove volume \\FS01\Profiles
net use \\FS01\Profiles /delete
Answer Description
The net use command lists all active SMB drive or UNC mappings and, with the /delete switch, removes a specified mapping. Running net use \\FS01\Profiles /delete clears the stale connection and its credentials so the share can be re-mapped. Diskpart is a utility for local disk partitions only, mountvol edits NTFS mount points, and netstat lists network sockets; none of these can enumerate or remove SMB drive mappings.
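A typical session might look like this (a sketch; only the share named in the question is shown):
C:\> net use
(lists all current mappings and their status)
C:\> net use \\FS01\Profiles /delete
(removes the stale mapping and its cached credentials so it can be re-created)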
A data-center VLAN is numbered with the global unicast prefix 2001:db8:22:10::/64. The router's interface on that VLAN is configured as the default gateway 2001:db8:22:10::1/64. You are manually assigning a static IPv6 address to a new application server on this subnet. Which of the following addresses is valid for the server and follows common addressing conventions?
fe80::25/64
2001:db8:22:10::1/64
2001:db8:22::25/64
2001:db8:22:10::25/64
Answer Description
The server must use an address that:
- falls inside the same /64 prefix as the subnet (2001:db8:22:10::/64),
- does not duplicate the router's gateway address, and
- is not a special-purpose (e.g., link-local) prefix.
2001:db8:22:10::25/64 satisfies these conditions: it retains the full 64-bit network prefix and chooses a unique host identifier (::25).
Why the other choices are wrong:
- 2001:db8:22:10::1/64 duplicates the gateway address, which would break connectivity.
- fe80::25/64 is a link-local address (fe80::/10) that is not routed beyond the local link and is never used for static global assignments.
- 2001:db8:22::25/64 is outside the required subnet; missing the "10" hextet places it in a different /64 and the host would not reach the intended gateway.
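A minimal static configuration matching the correct answer might look like this on a Linux server (the interface name eth0 is an assumption):
$ sudo ip -6 addr add 2001:db8:22:10::25/64 dev eth0
$ sudo ip -6 route add default via 2001:db8:22:10::1    # the router's gateway address from the question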
A system administrator is investigating reports of data corruption on a critical database server. The corruption manifests as subtle, incorrect characters in various database records and does not align with specific user actions or application functions. While storage diagnostics show no disk failures, the server's management logs indicate numerous single-bit memory errors were corrected over the past month, but these corrections are no longer being reported. Which of the following is the MOST likely cause of this data corruption?
Failing ECC memory
Filesystem journaling errors
A zero-day exploit in the database application
Silent data corruption (bit rot) on the storage array
Answer Description
The correct answer is failing ECC memory. Error-Correcting Code (ECC) memory is designed to detect and correct single-bit errors in RAM. The logs showing a high number of corrected errors indicate that a memory module was degrading. The cessation of these logged corrections suggests the module has failed to a point where errors are now multi-bit and uncorrectable, or the ECC function itself has failed. This allows corrupted data to be passed from memory to the CPU and then written to the database, resulting in the type of subtle data corruption described. Silent data corruption on the storage array (bit rot) is less likely because the evidence specifically points to a memory issue. A zero-day exploit is unlikely to cause such random, subtle errors and is not supported by the log evidence. Filesystem journaling errors relate to inconsistencies after a crash, not the type of in-memory data corruption indicated by the ECC error logs.
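On a Linux host with EDAC drivers loaded, one way to spot-check the memory-error counters the logs describe is via sysfs (a sketch; paths and counter granularity vary by platform):
$ grep . /sys/devices/system/edac/mc/mc*/ce_count    # corrected (single-bit) errors per memory controller
$ grep . /sys/devices/system/edac/mc/mc*/ue_count    # uncorrectable (multi-bit) errors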
During a data-center deployment you must write a Bash script that blocks further configuration steps until the HTTPS listener on the same host is accepting connections. The script will run nc -z localhost 443 every five seconds to test the port. Which basic loop construct lets the script automatically keep retrying as long as the command returns a non-zero exit status and then exit the loop immediately when the command succeeds, without extra negation or break statements?
An until loop that surrounds the test command.
A while loop that surrounds the test command.
A case statement that evaluates the test command's exit status.
A for loop that iterates over a fixed sequence of retry counts.
Answer Description
In Bash an until loop executes its body repeatedly while the test command returns a non-zero (false) exit status, and it stops as soon as that command succeeds (exit status 0). Because nc -z localhost 443 fails until the port is open, wrapping the command in an until loop makes the script wait and then continue automatically when the service is ready.
A while loop has the opposite logic (it repeats only while the test command succeeds), so it would loop after the port is already open and exit immediately when the port is closed. A for loop requires a predefined list or counter and would need extra logic to stop at the right time, while a case statement is used for single evaluations rather than continuous polling. Therefore, the until loop is the most appropriate construct.
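A minimal sketch of the loop described above:
until nc -z localhost 443; do
    sleep 5    # port still closed; retry every five seconds
done
echo "HTTPS listener is up; continuing configuration"    # reached as soon as nc exits 0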
During a remote-branch deployment, a systems administrator wants to boot a lightweight hypervisor from the server's internal dual Secure Digital (IDSDM) module so that every front-bay SSD can be reserved for production data. Two identical 32 GB microSD cards will be configured in a mirrored pair. Which primary limitation of Secure Digital media must the administrator take into account before approving this design?
The SD module requires a dedicated PCIe RAID card, consuming a scarce expansion slot.
An IDSDM cannot mirror two cards; it only supports single-disk (JBOD) mode.
SD flash has relatively low write-endurance, so frequent writes can wear the cards out quickly.
SD cards draw more than 15 W at peak, so they need a high-current PDU outlet.
Answer Description
Secure Digital cards use low-cost flash that is rated for far fewer program/erase cycles than enterprise-class SSDs. When a hypervisor writes log, scratch, or OS-update data to the boot device, the repeated writes can quickly exhaust the card's limited endurance, leading to corruption or failure, even when two cards are mirrored. Throughput, power draw, hot-swap support, and RAID levels are secondary considerations; the write-cycle lifespan is the factor that most often disqualifies SD cards for anything beyond light, mostly read-only boot duties.
A systems administrator is tasked with applying the latest monthly OS security patches to a critical production server. This server hosts a proprietary financial application from a third-party vendor. The vendor has not yet certified the new OS patches and has stated that their support agreement is only valid for certified configurations. The administrator's primary goals are to maintain security compliance by patching and ensure the application remains stable. Which of the following is the most appropriate next action?
Deploy the patches to a staging server that mirrors the production environment to test for application compatibility.
Withhold the patches from the production server until the vendor officially certifies them.
Apply the patches to the production server during the next scheduled maintenance window.
Request emergency certification of the patches from the application vendor before deployment.
Answer Description
The correct action is to first deploy the patches to a staging environment that mirrors the production server. This approach allows the administrator to test the patches for any conflicts with the proprietary application without affecting the live production system. It is the best way to balance the need for security updates with the risk of causing application instability on a critical system.
Applying patches directly to the production server is too risky without testing, as it could cause an outage of a critical application. Withholding the patches indefinitely leaves the server vulnerable to security threats, which is also an unacceptable risk. Contacting the vendor is a reasonable step, but testing in a staging environment provides actionable data and is a more proactive and immediate step the administrator can take to assess the actual risk.
A systems administrator is alerted that a critical Linux application server is running low on disk space on the root partition. Upon investigation, the administrator finds that a single log file, /var/log/app/application.log, has grown to over 100 GB. The administrator needs to implement a long-term, automated solution to prevent this from recurring while preserving recent log data for troubleshooting. Which of the following is the MOST effective solution?
Configure a log rotation utility to schedule the regular archiving and compression of the log file.
Write a cron job that runs a script nightly to delete the application.log file.
Modify the application's configuration to send all log output to /dev/null.
Provision a larger disk for the server and move the /var/log directory to the new partition.
Answer Description
The correct answer is to configure a log rotation utility. Log rotation is the standard and most effective method for managing log files that grow over time. This process automatically renames, compresses, moves, and eventually deletes old log files based on a defined schedule or size threshold. This prevents any single file from consuming all available disk space while preserving recent logs for analysis and compliance. Increasing the disk size is only a temporary fix and does not address the root cause of the uncontrolled log growth. Writing a script to simply delete the file would result in the loss of potentially critical troubleshooting data. Disabling logging entirely by redirecting output to /dev/null would solve the space issue but is a severe anti-pattern that would make future troubleshooting and security auditing impossible.
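A minimal logrotate policy for the file in question might look like the following (the retention and size values are assumptions; the path comes from the question):
/var/log/app/application.log {
    daily
    rotate 14        # keep two weeks of compressed archives
    size 500M        # also rotate early if the file exceeds 500 MB
    compress
    missingok
    notifempty
    copytruncate     # truncate in place so the application can keep its open file handle
}
Saved under /etc/logrotate.d/, the policy can be dry-run with logrotate --debug /etc/logrotate.d/app before it is left to the daily scheduler.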
A server administrator has implemented several security measures on a new Linux server. A strong UEFI password is in place, and the boot order is locked to the internal drives only. Despite these precautions, the administrator is concerned that an attacker with physical console access could still interrupt the boot process to access a recovery shell and reset the root password. Which of the following security controls would specifically mitigate this risk?
Configure a GRUB password.
Enable full disk encryption.
Implement a chassis intrusion alert in the BIOS.
Set up a host-based intrusion detection system (HIDS).
Answer Description
The correct answer is to configure a GRUB password. GRUB is the bootloader for most Linux distributions. Setting a GRUB password prevents unauthorized users from modifying boot parameters or accessing single-user/recovery modes, which could be used to gain root access.
- Full disk encryption (FDE) is incorrect because, while it protects data at rest if the drive is stolen, it does not prevent an attacker with console access from interrupting the boot process and attempting to access the bootloader menu itself. The bootloader password protects access to these boot-time options.
- A host-based intrusion detection system (HIDS) is incorrect as it operates within the loaded operating system to monitor for threats and is not active during the pre-boot or bootloader stages.
- A chassis intrusion alert is a physical security measure that detects when the server case has been opened. It does not prevent an attacker who already has console access from interacting with the boot process.
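On a RHEL-family system, one way to set this up is with the packaged helper (Debian-based distributions instead use grub-mkpasswd-pbkdf2 and an edit to /etc/grub.d/40_custom):
$ sudo grub2-setpassword                          # prompts for and stores a PBKDF2-hashed bootloader password
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # regenerate the config so the protection takes effect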
A Linux application server was recently configured with a host-based intrusion detection system (OSSEC/Wazuh). Since the change, nightly SCP backup jobs from three designated backup servers fail intermittently with a Connection timed out message. A review of the IDS logs on app01 shows repeated alerts such as:
** Alert 123456789.1234: - syslog,authentication_failed
2025-08-25 02:15:23 (app01) 192.168.50.10->sshd
Rule: 5716 (level 5) - "SSHD scan"
Src IP: 10.20.30.40
Immediately after each alert, active-responses.log records:
/var/ossec/active-response/bin/firewall-drop.sh add - 10.20.30.40 5716
The address 10.20.30.40 is one of the backup servers and uses SSH multiplexing to open many short-lived SCP sessions in parallel. The IDS active-response configuration currently contains:
<active-response>
<command>firewall-drop</command>
<location>local</location>
<rules_id>5712,5716</rules_id>
<timeout>900</timeout>
</active-response>
Which IDS configuration change will BEST allow the legitimate backup traffic to complete while still blocking real brute-force attacks?
Add each backup server's IP address to the IDS <white_list> or <allow_list> directive so Active Response never blocks them.
Reduce the Active Response timeout value from 900 seconds to 60 seconds.
Disable rule IDs 5712 and 5716 in the IDS ruleset.
Increase the Linux MaxSessions directive in /etc/ssh/sshd_config from 10 to 50.
Answer Description
The IDS is correctly detecting multiple rapid SSH connections but is misidentifying the parallel SCP sessions from the trusted backup servers as hostile and is executing firewall-drop to block their IP addresses for 15 minutes. Placing the backup servers' IP addresses in the IDS <white_list> (or <allow_list>) tells OSSEC/Wazuh never to apply an active response against those addresses. Alerts may still be generated, preserving visibility, but the backups will no longer be blocked.
- Reducing the timeout (900 → 60 s) would still interrupt the transfer and would only mask the symptom.
- Disabling the rules outright would remove protection against real SSH brute-force attacks from untrusted hosts.
- Increasing the SSH MaxSessions parameter affects the SSH daemon, not the IDS, and would not stop the IDS from dropping packets.
Therefore, adding the backup servers to the IDS white/allow list is the most effective and least disruptive fix.
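A minimal sketch of the corresponding ossec.conf change (only the backup server named in the question is shown; the other two would be listed the same way):
<global>
  <white_list>10.20.30.40</white_list>
</global>
Active Response skips allow-listed addresses, while rule 5716 continues to generate alerts for visibility.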