CompTIA Linux+ Practice Test (XK0-006)
Use the form below to configure your CompTIA Linux+ Practice Test (XK0-006). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

CompTIA Linux+ XK0-006 (V8) Information
CompTIA Linux+ (v8 / XK0-006) Exam
The CompTIA Linux+ (XK0-006) certification is designed for IT professionals who work with Linux systems. It validates skills in system administration, security, scripting, and troubleshooting. This certification is vendor-neutral, covering multiple distributions such as Ubuntu, CentOS, and Red Hat.
Exam Overview
The XK0-006 exam consists of a maximum of 90 questions, including multiple-choice and performance-based questions. Candidates have 90 minutes to complete the test. The exam costs $358 USD. A passing score is 720 on a scale of 100 to 900. The certification is valid for three years and can be renewed through CompTIA’s continuing education program.
Exam Content
The XK0-006 exam focuses on five main domains: system management, security, scripting and automation, troubleshooting, and Linux fundamentals. System management includes package management, system monitoring, and user administration. Security covers permissions, authentication, and encryption. Scripting and automation focus on Bash scripting and task automation. Troubleshooting tests problem-solving skills for system failures and performance issues. Linux fundamentals include file system hierarchy, networking, and command-line operations.
Who Should Take This Exam?
The CompTIA Linux+ certification is ideal for system administrators, Linux support technicians, and DevOps professionals. It is recommended for individuals with at least one year of Linux experience. This certification is beneficial for IT professionals working with servers, cloud infrastructure, and cybersecurity.
How to Prepare
Candidates should review the official CompTIA Linux+ Exam Objectives and study materials provided by CompTIA. Hands-on experience with Linux systems is essential. Practice exams can help assess readiness and identify weak areas. Using Linux in a lab or virtual environment can provide practical experience with commands, system configuration, and troubleshooting.
Summary
The CompTIA Linux+ (XK0-006) certification is a valuable credential for IT professionals working with Linux systems. It validates essential skills in system administration, security, and automation. This certification is ideal for those managing Linux-based environments in IT infrastructure, cybersecurity, and cloud computing.
Free CompTIA Linux+ XK0-006 (V8) Practice Test
Press start when you are ready, or press Change to modify any settings for the practice test.
- Questions: 15
- Time: Unlimited
- Included Topics: System Management; Services and User Management; Security; Automation, Orchestration, and Scripting; Troubleshooting
A Linux administrator needs to manage KVM-based virtual machines on a headless server. The administrator must use a command-line tool to interact with the libvirt daemon to list all virtual machines, including those that are currently shut down. Which of the following commands would accomplish this task?
virt-manager --list-all
libvirtd list vms
qemu-img info --all
virsh list --all
Answer Description
The correct command is `virsh list --all`. The `virsh` command is the main command-line interface for managing guest virtual machines controlled by the libvirt daemon. The `list` subcommand, by default, shows only running VMs. To include inactive (shut down) VMs, the `--all` option is required.
- `virt-manager` is a graphical user interface, not a command-line tool, and is unsuitable for a headless server.
- `libvirtd` is the daemon (service) that manages the virtualization platform; it is not a user-facing command-line tool for managing individual VMs.
- `qemu-img` is a command-line tool used for creating, converting, and modifying virtual machine disk images, not for listing or managing the state of VMs.
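For reference, a quick illustration of the command in use (the VM names, IDs, and states shown are hypothetical):

$ virsh list --all
 Id   Name    State
-----------------------
 1    web01   running
 -    db01    shut off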
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the libvirt daemon in managing virtual machines?
How is the `virsh` tool different from `virt-manager`?
Why is the `--all` option necessary with the `virsh list` command?
A developer needs to merge two configuration files, `app.conf.old` and `app.conf.new`, into a single file named `app.conf`. They want to interactively review the differences side-by-side and choose which version of each differing line to include in the final output file. Which of the following commands will achieve this?
cp app.conf.old app.conf && sdiff --in-place app.conf app.conf.new
sdiff app.conf.old app.conf.new | merge app.conf
diff -u app.conf.old app.conf.new > app.conf
sdiff -o app.conf app.conf.old app.conf.new
Answer Description
The correct command is `sdiff -o app.conf app.conf.old app.conf.new`. The `sdiff` command is used to compare two files side-by-side. The `-o` (or `--output`) option is specifically used to create a third file by interactively merging the two source files. When `sdiff` is run with this option, it prompts the user to choose which version of a differing line to keep. The command `diff -u app.conf.old app.conf.new > app.conf` creates a unified diff patch and redirects it to a file, but it is not an interactive merging process. The command `sdiff app.conf.old app.conf.new | merge app.conf` is incorrect as `merge` is not a standard command for this purpose and `sdiff` does not output to a pipe in a format suitable for another program to merge interactively. The command `cp app.conf.old app.conf && sdiff --in-place app.conf app.conf.new` is incorrect because `sdiff` does not have an `--in-place` option.
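As a sketch of the interactive session (the differing line shown is hypothetical; the prompt responses are standard `sdiff` commands):

$ sdiff -o app.conf app.conf.old app.conf.new
timeout = 30                    | timeout = 60
%r

At each `%` prompt, entering l keeps the left (old) line, r keeps the right (new) line, and e lets you edit the merged result.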
Ask Bash
What is the purpose of the `sdiff` command in Linux?
How does the `-o` option in `sdiff` work?
Why doesn’t `diff -u app.conf.old app.conf.new > app.conf` work for interactive merging?
A Linux administrator maintains a Bash script that parses a 10 GB web-server log. Profiling with `time` and `strace` shows that almost all CPU time is spent inside a `while read line; do …; done` loop that uses only simple shell built-ins. The administrator consults an AI code assistant for ways to make the script run much faster with minimal changes. Which recommendation would provide the GREATEST reduction in run time?
Use the Bash builtin `mapfile -t` to load the file into an array before processing it in the script.
Run the script with `nice -n -5` to give it higher CPU scheduling priority.
Remount the filesystem containing the log with the `noatime` option before running the script.
Replace the entire `while read` loop with a single `awk` command that performs the same parsing in one pass.
Answer Description
Refactoring the loop into a single `awk` one-liner yields the largest speed-up. `awk` is a compiled C program that reads the file once and performs pattern and field processing internally, avoiding the per-line overhead of Bash's `read` builtin and loop control. Switching to `mapfile` still requires per-line Bash processing and can exhaust memory on a 10 GB file. Changing CPU priority (`nice`) or disabling atime updates affects scheduling or disk metadata writes but does not eliminate the algorithmic bottleneck in the shell loop.
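As an illustration only, a minimal sketch of such a refactor, assuming a combined-format access log where field 9 is the HTTP status code:

# Slow: Bash touches every line with builtins
count=0
while read -r _ _ _ _ _ _ _ _ status _; do
    [ "$status" = "404" ] && count=$((count + 1))
done < access.log
echo "$count"

# Fast: awk does the same counting in one compiled pass
awk '$9 == 404 { count++ } END { print count+0 }' access.log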
Ask Bash
What makes 'awk' faster than a Bash 'while read' loop?
What is the limitation of using 'mapfile -t' with a large file?
Why doesn’t changing CPU priority with `nice` significantly improve performance?
A system administrator needs to quickly document the storage layout of a new Linux server. They require a command that displays all block devices, including physical disks, partitions, and logical volumes, in a clear, hierarchical format to easily visualize their relationships. Which of the following commands is the most suitable for this specific task?
lsblk
fdisk -l
parted -l
blkid
Answer Description
The correct command for this scenario is `lsblk`. The `lsblk` command is designed to list block devices in a tree-like format, which clearly shows the hierarchical relationship between disks, the partitions on those disks, and any logical volumes or RAID arrays built upon them. `blkid` is used to display attributes of block devices, such as their UUIDs and filesystem types, but it does not show the hierarchical structure. `fdisk -l` and `parted -l` are used to display partition tables but do not present the information in the intuitive, nested tree structure that `lsblk` provides, making it harder to visualize complex storage setups like those involving LVM.
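A hypothetical example of the tree view (device names and sizes are illustrative):

$ lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda             8:0    0  100G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   99G  0 part
  ├─vg0-root  253:0    0   60G  0 lvm  /
  └─vg0-home  253:1    0   39G  0 lvm  /home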
Ask Bash
What additional details does the `lsblk` command provide?
How does `blkid` differ from `lsblk` in functionality?
When would you use `fdisk -l` or `parted -l` instead of `lsblk`?
During a security-hardening exercise, you are asked to configure a service account's Bash environment so that users can still recall commands in the current session, but no history is ever written to `~/.bash_history` when they log out. Which Bash environment variable should you unset or set to an empty value in the account's shell profile to meet this requirement?
HISTFILESIZE
HISTSIZE
HISTCONTROL
HISTFILE
Answer Description
Bash saves the in-memory history list to the file named by the `HISTFILE` variable when the shell exits. If `HISTFILE` is unset or set to a null value (for example, `export HISTFILE=""` or `export HISTFILE=/dev/null`), the write operation is skipped, so nothing is stored on disk, while the history list remains available during the session.
`HISTSIZE` limits how many commands are kept in memory; setting it has no effect on whether the file is written. `HISTCONTROL` only filters what gets saved, and `HISTFILESIZE` limits how many lines are retained in the file once it is written. None of those variables stop Bash from opening or updating the history file, so they do not satisfy the requirement.
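A minimal sketch of the profile change (assuming the account reads `~/.bash_profile` at login):

# ~/.bash_profile for the service account
unset HISTFILE    # history stays usable in the session; nothing is written at logout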
Ask Bash
What does the HISTFILE variable control in Bash?
How is HISTSIZE different from HISTFILE?
What is the purpose of the HISTCONTROL variable in Bash?
A systems administrator is configuring secure remote access for a user who only needs to transfer files. The administrator modifies the `/etc/ssh/sshd_config` file, setting `Subsystem sftp` to `internal-sftp` and adding a `Match User` block with a `ChrootDirectory` directive pointing to the user's home directory (`%h`). After restarting the SSH daemon, the user's SFTP connection is immediately closed after successful authentication. Which of the following is the most likely cause for this failure?
The `sftp-server` binary has not been copied into the chroot jail.
The user's assigned shell, `/bin/bash`, is not present within the chroot jail.
A firewall is blocking incoming connections for the SFTP service.
The ownership and permissions on the `ChrootDirectory` path are incorrect.
Answer Description
The correct answer is that the `ChrootDirectory` path and all of its parent components must be owned by the root user and not be writable by any other user or group. This is a strict security requirement of the OpenSSH daemon to prevent the chrooted user from breaking out of the jail. If `/home/user` is designated as the chroot directory, both `/home` and `/home/user` must be owned by `root`. Since a user typically needs write access to their own home directory, the standard practice is to create a subdirectory inside the home directory (e.g., `/home/user/upload`) owned by the user, while the home directory itself remains owned by root. The `internal-sftp` subsystem does not require a separate `sftp-server` binary or a user shell like `/bin/bash` to be present within the jail. Firewall rules would typically prevent the initial connection, not cause a disconnect after authentication.
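A minimal sketch of a working configuration (the username sftpuser is hypothetical, and the `ForceCommand` line is a common companion directive added for completeness):

# /etc/ssh/sshd_config
Subsystem sftp internal-sftp
Match User sftpuser
    ChrootDirectory %h
    ForceCommand internal-sftp

# Jail ownership: root owns the chroot path; the user writes only in a subdirectory
chown root:root /home/sftpuser && chmod 755 /home/sftpuser
mkdir -p /home/sftpuser/upload && chown sftpuser:sftpuser /home/sftpuser/upload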
Ask Bash
Why must the ChrootDirectory and its parent directories be owned by root?
What is the purpose of setting 'Subsystem sftp' to 'internal-sftp'?
How does creating a subdirectory inside the home directory improve security?
An administrator needs to capture only HTTPS traffic for later analysis. The capture must:
- Listen on interface eth0
- Record only TCP traffic whose source or destination port is 443
- Disable hostname and service-name resolution to reduce overhead
- Store the packets in a file named capture.pcap rather than printing them to the screen
Which tcpdump command satisfies all of these requirements?
tcpdump -i eth0 -n -w capture.pcap tcp port 443
tcpdump -i eth0 -nn -w capture.pcap tcp port 443
tcpdump -i eth0 -nn -w capture.pcap udp port 443
tcpdump -i eth0 -nn -r capture.pcap tcp port 443
Answer Description
The command "tcpdump -i eth0 -nn tcp port 443 -w capture.pcap" meets every stated need: "-i eth0" selects the correct interface, "-nn" turns off both DNS hostname and service-name resolution, the filter expression "tcp port 443" limits the capture to HTTPS traffic in either direction, and "-w capture.pcap" writes the raw packets to the specified file.
A command that uses only "-n" still resolves service names, violating the requirement. The choice that specifies "udp port 443" captures the wrong protocol, and the command that employs "-r" reads from an existing file instead of writing live traffic, so it would not record new packets.
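To confirm the capture afterward, the same tool can read the file back:

tcpdump -nn -r capture.pcap | head    # show the first few saved packets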
Ask Bash
What does the '-nn' option in tcpdump do?
Why is TCP traffic specified with 'tcp port 443' when capturing HTTPS traffic?
What is the purpose of the '-w' flag in the tcpdump command?
Your DevOps team uses AI-assisted generators to create Python helper scripts for Linux automation. To meet the code-linting best practice and prevent any commit that violates PEP 8 or introduces obvious logical errors from reaching the shared repository, you need an automated safeguard that still lets developers review their changes locally. Which approach BEST satisfies this requirement?
Use git rebase with the --squash option to condense AI-generated commits before merging.
Require all commits to be GPG-signed so the author can be verified by reviewers.
Enable Git Large File Storage (LFS) for the repository and review code style after each push.
Configure a Git pre-commit hook that runs pylint and black, blocking the commit if violations are detected.
Answer Description
Git supports a pre-commit hook that runs automatically before each commit is recorded. By wiring this hook to tools such as pylint (which checks PEP 8 style and detects likely coding mistakes) and a formatter like black, the commit is rejected until the problems are fixed. This forces developers (including those using AI-generated code) to verify and correct output before the code leaves their workstation, fulfilling the exam objective of code linting and the AI guideline of verifying output. Enabling Git LFS only changes how large files are stored; it does not analyze code. Rebasing/squashing tidies history but does not lint. GPG signing confirms authorship but performs no style or logic checks.
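A minimal sketch of such a hook (assumes pylint and black are installed; filenames containing spaces would need extra handling):

#!/bin/sh
# .git/hooks/pre-commit - lint staged Python files before allowing the commit
files=$(git diff --cached --name-only --diff-filter=ACM -- '*.py')
[ -z "$files" ] && exit 0          # no Python files staged; allow the commit
black --check $files || exit 1     # reject unformatted code
pylint $files || exit 1            # reject PEP 8 and likely-bug violations
exit 0

The hook file must be made executable, e.g. chmod +x .git/hooks/pre-commit.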
Ask Bash
What is a Git pre-commit hook?
What does pylint and black do in code linting?
What is PEP 8 and why is it important?
A development team is evaluating whether a new backup utility qualifies as free software under the Free Software Foundation (FSF) definition. The utility's license permits commercial and non-commercial redistribution of unchanged binaries, but it forbids sharing modified versions unless the original author grants written permission. According to the FSF's four essential software freedoms, which specific freedom does this restriction violate, causing the utility to fail the free-software test?
The freedom to distribute copies of your modified versions to others.
The freedom to study how the program works and modify it for private use.
The freedom to run the program as you wish, for any purpose.
The freedom to redistribute unchanged copies of the program to anyone.
Answer Description
The FSF states that a program is free software only when users possess all four essential freedoms. Freedom 3 is the right "to distribute copies of your modified versions to others," so the community can benefit from improvements. A license that blocks distribution of modified code denies this freedom and therefore renders the software non-free.
- Freedom 0 is unaffected because the program can still be run for any purpose.
- Freedom 1 remains intact if the source can be studied and altered privately.
- Freedom 2 is satisfied because unmodified copies may be redistributed.
- Freedom 3, however, is revoked by the license's prohibition on sharing modified versions, so the software fails to meet the FSF definition of free software.
Ask Bash
What are the four essential freedoms defined by the FSF?
Why does restricting the distribution of modified versions violate FSF's definition of free software?
What is the difference between 'free software' and 'open source software'?
You have booted a Linux server into single-user (rescue) mode after an unexpected power loss. Before you run fsck on the root file system (/), you must switch that file system to read-only without changing its mount point or rebooting. Which single command accomplishes this task?
mount -o remount,nosuid /
mount -o remount,ro /
mount -o bind,ro / /
mount -o remount,rw /
Answer Description
The remount option tells the kernel to change the mount options of an already-mounted file system in place. Adding `ro` sets the mount to read-only. Therefore, `mount -o remount,ro /` immediately flips the root file system to read-only while it remains mounted at /.
The other choices are incorrect:
- `mount -o bind,ro / /` attempts a bind mount; it would create a second mount of the same directory and is not how you change an existing mount's permissions.
- `mount -o remount,nosuid /` changes only the nosuid flag; the file system would still be read-write.
- `mount -o remount,rw /` explicitly keeps the file system read-write, the opposite of what is required.
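As a short illustration of the rescue-mode sequence described above:

mount -o remount,ro /    # flip the root file system to read-only in place
fsck -f /                # then check the file system safely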
Ask Bash
What does the 'remount' option do in the mount command?
What is the purpose of single-user mode in Linux?
Why must the root file system be switched to read-only before running fsck?
Users report that a ticket-tracking web application hosted on a single Linux server becomes noticeably sluggish during peak usage. While the slowdown is in progress, you capture the following statistics on the 4-core host:
top (14:18:07)
%Cpu(s): 3.7 us, 1.1 sy, 0.0 ni, 0.1 id, 94.6 wa, 0.5 hi, 0.0 si, 0.0 st
iostat -dx 5
Device: r/s w/s rkB/s wkB/s rrqm/s wrqm/s %util
nvme0n1 18.4 773.2 962.7 7879.3 0.0 0.1 99.4
vmstat 5
procs -----------memory--------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 3 0 12472 13004 2219644 0 0 130 1702 1020 2020 4 1 0 94 0
Which change is most likely to improve the page-load latency without modifying application code?
Aggregate two NICs in an 802.3ad (LACP) bond to increase available network bandwidth.
Lower the primary interface's MTU from 1500 to 1400 bytes to avoid packet fragmentation.
Raise the vm.swappiness value so the kernel pages inactive memory sooner.
Move the application's database and log files to faster storage (for example, an SSD array or dedicated NVMe device).
Answer Description
The `wa` value in top (94.6%) shows the CPUs are mostly waiting for I/O to complete, not running useful work. At the same moment, iostat reports the primary block device at 99.4% `%util`, which the `iostat` manual states indicates device saturation for a serial device. Together these numbers point to the storage layer as the bottleneck that is delaying transactions and causing slow application responses. Moving the database and log files to storage with lower latency (for example, an SSD or dedicated NVMe device) reduces I/O wait time and should restore normal responsiveness.
Raising vm.swappiness alters paging behavior, but the host is not swapping (`si`/`so` are 0). Bonding NICs or shrinking the MTU addresses network throughput and fragmentation, yet none of the captured metrics suggest a network problem: CPU time is spent waiting on disk, not on sockets. Therefore, replacing or accelerating the storage subsystem is the action most likely to relieve the slowdown.
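As an optional follow-up check (a sketch assuming the sysstat package is installed):

pidstat -d 5 3    # per-process disk read/write rates, to identify the heavy I/O consumer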
Ask Bash
What is the significance of the 'wa' (I/O wait) value in the `top` command output?
In the `iostat` output, what does the %util value of 99.4 indicate?
Why is raising the vm.swappiness value not a solution in this scenario?
Your monitoring team reports that large HTTPS file transfers keep failing after traffic is routed through a site-to-site IPSec VPN. A packet capture shows the server sending 1500-byte Ethernet frames while the VPN gateway returns ICMP "fragmentation needed" messages. Before changing any interface settings, you need to discover the largest packet that can traverse the entire path without fragmentation from the Linux server. Which command will let you verify that maximum unfragmented size?
`ethtool -S eth0 | grep drop` to look for oversized-frame counters
`ss -lnt | awk '{print $2}'` to view MSS values on listening sockets
`ip link set dev eth0 mtu 1400 && ping -c 4 203.0.113.10` to see if packets pass afterward
`ping -c 4 -M do -s 1472 203.0.113.10` and adjust the `-s` value downward until the echoes succeed
Answer Description
The ping command with the options `-M do` (set the DF flag; do not allow fragmentation) and `-s` (payload length) is the standard way to probe the path MTU from a Linux host. Starting with a 1472-byte payload (1500-byte MTU minus 20-byte IP and 8-byte ICMP headers) and decreasing the size until you receive replies tells you the exact maximum size that can pass the VPN tunnel. The other choices either modify the local interface MTU (which masks, not measures, the problem), display driver statistics, or list TCP listeners, none of which determines the path MTU between two endpoints.
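A quick sweep, as a sketch (the endpoint 203.0.113.10 comes from the question; the payload sizes are arbitrary test values):

for size in 1472 1452 1424 1392; do
    ping -c 1 -M do -s "$size" 203.0.113.10 >/dev/null 2>&1 && echo "payload $size OK"
done
# path MTU = largest working payload + 28 bytes (20-byte IP + 8-byte ICMP headers)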
Ask Bash
What is the purpose of the DF (Don't Fragment) flag in the ping command?
What is Path MTU, and why is it important in networking?
How is the payload size calculated for the ping command when testing Path MTU?
A junior administrator writes the following line in a Bash script to capture the first log file created on the current date that is stored inside a compressed archive:
BACKUP_DIR=`tar -tzf backup.tar.gz | grep "\`date +%F\`" | head -1`
The script fails with a "command not found" error because the inner backticks are parsed too early. Which rewritten command follows POSIX-compliant best practice for nested command substitution and eliminates the escaping problem?
BACKUP_DIR=`tar -tzf backup.tar.gz | grep "$(date +%F)" | head -1`
BACKUP_DIR=$(tar -tzf backup.tar.gz | grep `date +%F` | head -1)
BACKUP_DIR=`tar -tzf backup.tar.gz | grep "\`date +%F\`" | head -1 | tr -d '\n'`
BACKUP_DIR=$(tar -tzf backup.tar.gz | grep "$(date +%F)" | head -1)
Answer Description
The modern, POSIX-compliant form of command substitution is $( … ). It may be freely nested and does not require the awkward backslash-escaping that backticks need. Rewriting the command so that both the outer substitution and the inner date command use $( … ) resolves the parsing error and greatly improves readability:
BACKUP_DIR=$(tar -tzf backup.tar.gz | grep "$(date +%F)" | head -1)
Using backticks on the outside still leaves the legacy syntax in place, and mixing quoting with backticks provides no advantage. Keeping backticks inside the $( … ) form (or vice versa) re-introduces the same nesting problem. Adding an extra pipeline with `tr -d '\n'` does not fix the underlying quoting issue and alters the data unexpectedly. Therefore, the variant that converts all substitutions to $( … ) and keeps the inner date expression inside double quotes is the only answer that meets best-practice guidance and works in any modern POSIX shell.
Ask Bash
Why is $(...) preferred over backticks in POSIX scripts?
What does the command 'tar -tzf' do in this script?
Why is '$(date +%F)' used in the script?
An administrator is migrating an iptables DNAT rule to nftables. The existing rule is
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.20:8443
The ip family `nat` table and its `prerouting` chain already exist. Which single nft command recreates the same destination NAT behavior?
nft add rule ip filter input iif "eth0" tcp dport 443 redirect to 192.168.1.20:8443
nft add rule ip nat postrouting oif "eth0" tcp dport 443 dnat to 192.168.1.20:8443
nft add rule ip nat prerouting oif "eth0" tcp dport 443 snat to 192.168.1.20:8443
nft add rule ip nat prerouting iif "eth0" tcp dport 443 dnat to 192.168.1.20:8443
Answer Description
Destination NAT rules must be placed in a nat chain that is hooked at prerouting so the kernel can translate the destination address before routing. The correct syntax begins with `nft add rule`, specifies the ip family, the nat table, and the prerouting chain, then matches packets arriving on `eth0` with TCP destination port 443 and applies `dnat to` the internal host and port.
The other choices fail for different reasons:
- A rule in postrouting executes after routing decisions and is suited for SNAT or masquerade, not DNAT.
- Using snat changes the source address, not the destination, so it would not forward traffic to the backend server.
- A rule in the filter table with redirect would only forward traffic to a local port on the firewall and never reach 192.168.1.20.
Therefore, `nft add rule ip nat prerouting iif "eth0" tcp dport 443 dnat to 192.168.1.20:8443` is the only command that duplicates the original iptables DNAT behavior.
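For context, a sketch of the full setup had the table and chain not already existed (priority -100 is the conventional dstnat hook priority):

nft add table ip nat
nft 'add chain ip nat prerouting { type nat hook prerouting priority -100; }'
nft add rule ip nat prerouting iif "eth0" tcp dport 443 dnat to 192.168.1.20:8443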
Ask Bash
What is DNAT in nftables?
Why is prerouting used for DNAT in nftables?
How does nftables differ from iptables when configuring DNAT?
A systems administrator on a Debian-based server discovers that the package management system is in a broken state after a software installation was abruptly terminated. When attempting to install new packages using `apt`, the system reports errors related to unmet dependencies. Which of the following commands should the administrator run to attempt to correct the broken dependencies and fix the package cache?
sudo apt-get autoremove
sudo apt --fix-broken install
sudo dpkg --configure -a
sudo apt-get update
Answer Description
The correct command is `sudo apt --fix-broken install`. This option for the `apt` command is specifically designed to find and fix broken package dependencies on the system. It works by identifying missing dependencies and installing them, or by removing packages that are causing irreparable dependency conflicts.
- `sudo apt-get autoremove` is incorrect because it is used to remove packages that were automatically installed to satisfy dependencies for other packages and are now no longer needed. It does not fix broken installations.
- `sudo apt-get update` is also incorrect. This command only synchronizes the local package index files with the remote repositories; it does not install, upgrade, or fix any packages.
- `sudo dpkg --configure -a` is a plausible but incorrect choice as the primary step. This command configures all unpacked but unconfigured packages. While it can resolve some issues, `apt --fix-broken install` is the higher-level and more direct command for resolving dependency issues.
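As a sketch of a typical recovery sequence after an interrupted installation:

sudo apt --fix-broken install    # resolve and install missing dependencies
sudo dpkg --configure -a         # finish configuring any unpacked, unconfigured packages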
Ask Bash
What does the `apt --fix-broken install` command do?
What is the difference between `apt` and `dpkg` in package management?
When should you use `apt-get autoremove`?