CompTIA Linux+ Practice Test (XK0-006)
Use the form below to configure your CompTIA Linux+ Practice Test (XK0-006). The practice test can be configured to include only certain exam objectives and domains. You can choose between 5 and 100 questions and set a time limit.

CompTIA Linux+ XK0-006 (V8) Information
CompTIA Linux+ (v8 / XK0-006) Exam
The CompTIA Linux+ (XK0-006) certification is designed for IT professionals who work with Linux systems. It validates skills in system administration, security, scripting, and troubleshooting. This certification is vendor-neutral, covering multiple distributions such as Ubuntu, CentOS, and Red Hat.
Exam Overview
The XK0-006 exam consists of a maximum of 90 questions, including multiple-choice and performance-based questions. Candidates have 90 minutes to complete the test. The exam costs $358 USD. A passing score is 720 on a scale of 100 to 900. The certification is valid for three years and can be renewed through CompTIA’s continuing education program.
Exam Content
The XK0-006 exam focuses on five main domains: system management, security, scripting and automation, troubleshooting, and Linux fundamentals. System management includes package management, system monitoring, and user administration. Security covers permissions, authentication, and encryption. Scripting and automation focus on Bash scripting and task automation. Troubleshooting tests problem-solving skills for system failures and performance issues. Linux fundamentals include file system hierarchy, networking, and command-line operations.
Who Should Take This Exam?
The CompTIA Linux+ certification is ideal for system administrators, Linux support technicians, and DevOps professionals. It is recommended for individuals with at least one year of Linux experience. This certification is beneficial for IT professionals working with servers, cloud infrastructure, and cybersecurity.
How to Prepare
Candidates should review the official CompTIA Linux+ Exam Objectives and study materials provided by CompTIA. Hands-on experience with Linux systems is essential. Practice exams can help assess readiness and identify weak areas. Using Linux in a lab or virtual environment can provide practical experience with commands, system configuration, and troubleshooting.
Summary
The CompTIA Linux+ (XK0-006) certification is a valuable credential for IT professionals working with Linux systems. It validates essential skills in system administration, security, and automation. This certification is ideal for those managing Linux-based environments in IT infrastructure, cybersecurity, and cloud computing.

Free CompTIA Linux+ XK0-006 (V8) Practice Test
- 20 Questions
- Unlimited
- System Management; Services and User Management; Security; Automation, Orchestration, and Scripting; Troubleshooting
On a Debian-based server, you are tasked with decommissioning the local account named charlie. The security policy states that the account and its home directory (including the mail spool) must be deleted immediately, no backup should be kept, and any files charlie owns that are located outside /home/charlie - such as /srv/projects/ - must remain untouched. Which single deluser command meets all of these requirements?
deluser --remove-home charlie
deluser charlie
deluser --backup-to /srv/archive charlie
deluser --remove-all-files charlie
Answer Description
The invocation deluser --remove-home charlie removes the user account and deletes only the user's home directory and mail spool, leaving every other file the user owns elsewhere on the filesystem intact, and it performs no backup. Running deluser charlie without options leaves the home directory behind, so policy is not met. Using --remove-all-files eradicates every file owned by charlie, including those in /srv/projects/, which violates the requirement to preserve project data. The command with --backup-to implicitly enables the backup feature, creating an archive before deletion and therefore breaks the "no backup" rule.
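A minimal sketch of the decommissioning step and a follow-up check, using the account and paths from the question (sudo assumed):

sudo deluser --remove-home charlie   # removes the account, home directory, and mail spool only
ls -l /srv/projects/                 # confirm files owned by charlie elsewhere remain in place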
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between --remove-home and --remove-all-files in the deluser command?
What happens if you use deluser without any options?
Why does the --backup-to option violate the stated security policy?
A Linux system administrator is troubleshooting a server that has a significantly longer boot time than expected. The administrator needs to identify the individual services or units that are taking the most time to initialize during the boot process to pinpoint the source of the delay. Which of the following commands would be the MOST effective for this specific task?
systemctl list-units --state=failed
systemd-analyze blame
top
systemd-analyze critical-chain
Answer Description
The correct command is systemd-analyze blame. This command displays a list of all running units, sorted in descending order by the time they took to initialize. This allows an administrator to quickly identify the services that consumed the most time during the boot process.
systemd-analyze critical-chain is incorrect because it shows the chain of dependencies that are critical to the boot process, rather than a simple list of the slowest individual units. systemctl list-units --state=failed is incorrect as it only lists units that failed to start, not units that started successfully but were slow. top is incorrect because it is a real-time process monitoring tool used to view system resource usage after the system has booted, not to analyze the boot process itself.
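For illustration, both boot-analysis commands can be compared on any systemd host (output varies by system):

systemd-analyze blame | head -n 10   # slowest units first
systemd-analyze critical-chain       # dependency chain with per-unit timings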
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does 'systemd-analyze blame' specifically analyze?
How is 'systemd-analyze critical-chain' different from 'systemd-analyze blame'?
When should you use 'systemctl list-units --state=failed' during troubleshooting?
A system administrator is configuring a service on a Linux server and needs to quickly find all network IP addresses associated with the system's hostname. Which of the following commands will accomplish this task?
hostname -f
hostname -I
hostname -a
hostname -d
Answer Description
The correct command is hostname -I. The -I or --all-ip-addresses option is used to display all network addresses for the host.
- The hostname -f command is incorrect as it displays the Fully Qualified Domain Name (FQDN), not the IP address.
- The hostname -d command is incorrect because it displays the DNS domain name.
- The hostname -a command is incorrect as it displays the host's alias name, if one is configured.
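A quick illustration of the options discussed; the values printed depend on the host:

hostname -I   # all IP addresses assigned to the host
hostname -f   # fully qualified domain name (FQDN)
hostname -d   # DNS domain name
hostname -a   # alias name(s), if configured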
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the `hostname -I` command do?
What is the difference between `hostname -I` and `hostname -f`?
What is an FQDN and why is it different from an IP address?
You are preparing to start two containers with Podman on a RHEL server where SELinux is in enforcing mode. Both containers must read from and write to the same host directory /srv/data, which should be mounted at /data inside each container. You want SELinux to relabel the directory automatically so that it can be shared among multiple containers without disabling confinement. Which volume-mount option accomplishes this requirement?
Append :Z to the -v argument (for example, -v /srv/data:/data:Z)
Append :shared to the -v argument (for example, -v /srv/data:/data:shared)
Append :z to the -v argument (for example, -v /srv/data:/data:z)
Use no additional option; SELinux automatically permits shared bind mounts
Answer Description
The :z suffix on a bind-mount or -v option tells the container engine to relabel the host path with the shared container_file_t type and no unique MCS category. Any container that mounts the same path with :z will then have SELinux permission to access it, satisfying the requirement that multiple containers share the directory. The :Z suffix applies a private, unique MCS label that prevents other containers from using the directory. The :shared flag relates to mount-propagation, not SELinux contexts, and leaving the mount unlabeled typically causes SELinux to deny access when the second container tries to use the directory.
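A minimal sketch of the shared mount with Podman; the container names and the nginx image are placeholders:

podman run -d --name web1 -v /srv/data:/data:z docker.io/library/nginx
podman run -d --name web2 -v /srv/data:/data:z docker.io/library/nginx
ls -Zd /srv/data   # should now show the shared container_file_t context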
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the :z option do in Podman with SELinux enforcing mode?
How does the :Z option differ from :z in Podman volume mounts?
What is the purpose of SELinux in container environments like RHEL with Podman?
A Linux administrator is configuring a server that uses an external USB drive for daily backups. The drive is listed in /etc/fstab to be mounted automatically at boot. To prevent boot failures when the USB drive is not connected, which mount option should the administrator add to the corresponding entry in /etc/fstab?
nofail
remount
nodev
auto
Answer Description
The correct option is nofail. This option instructs the system not to report errors for this device if it does not exist, allowing the boot process to continue without interruption. The auto option, which is a default setting, will cause the system to attempt to mount the filesystem at boot, which can lead to delays or entry into emergency mode if the device is not present. The remount option is used to change the mount properties of an already mounted filesystem and is not used for initial mounting at boot. The nodev option is a security measure that prevents the interpretation of character or block special devices on the filesystem and does not relate to handling mount failures.
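An illustrative /etc/fstab entry for the backup drive; the UUID and mount point are hypothetical:

UUID=0aa1b2c3-d4e5-f607-8899-aabbccddeeff  /mnt/backup  ext4  defaults,nofail  0  2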
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the 'nofail' option do in /etc/fstab?
How does the 'auto' option in /etc/fstab differ from 'nofail'?
When should the 'nodev' option be used in /etc/fstab?
You asked a large-language model to generate a Kubernetes Deployment manifest for an intranet web service. The first response works, but it uses the deprecated extensions/v1beta1 API version and does not set any CPU or memory limits. According to prompt-engineering best practices referenced in the Linux+ objective on AI, which follow-up prompt is most likely to guide the model toward a corrected, production-ready manifest with the least trial-and-error?
"Please update your previous manifest by replacing
extensions/v1beta1 with apps/v1 and adding reasonable CPU and memory limits to every container, then return the complete YAML."
"Fix anything wrong in the manifest you just wrote."
"Regenerate the manifest and make it fully production-ready."
"Reset this conversation and give me a fresh Kubernetes Deployment manifest instead."
Answer Description
Effective prompt engineering is iterative: you supply the model with concrete feedback about what was wrong and instruct it to revise the earlier result. The option that names the two specific defects (deprecated API version and missing resource limits) and directs the model to return an updated manifest gives the model the contextual information it needs and avoids vague language. Simply regenerating without details, starting a new chat, or telling the model to "fix anything wrong" lacks the clarity and specificity needed for reliable improvement.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between `extensions/v1beta1` and `apps/v1` in Kubernetes?
Why are CPU and memory limits important for Kubernetes containers?
What are prompt-engineering best practices when working with AI for technical tasks?
While deploying two Docker containers on a CentOS Stream 9 host with SELinux in enforcing mode, you bind-mount the same host directory into each container with the option:
-v /srv/shared-data:/data:Z
The first container can create and read files under /data, but the second container receives "Permission denied" errors when it tries to access those files. Setting SELinux to permissive mode makes the error disappear, but you must keep SELinux enforcing. Which change to the volume specification will allow both containers to share the directory without disabling SELinux?
Mount the directory with the suffix ":z" instead of ":Z" ( -v /srv/shared-data:/data:z ).
Add the option --security-opt label=disable to each container.
Start both containers with the --privileged flag so SELinux no longer blocks the access.
Keep ":Z" but append the read-only flag ( -v /srv/shared-data:/data:Z,ro ).
Answer Description
The ":Z" suffix tells Docker (or Podman) to relabel the mount point with a private SELinux label that is unique to the first container. When a second container tries to use that path, its processes carry a different MLS/MCS label and SELinux blocks access. Replacing ":Z" with ":z" applies a shared label (container_file_t without categories), allowing any container to read and write the content while SELinux remains enforcing. Using --privileged or label=disable would also bypass the denial, but those options remove most of the confinement and are not the recommended or least-privilege fix. Adding :ro would prevent writes and still use the private label, so the problem would persist.
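A sketch of the corrected bind mounts; the container names and the alpine image are placeholders:

docker run -d --name app1 -v /srv/shared-data:/data:z alpine sleep infinity
docker run -d --name app2 -v /srv/shared-data:/data:z alpine sleep infinity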
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What do the ':Z' and ':z' options mean in Docker?
Why doesn’t adding ':Z,ro' fix the permission issue in this scenario?
What is the impact of using '--privileged' or '--security-opt label=disable'?
While troubleshooting an application on a CentOS Stream server, you run the command:
# getenforce
Permissive
Which statement best describes what this status means for SELinux on the host?
SELinux is operating with the targeted policy so only selected daemons are confined.
SELinux is actively denying operations that violate the loaded security policy.
SELinux is not blocking any actions but still logs every policy violation for auditing purposes.
SELinux is completely disabled; no policy is loaded and no AVC messages are generated.
Answer Description
The getenforce utility reports the current SELinux enforcement mode. When it returns "Permissive", the SELinux policy is loaded and every attempted access is still checked against that policy, but violations are only logged (as AVC messages) rather than blocked. In contrast, "Enforcing" would actively deny disallowed operations, "Disabled" would unload the SELinux module entirely and stop both labeling and logging, and the term "targeted" refers to a specific policy type rather than an enforcement mode. Therefore, the only accurate description of the Permissive state is that enforcement is turned off while auditing remains active.
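For reference, the mode can be checked and changed at runtime (a permanent change is made in /etc/selinux/config):

getenforce          # prints Enforcing, Permissive, or Disabled
sudo setenforce 0   # switch to permissive until the next reboot
sudo setenforce 1   # switch back to enforcing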
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is SELinux and why is it used?
What are AVC messages mentioned in SELinux logging?
How does the SELinux 'Permissive' mode differ from 'Enforcing' and 'Disabled' modes?
A system administrator needs to update a configuration file named app.conf. They must replace every instance of the old IP address 192.168.1.5 with the new IP address 10.10.0.5 within the file. The changes need to be saved directly back into app.conf. Which of the following commands will accomplish this task correctly?
sed -i 's/192.168.1.5/10.10.0.5/g' app.conf
sed 's/192.168.1.5/10.10.0.5/g' app.conf > app.conf
sed 's/192.168.1.5/10.10.0.5/g' app.conf
sed -i 's/192.168.1.5/10.10.0.5/' app.conf
Answer Description
The correct command is sed -i 's/192.168.1.5/10.10.0.5/g' app.conf. Here is a breakdown of the command:
- sed is the stream editor command.
- The -i option stands for "in-place" and modifies the file directly.
- 's/old/new/g' is the substitution command: s indicates the substitute command, /192.168.1.5/ is the pattern to search for, /10.10.0.5/ is the replacement string, and g is the global flag, which ensures all occurrences on each line are replaced, not just the first one.

The command sed 's/192.168.1.5/10.10.0.5/g' app.conf is incorrect because it lacks the -i flag; it will only print the modified text to standard output and will not save the changes to the file. The command sed 's/192.168.1.5/10.10.0.5/g' app.conf > app.conf is incorrect and dangerous: the shell processes the redirection > by truncating the output file (app.conf) before the sed command runs, so sed reads an empty file and the original content is lost. The command sed -i 's/192.168.1.5/10.10.0.5/' app.conf is incorrect because it is missing the global g flag; it would only replace the first occurrence of the IP address on any given line.
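One way to preview the substitution before applying it in place, reusing the pattern from the correct answer (the .bak backup suffix is optional):

sed 's/192.168.1.5/10.10.0.5/g' app.conf | diff app.conf -   # show what would change
sed -i.bak 's/192.168.1.5/10.10.0.5/g' app.conf              # apply in place, keeping app.conf.bak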
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the `-i` flag in the `sed` command?
What does the `g` flag do in the `sed` command?
Why is `sed 's/old/new/g' file > file` a dangerous command?
After cloning a Debian-based server to a replacement disk, the first reboot halts with an initramfs emergency prompt reporting that the kernel cannot mount the root filesystem on the UUID supplied in the root= parameter. Investigation shows that GRUB2 (installed in legacy BIOS/MBR mode) is still passing the old UUID to the kernel. The administrator wants to make the new root UUID persist across future kernel updates. Which configuration file should be modified before running update-grub so the correct root= value is passed to the kernel at every boot?
/etc/fstab
/boot/grub/grub.cfg
/etc/default/grub
/etc/grub.d/40_custom
Answer Description
GRUB2 stores the menu that the firmware actually reads in /boot/grub/grub.cfg, but on Debian-based systems that file is regenerated automatically from templates each time update-grub (or a kernel package) is run. Persistent kernel command-line changes are therefore made in /etc/default/grub, usually by editing the GRUB_CMDLINE_LINUX (or GRUB_CMDLINE_LINUX_DEFAULT) variable and then regenerating the menu with update-grub. Editing /boot/grub/grub.cfg directly will work only until the next update overwrites it. The script /etc/grub.d/40_custom is meant for adding entirely new menu entries, not for changing the parameters of the distribution-generated entries. /etc/fstab influences how filesystems are mounted after the kernel has already mounted its root filesystem, so changing it will not fix a root= mismatch during early boot.
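A minimal sketch of the fix on a Debian-based system; the UUID shown is a placeholder:

# In /etc/default/grub, point the kernel at the new root filesystem, for example:
# GRUB_CMDLINE_LINUX="root=UUID=0aa1b2c3-d4e5-f607-8899-aabbccddeeff"
sudo update-grub   # regenerates /boot/grub/grub.cfg from the templates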
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the GRUB_CMDLINE_LINUX variable in /etc/default/grub?
Why shouldn't you edit /boot/grub/grub.cfg directly?
What role does the UUID play in the root= parameter in GRUB2?
A systems administrator is analyzing a vulnerability scan report for a production web server. The report lists the following vulnerabilities found on the system. Based on the Common Vulnerability Scoring System (CVSS), which vulnerability should be prioritized for immediate remediation?
An information disclosure vulnerability in the SSH service with a CVSS score of 5.3.
A cross-site scripting (XSS) vulnerability in the web server's administration interface with a CVSS score of 7.2.
A remote code execution vulnerability in a web application framework with a CVSS score of 9.8.
A denial-of-service vulnerability in the kernel's TCP/IP stack with a CVSS score of 7.5.
Answer Description
The Common Vulnerability Scoring System (CVSS) is a standardized framework used to rate the severity of software vulnerabilities. It assigns a numerical score from 0.0 to 10.0, where a higher score indicates a more severe vulnerability. When prioritizing remediation efforts, administrators should address the vulnerability with the highest CVSS score first. In this scenario, the remote code execution (RCE) vulnerability has a CVSS score of 9.8, which falls into the "Critical" severity range (9.0-10.0). This type of vulnerability poses the most significant threat and must be addressed immediately. The other vulnerabilities have lower scores, indicating they are of 'High' (7.0-8.9) or 'Medium' (4.0-6.9) severity, and should be handled after any critical issues are resolved.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the Common Vulnerability Scoring System (CVSS)?
Why is a Remote Code Execution (RCE) vulnerability considered critical?
What steps should an administrator take after identifying a critical vulnerability?
A system administrator is troubleshooting a log rotation failure for the file /var/log/syslog. The rotation script cannot access the file because it is held open by a running process. Which of the following commands will most effectively identify the command, process ID (PID), and user associated with the process that has /var/log/syslog open?
lsof /var/log/syslog
stat /var/log/syslog
ps -ef | grep /var/log/syslog
file /var/log/syslog
Answer Description
The correct command is lsof /var/log/syslog. The lsof (List Open Files) command is used to display information about files that are open by various processes. When given a file path as an argument, it will list all processes that have that specific file open, along with details like the PID, user, and command name. The ps -ef | grep /var/log/syslog command is incorrect because ps lists running processes, but grepping for a file path is not a reliable way to find which process has a file open, as the file path is not typically part of the process's command line information. The file /var/log/syslog command is used to determine the type of a file (e.g., ASCII text, binary), not which process is using it. The stat /var/log/syslog command displays detailed file metadata like size, permissions, and timestamps, but does not show process-related information.
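Example invocation; the COMMAND, PID, and USER columns in the output identify the process holding the file open:

sudo lsof /var/log/syslog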
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What does the `lsof` command do?
Why is `ps -ef | grep` not effective for finding processes holding files?
What is the difference between `file`, `stat`, and `lsof` commands?
A Linux administrator wants to use a public generative AI chat service to refactor an Ansible playbook that will later configure production servers containing regulated customer data. Which approach BEST follows responsible-AI and data-governance best practices in this situation?
Turn off the AI tool's safety filters to avoid incomplete answers and trust the model's internal tests instead of performing manual reviews.
Log in from a personal account to keep prompts unlinked to the company, copy the AI output verbatim, and push it straight to the corporate Git repository.
Sanitize the prompt by replacing passwords and IP addresses with placeholders, submit the snippet to the AI, then perform a peer review and standard change-control before committing the code.
Paste the entire production playbook, including credentials, into the AI chat so it can generate a fully contextual answer, and deploy the result immediately.
Answer Description
Replacing secrets and host details with placeholders prevents accidental disclosure of regulated data, while mandatory peer review ensures that any AI-generated code is checked for accuracy, security, and compliance before it reaches the repository. The other approaches expose credentials, bypass human quality assurance, violate corporate policy, or rely on a personal account that offers no organizational oversight, all of which conflict with recommended responsible-AI practices.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is an Ansible playbook, and why might it need to be refactored?
What are placeholders in the context of sanitizing data, and how are they used?
What does responsible AI mean, and why is a peer review crucial in this scenario?
You just removed several tracked files and modified others in your project, but you also created a few brand-new files that you do not want to commit yet. From the repository's root, which single Git command stages the deletions and modifications of tracked files while leaving the new untracked files out of the index?
git add .
git add -A
git add -u
git add -i
Answer Description
The --update ( -u ) option tells Git to update the index only where an entry already exists. It therefore stages changes to tracked files, including deletions, without touching any brand-new, untracked paths. Using git add -u (optionally with a pathspec such as ".") achieves exactly what is required.
- git add -A or git add --all would also stage the new untracked files because it adds, modifies, and removes entries to match the working tree.
- git add . stages modifications to tracked files and adds new files found under the current directory (since Git 2.0, it also stages deletions), so the unwanted files would be included.
- git add -i enters interactive mode; it could accomplish the task, but only after extra manual steps, so it does not meet the constraint of a single non-interactive command.
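A short demonstration of the behavior, using hypothetical file names in an existing repository:

rm old_script.sh         # delete a tracked file
echo tmp > notes.txt     # create a new, untracked file
git add -u               # stages the deletion and any tracked-file modifications
git status --short       # notes.txt still shows as untracked (??)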
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the 'git add -u' command?
How does 'git add -u' differ from 'git add -A'?
What is the difference between 'git add -u' and 'git add .'?
A network administrator needs to connect several containers directly to the local LAN so that each container receives its own IP address on that subnet. The access-switch port is configured to allow only one MAC address, so the solution must avoid assigning unique hardware addresses to every container. Which container network driver meets these requirements?
bridge
macvlan
host
ipvlan
Answer Description
The correct choice is ipvlan. In both L2 and L3 modes, the ipvlan driver gives each container an IP address that is visible on the physical network, but all sub-interfaces reuse the parent interface's single MAC address. This conserves MAC entries on the switch while still allowing the containers to be addressed individually by IP.
macvlan is wrong because it assigns a distinct MAC address to each container, quickly exceeding the switch's one-MAC limit. bridge is wrong because it keeps containers behind the Docker host's NAT, so the containers are not directly reachable on the LAN. host is wrong because it puts the container in the host's network namespace; the container does not get its own separate IP address on the LAN.
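A sketch of an ipvlan network with Docker, assuming eth0 is the parent interface and the addresses are placeholders:

docker network create -d ipvlan \
  --subnet=192.168.10.0/24 --gateway=192.168.10.1 \
  -o parent=eth0 -o ipvlan_mode=l2 lan_net
docker run -d --name svc1 --network lan_net --ip 192.168.10.50 alpine sleep infinity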
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the difference between ipvlan L2 and L3 modes?
Why does macvlan exceed the one-MAC limit on a switch?
How does the bridge network driver differ from ipvlan?
A Linux system administrator is troubleshooting a network connectivity issue from their workstation (192.168.10.20). They are unable to get a response when they run the command ping 192.168.10.105. However, a second command, ping 192.168.10.106, to a different server on the same subnet succeeds. The administrator has already confirmed that the local firewall on their workstation is not blocking any outbound traffic. Which of the following is the MOST likely cause of the issue?
The network interface on the administrator's workstation is disabled.
An incorrect default gateway is configured on the administrator's workstation.
A firewall on the destination server (192.168.10.105) is blocking ICMP echo requests.
A DNS resolution failure is preventing the connection to the server.
Answer Description
The correct answer is that a firewall on the destination server (192.168.10.105) is blocking ICMP echo requests. Since the administrator can successfully ping another server (192.168.10.106) on the same subnet, it confirms that the local workstation's network interface, IP configuration, and the physical network path are functioning correctly. The problem is specific to the target server. A common reason for a single host to be unresponsive to pings, while others on the same network are reachable, is a host-based firewall (like firewalld or iptables) configured to drop incoming ICMP packets. An incorrect default gateway would affect communication to different networks, not hosts on the same local subnet. Since the ping is directed to an IP address, DNS is not used for resolution. The network interface on the workstation must be active to have successfully pinged the other server.
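One way to confirm the hypothesis from the destination server, assuming it runs firewalld:

sudo firewall-cmd --list-icmp-blocks                # any blocked ICMP types
sudo firewall-cmd --query-icmp-block=echo-request   # "yes" means incoming pings are dropped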
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is ICMP and why is it used in network troubleshooting?
How does a host-based firewall like `firewalld` or `iptables` work?
Why doesn't an incorrect default gateway affect devices on the same subnet?
During a nightly health-check script you must test whether the internal endpoint https://service.example.local/ is alive. The check has three strict requirements:
- Do not download the response body (only a HEAD request is acceptable).
- Show only the HTTP status code on standard output so the code can be captured by the monitoring system.
- Cause the shell to return a non-zero exit status automatically when the server answers with any 4xx or 5xx error.
Which single curl command meets all of these requirements?
curl -I -s -f -w "%{http_code}\n" https://service.example.local/
curl -X HEAD -s -f -o /dev/null -w "%{http_code}\n" https://service.example.local/
curl -s -o /dev/null -w "%{http_code}\n" -f -I https://service.example.local/
curl -s -o /dev/null -w "%{http_code}\n" https://service.example.local/
Answer Description
The combination of options -I, -s, -o /dev/null, -w "%{http_code}\n", and -f delivers exactly what the script needs. -I performs a true HEAD request, so no body is transferred. -s suppresses the progress meter, and -o /dev/null discards anything that might still be written to standard output. -w prints just the value of the http_code variable, giving a clean numeric status line. Finally, -f makes curl exit with status 22 when the response code is 400 or higher, so the calling script can treat any 4xx/5xx reply as a failure.
The other commands miss at least one requirement:
- The version that omits -I still issues a GET and therefore transfers the response body.
- The variant that leaves out -o /dev/null lets the response headers appear on stdout, so the output is not limited to the status code.
- Using -X HEAD only changes the request word and does not behave like a real HEAD request; it can hang if the server sets an inappropriate Content-Length. Therefore it is not reliable for scripted checks.
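A sketch of how the correct command might be wrapped in the health-check script; the variable name and messages are illustrative:

if status=$(curl -s -o /dev/null -w "%{http_code}\n" -f -I https://service.example.local/); then
  echo "Service healthy: HTTP ${status}"
else
  echo "Service check failed (HTTP ${status:-n/a})" >&2
  exit 1
fi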
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What is the purpose of the -f option in the curl command?
Why is -I preferred over -X HEAD for HEAD requests in curl?
What does -w "%{http_code}\n" accomplish in the curl command?
A system administrator needs to temporarily disable password-based login for a user account named temp_user without deleting the user or their current password. The administrator wants to be able to easily re-enable login later. Which of the following commands will accomplish this?
passwd -u temp_user
passwd -d temp_user
passwd -l temp_user
passwd -e temp_user
Answer Description
The correct command is passwd -l temp_user. The -l option locks the specified user's password by prepending an exclamation mark (!) to the encrypted password hash in the /etc/shadow file. This action prevents the user from authenticating with their password. The -u option is used to unlock an account that was previously locked with -l. The -d option deletes the user's password, which is a different action from locking it. The -e option expires the user's password, forcing them to change it at the next login, but it does not lock the account.
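An example of the lock-and-unlock cycle for the account from the question:

sudo passwd -l temp_user   # lock: prepends '!' to the hash in /etc/shadow
sudo passwd -S temp_user   # status output should now show 'L' (locked)
sudo passwd -u temp_user   # unlock when access is needed again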
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What happens to the `/etc/shadow` file when a user's password is locked with `passwd -l`?
How is unlocking an account with `passwd -u` different from deleting a password with `passwd -d`?
What is the purpose of `passwd -e`, and how does it differ from `passwd -l`?
During a code review you spot the following Bash snippet that is supposed to keep a running counter:
#!/usr/bin/env bash
count=0
increment() {
  local count=$((count + 1))
}
increment
echo "$count"
The developer expects the script to print 1, but it prints 0. Which single change will make the script print 1 while remaining portable to any POSIX-compliant /bin/sh and without starting a subshell?
Capture the function's output: count=$(increment) instead of calling the function directly.
Delete the word local so the function assigns directly to the existing count variable.
Add export count before the function declaration to make the variable global.
Replace local with declare -g -i count=$((count + 1)) inside the function.
Answer Description
The keyword local limits the variable's scope to the body of the function, creating a new "shadow" copy of count that disappears when the function returns; the global count therefore remains 0. Removing the word local lets the arithmetic assignment act on the already-defined global variable, so its value becomes 1 and the final echo reflects the increment.
Replacing local with declare -g would also alter the global variable, but declare and its -g option are Bash-specific and therefore break POSIX portability. Capturing the function's output with command substitution (count=$(increment)) runs the function in a subshell, so the parent shell's variable is still unchanged. Simply exporting the variable does not influence whether the function writes to the global or a local copy, so the value would still be 0.
Thus, deleting local is the only modification that meets all stated constraints.
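The corrected script, shown here with a /bin/sh shebang to underline the POSIX-portability point; it prints 1:

#!/bin/sh
count=0
increment() {
  count=$((count + 1))   # no 'local', so the global variable is updated
}
increment
echo "$count"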
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
Why does the 'local' keyword create a new variable in the Bash script?
What makes 'declare -g -i' not POSIX-compliant?
Why does using 'count=$(increment)' result in the global variable remaining unchanged?
An administrator is building a lightweight Linux workstation for developers who prefer to control their entire desktop from the keyboard. All application windows must automatically occupy non-overlapping tiles on the screen (no floating or stacking), the solution must avoid the overhead of a compositing manager, and it has to run on X11 so it will work with older graphics hardware. Which window manager best meets these requirements?
Openbox
i3
Mutter
KWin
Answer Description
i3 is a tiling window manager written for X11. It arranges windows in non-overlapping tiles, is operated almost entirely through customizable keyboard shortcuts, and does not require a separate compositing engine, making it suitable for minimal installations and older hardware.
Openbox is a lightweight stacking (floating) window manager; windows can freely overlap, so it does not satisfy the tiling requirement.
KWin and Mutter are compositing window managers used by KDE Plasma and GNOME Shell, respectively. While they can provide tiling plug-ins or scripts, their primary design includes a 3-D compositing layer that adds resource overhead the scenario specifically wants to avoid.
Ask Bash
Bash is our AI bot, trained to help you pass your exam. AI Generated Content may display inaccurate information, always double-check anything important.
What are the key features of a tiling window manager like i3?
What is the difference between tiling and stacking window managers?
Why does i3 not require a compositing manager, and what are the benefits?