CompTIA Linux+ Practice Test (XK0-006)

CompTIA Linux+ XK0-006 (V8) Information
CompTIA Linux+ (v8 / XK0-006) Exam
The CompTIA Linux+ (XK0-006) certification is designed for IT professionals who work with Linux systems. It validates skills in system administration, security, scripting, and troubleshooting. This certification is vendor-neutral, covering multiple distributions such as Ubuntu, CentOS, and Red Hat.
Exam Overview
The XK0-006 exam consists of a maximum of 90 questions, including multiple-choice and performance-based questions. Candidates have 90 minutes to complete the test. The exam costs $358 USD. A passing score is 720 on a scale of 100 to 900. The certification is valid for three years and can be renewed through CompTIA’s continuing education program.
Exam Content
The XK0-006 exam focuses on five main domains: system management; services and user management; security; automation, orchestration, and scripting; and troubleshooting. System management includes package management, system monitoring, file system hierarchy, networking, and command-line operations. Services and user management covers system services and user administration. Security covers permissions, authentication, and encryption. Automation, orchestration, and scripting focus on Bash scripting and task automation. Troubleshooting tests problem-solving skills for system failures and performance issues.
Who Should Take This Exam?
The CompTIA Linux+ certification is ideal for system administrators, Linux support technicians, and DevOps professionals. It is recommended for individuals with at least one year of Linux experience. This certification is beneficial for IT professionals working with servers, cloud infrastructure, and cybersecurity.
How to Prepare
Candidates should review the official CompTIA Linux+ Exam Objectives and study materials provided by CompTIA. Hands-on experience with Linux systems is essential. Practice exams can help assess readiness and identify weak areas. Using Linux in a lab or virtual environment can provide practical experience with commands, system configuration, and troubleshooting.
Summary
The CompTIA Linux+ (XK0-006) certification is a valuable credential for IT professionals working with Linux systems. It validates essential skills in system administration, security, and automation. This certification is ideal for those managing Linux-based environments in IT infrastructure, cybersecurity, and cloud computing.
Free CompTIA Linux+ XK0-006 (V8) Practice Test
- Questions: 20
- Time: Unlimited
- Included Topics: System Management; Services and User Management; Security; Automation, Orchestration, and Scripting; Troubleshooting
While standardizing new local user environments on a Linux server, you prepare customized copies of .bashrc and .vimrc that should exist as real files inside every future user's home directory. You want the files to be created automatically whenever administrators run "useradd -m" without adding post-creation scripts or extra command-line options. Which action will achieve this goal?
Set a SKEL entry in /etc/login.defs to /home/company_skel and place the templates in that directory.
Add commands in /etc/profile that source the templates from a shared location whenever an interactive shell starts.
Append the file names to a COPY_FILES variable in /etc/default/useradd so that useradd copies them after it creates the account.
Copy the prepared template files into /etc/skel so useradd duplicates them into the new user's home directory during account creation.
Answer Description
The useradd command copies everything found in the skeleton directory into the new user's home when the home is created with the -m option. By default the skeleton directory is /etc/skel, as defined by the SKEL variable in /etc/default/useradd. Therefore, placing the prepared .bashrc and .vimrc inside /etc/skel guarantees that each new account receives its own copies of the files with no further steps.
Adding lines to /etc/profile only makes shells source a shared file at login; it does not place files in the user's home. A non-existent COPY_FILES variable is never read by useradd, so it has no effect. Finally, /etc/login.defs does not contain a SKEL setting, so pointing that file at another directory would not influence account creation.
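As a quick illustration (the template source directory and username are hypothetical):
# cp /srv/templates/.bashrc /srv/templates/.vimrc /etc/skel/
# useradd -m jdoe
# ls -a /home/jdoe
The listing now includes real copies of .bashrc and .vimrc, owned by the new user, with no extra options or scripts involved.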
A system administrator needs to ensure a daily maintenance script runs on a Linux workstation. The workstation is frequently powered off outside of business hours. The administrator adds the following line to /etc/anacrontab to manage this task:
1 10 daily-maintenance /usr/local/bin/maintenance.sh
What is the function of the number 10 in this configuration?
The job will run at 10:00 AM if the system is on.
The delay in minutes after anacron starts before the job is executed.
The number of days anacron will wait before the first execution of the job.
The priority level of the job, on a scale from 1 to 99.
Answer Description
The correct answer is that 10 represents the delay in minutes. The /etc/anacrontab file has four fields: period, delay, job-identifier, and command. The period is the frequency in days. The delay is the number of minutes anacron will wait after it starts before executing the job (if the job's period has passed). This delay prevents anacron jobs from consuming system resources immediately at boot. The job-identifier is a unique name for logging, and the command is the script to be executed. The value is not a job priority or a specific time of day, which are common misconceptions carried over from other scheduling tools such as nice and cron.
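Reading the scenario's entry field by field: period 1 means the job is due every day, delay 10 means anacron waits 10 minutes after starting before running it, daily-maintenance is the job identifier used in logs and timestamps, and the final field is the command. A weekly job following the same pattern might look like this (the job name and script path are hypothetical):
7 15 weekly-maintenance /usr/local/bin/weekly.sh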
A Linux administrator is writing a Dockerfile for a custom Nginx image. The container should, by default, start Nginx in the foreground using /usr/sbin/nginx -g 'daemon off;'. Administrators must be able to replace this entire command later simply by appending a different command to docker run, and the chosen command must run as PID 1 rather than through /bin/sh -c. Which single Dockerfile line meets all of these requirements?
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
CMD "/usr/sbin/nginx -g 'daemon off;'"
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
RUN ["/usr/sbin/nginx", "-g", "daemon off;"]
Answer Description
The exec-form CMD array (CMD ["/usr/sbin/nginx", "-g", "daemon off;"]) launches the given executable directly, so the process becomes PID 1 and receives signals correctly. Because it is a CMD, any arguments supplied after the image name in docker run completely override the default command, letting operators substitute a different program without using --entrypoint.
The shell-form CMD variant runs through /bin/sh -c, so Nginx would not be PID 1 and signal handling could break. An ENTRYPOINT in exec form makes the command harder to replace (it can only be changed with --entrypoint, not by ordinary arguments). RUN executes at build time, not when the container starts, so it cannot define the runtime command.
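To see the override behavior, assume the image is built with a hypothetical tag such as my-nginx:
# docker run my-nginx
# docker run my-nginx nginx -v
The first command starts Nginx in the foreground as PID 1; in the second, the appended nginx -v completely replaces the default CMD without any --entrypoint flag.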
A Linux administrator is configuring a new public-facing e-commerce web server. The primary requirement is to ensure that customers' web browsers automatically trust the server's identity and that all communication is encrypted without generating security warnings. Which of the following actions should the administrator take?
Copy the server's SSH public key and configure the web server to use it for TLS.
Configure the web server to use the default 'snake oil' certificate provided by the Linux distribution.
Generate a self-signed certificate using openssl and install it on the web server.
Obtain a TLS certificate from a publicly trusted Certificate Authority (CA).
Answer Description
The correct action is to obtain a TLS certificate from a publicly trusted Certificate Authority (CA). Web browsers and operating systems maintain a list of trusted root CAs. When a browser connects to a server, it verifies that the server's TLS certificate is signed by a CA in its trust store, establishing a valid chain of trust. This prevents security warnings and assures users of the site's authenticity.
Generating a self-signed certificate, either manually with openssl or by using a default 'snake oil' certificate, is not appropriate for a public e-commerce site. Because these certificates are not signed by a trusted CA, browsers will display a prominent security warning, which would deter customers. SSH keys are used for securing remote administrative access (e.g., shell sessions) and are not used for securing web traffic with TLS/HTTPS; the two systems serve different purposes and are not interchangeable.
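Once a CA-signed certificate is installed, the chain of trust can be checked from any client; the hostname below is illustrative:
$ openssl s_client -connect shop.example.com:443 -servername shop.example.com
A certificate issued by a publicly trusted CA reports 'Verify return code: 0 (ok)' in the handshake output, while a self-signed or 'snake oil' certificate produces a verification error instead.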
A Linux administrator is troubleshooting a monitoring agent that is constantly triggering SELinux AVC messages. After confirming the access is legitimate, the administrator runs the following command to generate a custom policy module:
# cat /var/log/audit/audit.log | audit2allow -M mon_agent
This produces the files mon_agent.pp and mon_agent.te in the working directory. To apply the new policy module immediately and ensure it remains in effect after future reboots, without altering the system's current enforcing mode, which command should the administrator run next?
semodule -i mon_agent.pp
audit2allow -i mon_agent.te
semanage -i mon_agent.pp
setenforce 0
Answer Description
The audit2allow -M option builds a loadable binary policy package that ends with the .pp extension. SELinux policy packages are installed into the module store with the semodule utility. Using semodule -i mon_agent.pp loads the package, rebuilds the system policy, and stores the module so it is automatically loaded each time the system boots.
The other choices are incorrect:
- semanage -i is not a valid option for importing policy packages; semanage manages SELinux settings such as ports, booleans, and file contexts, not modules.
- audit2allow -i only names an input file of audit messages for rule generation; it cannot install a module.
- setenforce 0 merely switches the system to permissive mode and does nothing to incorporate the new module.
Therefore, semodule -i mon_agent.pp is the correct action.
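A short end-to-end sketch using the module name from the scenario:
# semodule -i mon_agent.pp
# semodule -l | grep mon_agent
The second command simply confirms that the module is now registered in the policy store and will survive reboots.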
While investigating intermittent throughput problems on a Linux file-server, you run the following command during a period of heavy traffic:
# ethtool -S eno1 | grep -i drop
rx_queue_0_drops: 18342
rx_queue_1_drops: 19211
CPU utilization is low and no physical-layer errors are reported by ip -s link. Which NIC-level adjustment is most likely to reduce these packet drops without adding new hardware?
Decrease the interface transmit queue length to 100 packets.
Increase the RX and TX ring buffer sizes with ethtool -G.
Lower the interface MTU to 576 bytes.
Disable Generic Receive Offload (GRO) on the adapter.
Answer Description
High values in the rx_queue_*_drops counters mean that the network interface's descriptor ring is overflowing before the driver can service it. Enlarging the RX (and usually TX) ring with a command such as ethtool -G eno1 rx 4096 tx 4096 gives the adapter more buffer space, preventing packets from being discarded during bursts. Lowering the MTU, disabling GRO, and shrinking the transmit queue all fail to address ring-buffer exhaustion and can actually decrease overall performance.
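A typical adjustment, assuming the NIC driver supports larger rings:
# ethtool -g eno1
# ethtool -G eno1 rx 4096 tx 4096
The lowercase -g option first reports the current and hardware-maximum ring sizes, so the new values can be chosen within the adapter's limits.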
A Linux administrator is launching a new e-commerce website that will process sensitive customer financial data. The highest priorities are to establish maximum customer trust and ensure universal browser compatibility without security warnings. Which type of certificate should the administrator implement to best meet these requirements?
A certificate signed by the company's internal CA.
A no-cost certificate from an automated CA (e.g., Let's Encrypt).
A self-signed certificate generated using OpenSSL.
A commercial certificate from a trusted Certificate Authority (CA).
Answer Description
The correct choice is a commercial certificate from a trusted Certificate Authority (CA). For a public-facing e-commerce site handling financial data, establishing trust is paramount. Commercial CAs offer different levels of validation, including Organization Validated (OV) and Extended Validation (EV) certificates. These require a thorough vetting of the organization, providing a higher level of assurance to visitors compared to domain-only validation. Commercial certificates are universally trusted by all browsers, preventing security warnings that would deter customers. They also often come with financial warranties, which adds another layer of security assurance for the business.
A self-signed certificate is unsuitable because it is not signed by a trusted CA, which causes browsers to display prominent security warnings, eroding customer trust. A no-cost certificate, while valid and trusted by browsers for encryption, typically only offers Domain Validation (DV). This doesn't provide the organizational vetting that is crucial for a high-trust e-commerce platform. A certificate from an internal corporate CA is only intended for internal use and would not be trusted by the public or their browsers, resulting in security errors for external visitors.
A Linux administrator needs to configure a network interface, eth0, which already has the IP address 192.168.1.10/24. They need to add a second IP address, 10.0.0.5/8, to the same interface without interrupting existing services. Which of the following commands will accomplish this?
ip address add 10.0.0.5/8 dev eth0
ip address set 10.0.0.5/8 dev eth0
ip route add 10.0.0.5/8 dev eth0
ifconfig eth0:1 10.0.0.5 netmask 255.0.0.0
Answer Description
The correct command is ip address add 10.0.0.5/8 dev eth0. The ip address add command is part of the modern iproute2 suite and is used to add an additional IP address to a network interface. This does not interfere with any existing IP addresses on the interface.
- ip address set is incorrect because set is not a valid action for the ip address command. The primary actions are add, del, and show.
- ip route add is incorrect because it is used to modify the system's routing table, which dictates how packets are forwarded to other networks, not to assign an IP address to a local interface.
- ifconfig eth0:1... is incorrect because ifconfig is a legacy command from the net-tools package. While it was used to create IP aliases, the modern standard is the ip command from the iproute2 package.
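Applying and verifying the change from the scenario:
# ip address add 10.0.0.5/8 dev eth0
# ip -br address show eth0
The second command should list both 192.168.1.10/24 and 10.0.0.5/8 on eth0, confirming the original address was left untouched.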
A Linux administrator is performing maintenance on a server and needs to temporarily make the logical volume named app_data in the data_vg volume group unavailable. The data on the volume must be preserved, and the logical volume should not be removed. Which of the following commands will accomplish this task?
lvremove data_vg/app_data
vgchange -an data_vg
lvresize -L 50G data_vg/app_data
lvchange -an data_vg/app_data
Answer Description
The correct command is lvchange -an data_vg/app_data. The lvchange command is used to alter the attributes of a logical volume. The -a flag sets the activation state, and n (for 'no') deactivates the volume, making it unavailable to the system without deleting it. The lvremove command would permanently delete the logical volume and its data. The vgchange command operates on the entire volume group, not a specific logical volume. The lvresize command is used to change the size of a logical volume, not its activation state.
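A sketch of the maintenance workflow using the names from the scenario:
# lvchange -an data_vg/app_data
# lvs data_vg
# lvchange -ay data_vg/app_data
After deactivation, lvs no longer shows the volume's active ('a') attribute flag; reactivating with -ay makes it available again with its data intact.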
You have been asked to create a persistent audit rule that logs every change (write or attribute modification) to the /etc/shadow file, while ignoring normal read access. The rule must reside in /etc/audit/rules.d/50-shadow.rules and every resulting record should be labeled with the key identity_change. Which single line fulfils these requirements using standard auditctl / audit.rules syntax?
-w /etc/shadow -p wa -k identity_change
-w /etc/shadow -p ra -k identity_change
-w /etc/shadow -p rwx -k identity_change
-a always,exit -F path=/etc/shadow -F perm=rw -k identity_change
Answer Description
In an audit watch rule, the -p flag specifies which file operations should trigger logging:
- w - write access (data changes)
- a - attribute changes (permission, ownership, timestamp, etc.)
- r - read access
- x - execute access
To record both data changes and metadata changes while excluding simple reads, the rule needs w and a but must not include r. The correct syntax therefore combines them as -p wa. Adding -k identity_change tags the resulting records, and -w /etc/shadow sets the watch on the required file.
The other options are wrong because they either log read access (include r), omit attribute changes (omit a), or use a system-call style rule that still lacks the a permission. Hence only the rule with -p wa meets all stated conditions.
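To install and exercise the rule (the rules-file path comes from the question):
# echo '-w /etc/shadow -p wa -k identity_change' > /etc/audit/rules.d/50-shadow.rules
# augenrules --load
# ausearch -k identity_change
augenrules merges everything under /etc/audit/rules.d/ into the active ruleset, and ausearch -k retrieves only the records tagged identity_change.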
A systems administrator is reviewing a developer's workflow. The developer frequently pastes large code snippets, which include proprietary logic and service account credentials, into a public, cloud-hosted AI chat service for debugging assistance. Which of the following is the most significant data governance risk associated with this practice?
Using a public AI service will violate the corporate policy on 'Shift-left testing'.
The sensitive data could be incorporated into the AI's training data or be exposed through human review, leading to a potential data breach.
The AI model might generate inefficient or non-compliant code, increasing technical debt.
The developer could become overly reliant on the AI, leading to a degradation of their own debugging skills.
Answer Description
The correct answer identifies the primary data governance risk. Pasting sensitive information like proprietary code and credentials into a public AI service exposes that data to the service provider. Many public AI services state in their terms of use that they may use customer data to train their models, and they may also employ human reviewers who can see the data. This creates a significant risk of that confidential data being absorbed into the model and potentially exposed to other users, or being otherwise compromised, leading to a serious data breach. The other options describe valid but less significant or different types of risks. Generating inefficient code is a code quality risk, not a data governance risk. Violating 'shift-left testing' is a process issue. A developer's skill degradation is a personnel risk, not a data governance risk.
A Linux administrator is tasked with updating a series of complex, legacy shell scripts that have no existing documentation. To accelerate the process, the administrator uses a large language model (LLM) to generate comments and a summary for each script. According to responsible AI best practices, what is the most appropriate next step for the administrator to take?
Trust the AI's output as sufficient and archive the generated documentation in a shared folder without adding it to the scripts.
Thoroughly review the AI-generated documentation for technical accuracy and contextual correctness, then manually integrate it into the scripts or a separate documentation file.
Copy and paste the generated documentation directly into the scripts to save time and immediately commit the changes to the central repository.
Discard the generated documentation and use the AI to refactor the entire script into a more modern, self-documenting language like Python.
Answer Description
The correct answer is to thoroughly review the AI-generated output for accuracy and context before integrating it. AI tools can generate plausible-sounding but incorrect or incomplete documentation. Best practices for responsible AI usage mandate that a human expert must verify the output, especially for technical and business-logic-specific details, to ensure it is accurate and relevant. Simply copying the output without review is irresponsible and can lead to propagating errors. While AI can be used for refactoring or generating tests, those are different tasks and do not fulfill the primary requirement of documenting the existing scripts. Generating documentation and then immediately archiving it without integrating it into the codebase or a version control system provides no value to future developers.
A system administrator is analyzing the process list on a Linux server using the command ps -ef. The output includes several columns of information for each process. Which column in the output represents the Parent Process Identification Number (PPID)?
UID
C
PID
PPID
Answer Description
The correct answer is the column labeled PPID. In the output of the ps -ef command, the PPID column explicitly lists the Parent Process Identification Number for each process. This number is the process ID (PID) of the process that created the current process. The PID column shows the process's own unique ID, the UID column shows the User ID of the process owner, and C (or %CPU) represents the percentage of CPU utilization.
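A quick way to inspect the relationship for the current shell:
$ ps -o pid,ppid,comm -p $$
The PPID value shown is the PID of whatever process launched the shell, such as a terminal emulator or sshd.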
During a security review, a Linux DevOps team discovers that a VS Code plug-in sends their Kubernetes manifests to a public large language model (LLM) for advice on hardening. Some of the manifests still contain base64-encoded Secrets that hold private container-registry credentials. The team wants to keep using the plug-in but must prevent accidental credential exposure. Which action best mitigates this specific risk?
Require the plug-in to use TLS with certificate pinning when calling the LLM's API endpoint.
Add an automated pre-submission filter that masks or removes any values matching credential or secret patterns before the manifest is sent to the LLM.
Insert a comment in each manifest instructing the LLM not to reveal or retain embedded secrets.
Allow the plug-in only on a non-production Git branch that mirrors the manifests.
Answer Description
Sanitizing or redacting secret material before it is transmitted ensures the LLM never receives the data, eliminating the possibility that the model could store, leak, or later regenerate the credentials. Transport encryption (TLS) only protects data in transit, not after it reaches the provider. Embedding "do not disclose" comments relies on the model's voluntary compliance and does not stop the data from entering its context window. Restricting the plug-in to a non-production branch still exposes any secrets that appear in those files.
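A minimal sketch of such a filter, assuming the plug-in can run an external command before submission; the sed expression is illustrative rather than production-grade:
$ sed -E 's|^([[:space:]]+[A-Za-z0-9_.-]+:[[:space:]]*)[A-Za-z0-9+/=]{8,}$|\1<REDACTED>|' manifest.yaml
This masks indented key: value pairs whose values look like base64 blobs; a robust implementation would parse the YAML and redact only values under the data: and stringData: keys of Secret objects.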
A RHEL 9 server has been joined to the corp.example.com Active Directory realm with realmd, and domain users are authenticating through SSSD. A new policy states that only members of the AD security group LinuxAdmins may obtain an interactive login on this host. The control must be implemented in SSSD (not in PAM or sshd).
Which modification to /etc/sssd/sssd.conf will enforce the requirement after the file is saved and SSSD is restarted?
Under [domain/corp.example.com] add:
access_provider = ad
ad_access_filter = (memberOf=CN=LinuxAdmins,OU=Groups,DC=corp,DC=example,DC=com)
Add simple_allow_groups = LinuxAdmins in the domain stanza and leave the existing access_provider unchanged
Set enumeration = true in the [sssd] section so SSSD can list the LinuxAdmins group
Disable credential caching by setting cache_credentials = false in the [sssd] section
Answer Description
SSSD can perform access control itself. When the access provider for a domain is set to ad, adding an ad_access_filter limits logins to entries that match the supplied LDAP filter. Inserting the two lines shown in the correct option restricts access to users whose memberOf attribute contains the LinuxAdmins group DN.
Enabling enumeration only causes SSSD to list all users and groups; it does not control logins. Adding simple_allow_groups without switching the access provider to simple has no effect, because the simple access lists are ignored while another provider is active. Disabling credential caching merely forces online authentication and does not restrict which identities can log in.
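Assembled in context, the relevant stanza of /etc/sssd/sssd.conf would look like the following (any other existing domain settings are omitted here):
[domain/corp.example.com]
access_provider = ad
ad_access_filter = (memberOf=CN=LinuxAdmins,OU=Groups,DC=corp,DC=example,DC=com)
Restarting the service, for example with systemctl restart sssd, makes the new access filter take effect.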
A network administrator is using iperf3 to troubleshoot network throughput between two Linux servers, ServerA (10.0.1.10) and ServerB (10.0.1.20). The administrator starts the iperf3 server process on ServerA. To measure the download speed from ServerA to ServerB, which command should be executed on ServerB?
iperf3 -c 10.0.1.10 -R
iperf3 -c 10.0.1.10
iperf3 -c 10.0.1.10 --get-server-output
iperf3 -s -c 10.0.1.10
Answer Description
The correct command is iperf3 -c 10.0.1.10 -R. By default, an iperf3 test sends data from the client to the server, which measures the upload speed of the client. The -R (or --reverse) flag reverses the direction of the test, causing the server (ServerA) to send data to the client (ServerB). This correctly measures the download speed on the client machine. The command iperf3 -c 10.0.1.10 would measure the upload speed from ServerB to ServerA. The command iperf3 -s -c 10.0.1.10 is invalid because the -s (server) and -c (client) flags are mutually exclusive. The --get-server-output flag is used to retrieve the server's final report at the client, but it does not alter the direction of the data transfer.
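The complete test from the scenario, sketched end to end:
ServerA# iperf3 -s
ServerB# iperf3 -c 10.0.1.10 -R
ServerA listens, ServerB connects with -R, and the throughput reported reflects ServerB's download from ServerA.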
You are preparing to release a Linux command-line utility as part of an enterprise distribution. Management wants a license that will force anyone who ships a modified build, or an executable that is statically or dynamically linked with your utility, to (1) provide the full, preferred-form source code to recipients and (2) distribute every derivative or linked work under the same license terms, so the "share-and-share-alike" rule continues downstream. Which class of license will meet these requirements?
A weak copyleft license, such as the GNU Lesser General Public License (LGPL) v2.1
A permissive license, such as the MIT or BSD 2-Clause license
A strong copyleft license, for example the GNU General Public License (GPL) v3
A dual-license arrangement that lets distributors choose proprietary terms
Answer Description
A strong copyleft license such as the GNU GPL satisfies both conditions. The GPL is expressly designed to propagate its terms to all derivative works (including those created through static or dynamic linking), and it requires that any party who distributes binaries also provide the complete corresponding source code. Weak copyleft licenses like the LGPL only compel publication of changes to the library itself and allow proprietary programs to link without relicensing. Permissive licenses (MIT, BSD, Apache) impose minimal conditions and let downstream redistributors close their code. A dual-licensing model that offers proprietary terms would allow recipients to circumvent the copyleft altogether, defeating the stated goals.
A systems administrator is hardening a new Linux server according to security best practices. The security policy requires that the root user cannot log in directly using a password but must be able to log in using SSH key-based authentication for emergency maintenance. Which configuration should the administrator set in the /etc/ssh/sshd_config file to meet this requirement?
PermitRootLogin no
PermitRootLogin prohibit-password
PermitRootLogin yes
PermitRootLogin forced-commands-only
Answer Description
The correct setting is PermitRootLogin prohibit-password. This directive in the sshd_config file specifically disables password and keyboard-interactive authentication for the root user while still allowing other authentication methods, such as public key authentication.
- PermitRootLogin no is incorrect, as it completely disables root login via SSH, which violates the requirement to allow key-based login for emergencies.
- PermitRootLogin yes is incorrect because it would allow root login using any method, including passwords, which is against the stated security policy.
- PermitRootLogin forced-commands-only is also incorrect in this scenario. While it does use key-based authentication, it restricts the root user to executing only the specific commands defined in the authorized_keys file and does not grant an interactive shell, which would likely be needed for emergency maintenance.
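After editing /etc/ssh/sshd_config to contain the directive below, validate the file and restart the daemon (the service may be named ssh on Debian-family systems):
PermitRootLogin prohibit-password
# sshd -t
# systemctl restart sshd
Running sshd -t first catches syntax errors before the restart, avoiding an accidental lockout.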
You are writing a portable Bourne-compatible shell script that must compress any log file in /var/log that exceeds 100 MiB (104 857 600 bytes). The loop must:
- Confirm that each entry is a regular file before acting.
- Compare the file's size against the byte threshold using POSIX-defined numeric operators.
Which snippet meets both requirements while remaining compatible with a standard /bin/sh on Linux?
for f in /var/log/*; do [[ -d "$f" ]] && [[ $(du -m "$f" | cut -f1) -gt 100 ]] && gzip "$f"; done
for f in /var/log/*; do if (( $(stat -c%s "$f") > 104857600 )); then gzip "$f"; fi; done
for f in /var/log/*; do [ -f "$f" ] && [ $(stat -c%s "$f") -gt 104857600 ] && gzip "$f"; done
for f in /var/log/*; do test -f "$f" && test $(stat -c%s "$f") > 104857600 && gzip "$f"; done
Answer Description
The correct snippet chains two separate POSIX-compliant tests with the && operator:
- [ -f "$f" ] verifies that the pathname refers to a regular file.
- [ $(stat -c%s "$f") -gt 104857600 ] obtains the size in bytes with stat and performs a numeric greater-than comparison using -gt.
If both tests succeed, gzip runs; otherwise the next file is examined. All commands (for, test/[ ], stat, gzip) are available on a typical Linux system and require no Bash-specific extensions.
The snippet built on [[ -d "$f" ]] checks for directories, not regular files, and therefore skips every log file. The snippet using test ... > 104857600 attempts to use the redirection operator > inside test, which redirects output instead of performing a numeric comparison. The snippet using (( )) relies on the arithmetic compound command, which is a Bash extension and not guaranteed in /bin/sh.
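Expanded into a complete script, the correct snippet might look like the sketch below; the file name in the comment is hypothetical, and stat -c%s assumes GNU coreutils, as in the question itself.
#!/bin/sh
# compress-large-logs.sh: gzip regular files in /var/log larger than 100 MiB
for f in /var/log/*; do
    [ -f "$f" ] && [ "$(stat -c%s "$f")" -gt 104857600 ] && gzip "$f"
done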
A Linux systems administrator is investigating a performance degradation issue on a critical application server. The administrator suspects that the issue might be caused by an excessive number of malformed packets being sent to the server from a specific client machine. To confirm this suspicion, the administrator needs to capture and examine the full contents of the network packets exchanged between the server and the client. Which of the following tools is BEST suited for this task?
tcpdump
netstat
nmap
OpenSCAP
Answer Description
The correct answer is tcpdump. The tcpdump command is a protocol analyzer used to capture and display the contents of packets on a network interface. This allows an administrator to perform deep packet inspection to analyze traffic, such as identifying malformed packets. nmap is a network scanner used for host discovery and port scanning, but it does not capture the full content of packets for analysis. netstat is used to display network connections, routing tables, and interface statistics, but it does not capture packet data. OpenSCAP is a tool for auditing system compliance against security policies and is not used for real-time packet analysis.
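A hedged capture command for the scenario; the interface name and client address are illustrative:
# tcpdump -i eth0 host 192.0.2.50 -w /tmp/client.pcap
The host filter limits the capture to traffic exchanged with the suspect client, and -w saves full packets for later inspection, for example with tcpdump -r /tmp/client.pcap -vv.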