CompTIA Linux+ XK0-006 (V8) Practice Question

You are using a large language model (LLM) assistant to generate Terraform/OpenTofu code for provisioning cloud infrastructure. Which additional control should you implement before merging the assistant's output into the GitOps repository to most effectively follow the AI best-practice guidance in the CompTIA Linux+ objectives for generating infrastructure as code?

  • Archive the AI-generated Terraform in an encrypted ZIP file and apply it manually in production to protect secrets from the pipeline.

  • Raise the model's temperature and prompt it to self-verify, then push the resulting code directly to the main branch.

  • Add a policy-as-code scanner such as Checkov to the CI pipeline so the build fails if the generated Terraform violates security or compliance rules.

  • Merge the assistant's code to main immediately and depend on runtime monitoring tools to roll back any misconfigurations.
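For context, here is a minimal sketch of the kind of CI gate the Checkov option describes. It is illustrative only: it assumes Checkov is already installed on the CI runner and that the AI-generated Terraform lives in a hypothetical infra/ directory. Any pipeline (GitHub Actions, GitLab CI, Jenkins, and so on) could run this script as a build step and fail the merge on a non-zero exit code.

    #!/usr/bin/env python3
    """CI gate sketch: fail the build if Checkov reports policy violations
    in the AI-generated Terraform. Paths and wiring are assumptions, not
    part of the question."""

    import subprocess
    import sys

    TERRAFORM_DIR = "infra/"  # hypothetical location of the generated .tf files


    def main() -> int:
        # Checkov scans the directory and, by default, returns a non-zero
        # exit code when any security or compliance check fails.
        result = subprocess.run(["checkov", "-d", TERRAFORM_DIR], check=False)
        if result.returncode != 0:
            print("Policy-as-code scan failed: do not merge the generated Terraform.")
        return result.returncode


    if __name__ == "__main__":
        sys.exit(main())

Because the script propagates Checkov's exit code, the pipeline blocks the merge automatically whenever the generated code violates a policy, which is the control the correct option relies on.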

Objective domain: Automation, Orchestration, and Scripting