CompTIA DataX DY0-001 (V1) Practice Question

An enterprise team must adapt a 13-billion-parameter transformer-based large language model to its proprietary support-ticket corpus. Requirements are:

  1. Keep all original model weights frozen for compliance review.
  2. Add and train no more than roughly 1% additional parameters, to minimize GPU memory use during training.
  3. Once fine-tuning is complete, incur zero additional inference latency because any extra parameters will be merged into the base weights.

Which parameter-efficient adaptation technique best satisfies all three of these constraints by inserting trainable low-rank matrices into each transformer layer during fine-tuning?

  • Knowledge distillation into a smaller student model

  • Prefix tuning with virtual key/value vectors

  • Low-Rank Adaptation (LoRA)

  • Dynamic token pruning during inference
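The mechanism the stem describes can be illustrated with a toy NumPy sketch (hypothetical dimensions `d` and rank `r`; this is a minimal illustration of the low-rank update and merge, not a production implementation). A frozen weight matrix `W` is adapted by training only the small factors `A` and `B`; after fine-tuning, their product is folded into `W` so inference needs a single matmul:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 4          # hypothetical hidden size and low-rank rank
alpha = 8.0             # scaling factor for the low-rank update

W = rng.standard_normal((d, d))           # frozen base weight (requirement 1)
A = rng.standard_normal((r, d)) * 0.01    # trainable low-rank factor
B = rng.standard_normal((d, r)) * 0.01    # trainable low-rank factor
                                          # (stands in for post-training values)

x = rng.standard_normal(d)                # a sample input activation

# During fine-tuning only A and B receive gradients; the effective weight is
# W + (alpha / r) * B @ A, computed on the fly.
delta = (alpha / r) * (B @ A)

# After fine-tuning, merge the update into the base weight once, so inference
# adds zero extra latency (requirement 3).
W_merged = W + delta
assert np.allclose(x @ W_merged.T, x @ W.T + x @ delta.T)

# Trainable-parameter overhead: 2*d*r extra vs. d*d frozen (requirement 2).
extra_frac = (2 * d * r) / (d * d)        # 0.78% for these toy dimensions
```

With realistic model dimensions (r much smaller than d), the `2*d*r / (d*d)` fraction stays well under 1%, which is why this approach satisfies the memory constraint in the stem.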

Specialized Applications of Data Science