CompTIA DataX DY0-001 (V1) Practice Question: Modeling, Analysis, and Outcomes

During a model-design iteration you need to tune eight mixed hyperparameters for a multilingual transformer-based text-classification model. One full training run takes about 12 GPU-hours, but experiments from earlier sprints show that the F1 score measured after the first epoch strongly predicts the final F1. Your team has an overall budget of 48 GPU-hours for this tuning cycle and wants the single best F1 score achievable within that limit. Which hyperparameter-search strategy is the MOST appropriate given these constraints?

  • Run an exhaustive grid search that trains every hyperparameter combination to full convergence.

  • Perform a random search in which each randomly selected configuration is trained for the full 12 GPU-hours.

  • Use Hyperband with successive halving so each trial starts with one epoch and additional epochs are allocated only to the best-performing configurations.

  • Apply Gaussian-process Bayesian optimization, training each proposed configuration for the maximum number of epochs.
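
The 48 GPU-hour budget covers only four complete 12 GPU-hour runs (48 / 12 = 4), far too few to explore eight hyperparameters with any full-training strategy, whereas a one-epoch screen lets many more configurations compete before the budget is spent. Below is a minimal sketch of a single successive-halving bracket (full Hyperband sweeps several such brackets at different starting budgets). It assumes a full run spans 12 epochs, so one epoch costs roughly 1 GPU-hour; sample_config, train_one_epoch, and the simulated F1 curve are illustrative placeholders, not part of the exam item.

    import random

    # Budget figures come from the question; the 12-epochs-per-full-run
    # assumption (hence ~1 GPU-hour per epoch) is illustrative only.
    BUDGET_GPU_HOURS = 48.0
    EPOCH_COST_GPU_HOURS = 1.0

    def sample_config():
        # Draw one random setting; only two of the eight mixed
        # hyperparameters are shown here.
        return {"lr": 10 ** random.uniform(-5, -3),
                "batch_size": random.choice([16, 32, 64])}

    def train_one_epoch(trial):
        # Placeholder for a real training epoch: simulate an F1 that
        # rises with epochs toward a config-dependent ceiling.
        trial["epochs"] += 1
        trial["f1"] = trial["ceiling"] * (1 - 0.5 ** trial["epochs"])

    def successive_halving(n_configs=16, eta=2):
        spent = 0.0
        trials = [{"config": sample_config(), "epochs": 0, "f1": 0.0,
                   "ceiling": random.uniform(0.6, 0.9)}
                  for _ in range(n_configs)]
        # Each round: one more epoch for every survivor, then keep the
        # top 1/eta by F1 -- the early scores stand in for final quality.
        while (len(trials) > 1 and
               spent + len(trials) * EPOCH_COST_GPU_HOURS <= BUDGET_GPU_HOURS):
            for t in trials:
                train_one_epoch(t)
                spent += EPOCH_COST_GPU_HOURS
            trials.sort(key=lambda t: t["f1"], reverse=True)
            trials = trials[: max(1, len(trials) // eta)]
        best = max(trials, key=lambda t: t["f1"])
        print(f"evaluated {n_configs} configs, best F1 {best['f1']:.3f} "
              f"after {best['epochs']} epochs, {spent:.0f} GPU-hours spent")
        return best

    if __name__ == "__main__":
        successive_halving()

With 16 starting configurations and a halving factor of 2, this bracket gives every candidate at least one epoch yet spends about 30 GPU-hours, comfortably inside the 48 GPU-hour limit; the survivors of each round are exactly the configurations whose early F1, per the scenario, predicts their final F1.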
