CompTIA DataX DY0-001 (V1) Practice Question

A data-science team is replacing a scikit-learn GradientBoostingClassifier with XGBoost because the original model suffers from severe overfitting on a wide tabular data set. Their primary goal is to tightly control model complexity by directly penalizing large leaf weights (and optionally excessive numbers of leaves) during training. Which built-in XGBoost feature should they rely on to meet this goal?

  • L1 and L2 regularization terms (reg_alpha / reg_lambda) built into the objective function

  • Use of second-order (Hessian) information when computing the best split

  • Training trees with the histogram-based ("hist") tree_method for faster node expansion

  • Automatic treatment of missing values when selecting split directions
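
For context: XGBoost's training objective adds a per-tree regularization term Omega(f) = gamma*T + (1/2)*lambda*sum_j(w_j^2) + alpha*sum_j(|w_j|), where T is the number of leaves and w_j are the leaf weights. The reg_alpha and reg_lambda parameters map to alpha and lambda, and gamma penalizes the leaf count. Below is a minimal sketch using the scikit-learn wrapper (xgboost.XGBClassifier); the hyperparameter values are illustrative rather than tuned, and X_train/y_train are hypothetical stand-ins for the team's wide tabular data:

  from xgboost import XGBClassifier

  model = XGBClassifier(
      n_estimators=300,
      max_depth=4,
      learning_rate=0.1,
      reg_alpha=1.0,    # L1 (alpha) penalty on leaf weights: pushes small weights to zero
      reg_lambda=5.0,   # L2 (lambda) penalty on leaf weights: shrinks large weights
      gamma=0.5,        # gamma*T term: minimum loss reduction required to keep a split
  )
  # model.fit(X_train, y_train)  # hypothetical training data

Raising reg_lambda or reg_alpha shrinks leaf weights smoothly, while gamma prunes splits whose loss reduction falls below the threshold; together these directly constrain model complexity in the way the stem describes.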
