CompTIA DataX DY0-001 (V1) Practice Question

During an ablation study you train two otherwise identical multilayer perceptrons on the same data set:

  • Network A uses the logistic sigmoid (σ) activation in every hidden layer.
  • Network B uses the hyperbolic tangent (tanh) activation in every hidden layer.

With the same optimizer, learning-rate schedule, batch size, and weight initialization, Network B reaches the target validation loss in roughly half the epochs required by Network A.
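
For concreteness, the two architectures might be built as below. This is a minimal sketch assuming PyTorch, with hypothetical layer sizes, since the stem specifies neither a framework nor dimensions; the hidden activation is the only component that differs.

```python
import torch.nn as nn

def make_mlp(act: nn.Module, in_dim: int = 64,
             hidden: int = 128, out_dim: int = 10) -> nn.Sequential:
    """Two hidden layers; the activation `act` is the only varying part."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), act,
        nn.Linear(hidden, hidden), act,
        nn.Linear(hidden, out_dim),
    )

net_a = make_mlp(nn.Sigmoid())  # Network A: logistic sigmoid in every hidden layer
net_b = make_mlp(nn.Tanh())     # Network B: tanh in every hidden layer
```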

Which intrinsic property of the logistic sigmoid most plausibly explains the slower convergence of Network A?

  • Its derivative equals one at zero, leading to gradient magnitudes that explode during early training.

  • It is not differentiable for negative inputs, so back-propagation cannot adjust weights efficiently.

  • Output values are strictly positive, so the activations are not zero-centered; the resulting weight updates zig-zag during gradient descent, slowing learning.

  • The exponential operations in its formula require more floating-point instructions, and this computational overhead dominates training time.
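
Of these, the non-zero-centered output is the standard explanation: σ(z) ∈ (0, 1), so every hidden activation is positive, and the gradient of each weight feeding a given unit shares the sign of that unit's upstream error term, which forces zig-zag update directions. Tanh outputs lie in (−1, 1) and are roughly zero-mean for symmetric inputs, avoiding the problem. A minimal NumPy sketch of the effect (the upstream error value below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)        # symmetric pre-activations

sigmoid = 1.0 / (1.0 + np.exp(-z))  # outputs in (0, 1): strictly positive
tanh = np.tanh(z)                   # outputs in (-1, 1): roughly zero-mean

print(f"sigmoid: min={sigmoid.min():.3f}, mean={sigmoid.mean():.3f}")
print(f"tanh:    min={tanh.min():.3f}, mean={tanh.mean():.3f}")

# The gradient of a weight w_ij is (upstream error of unit i) * (input
# activation a_j). With every a_j > 0, all components of that unit's
# weight gradient share the sign of its upstream error term.
upstream_error = -0.7                # hypothetical scalar error signal
grad = upstream_error * sigmoid[:5]  # every component has the same sign
print("sample sigmoid-layer gradient signs:", np.sign(grad))
```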
