CompTIA DataX DY0-001 (V1) Practice Question

A data scientist implements a multilayer perceptron with three hidden layers, but mistakenly sets every neuron's activation function to the identity mapping f(x)=x instead of a non-linear function such as ReLU. After training, the network behaves exactly like a single-layer linear model, no matter how many hidden units it contains. Which explanation best describes why the network loses expressive power in this situation?

  • Identity activations implicitly impose strong L2 regularization on the weights, preventing the model from fitting non-linear patterns.

  • Identity activations force all bias terms to cancel during forward propagation, eliminating the offsets needed for non-linear decision boundaries.

  • Using identity activations makes every weight matrix symmetric and rank-deficient, restricting the network to learn only linear relationships.

  • Composing purely affine transformations (weights and bias) produces another affine transformation, so without a non-linear activation every layer collapses into one overall linear mapping of the inputs.
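
The final option describes the actual mechanism. Two identity-activated layers compose as W2(W1x + b1) + b2 = (W2W1)x + (W2b1 + b2), which is again a single affine map; by induction the entire stack collapses into one effective weight matrix and one effective bias. Below is a minimal NumPy sketch of this collapse; the layer sizes and variable names are illustrative, not taken from the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes for an MLP with three hidden layers:
# input -> h1 -> h2 -> h3 -> output
sizes = [4, 8, 8, 8, 1]

# Random weights and biases for each affine layer.
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.normal(size=m) for m in sizes[1:]]

def mlp_identity(x):
    """Forward pass with identity activations: a = W @ a + b at every layer."""
    a = x
    for W, b in zip(Ws, bs):
        a = W @ a + b  # f(z) = z, so no non-linearity is ever applied
    return a

# Fold the whole stack algebraically into one affine map W_eff @ x + b_eff.
W_eff, b_eff = Ws[0], bs[0]
for W, b in zip(Ws[1:], bs[1:]):
    W_eff = W @ W_eff        # weights compose by matrix product
    b_eff = W @ b_eff + b    # biases fold into a single offset vector

x = rng.normal(size=sizes[0])
print(np.allclose(mlp_identity(x), W_eff @ x + b_eff))  # True
```

Because W_eff and b_eff always exist regardless of depth or width, the hypothesis class never grows beyond a linear model; a non-linear activation such as ReLU is what prevents the layers from merging this way.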
