CompTIA DataX DY0-001 (V1) Practice Question

A machine learning engineer is training a large-scale deep neural network. During training, they observe that the loss decreases very slowly and oscillates significantly, suggesting the optimization process is struggling with a complex loss landscape containing numerous saddle points and ravines. The engineer has already tuned the learning rate, but the problem persists. To improve training stability and accelerate convergence, the engineer needs to select a more suitable optimizer.

Given this scenario, which optimizer would be the most effective choice to simultaneously address both the slow convergence and the high variance in the loss updates?

  • Stochastic Gradient Descent (SGD) with Momentum

  • Adam optimizer

  • Mini-batch Gradient Descent

  • Root Mean Square Propagation (RMSprop)
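
For context on the APIs these options correspond to, here is a minimal PyTorch sketch showing how each candidate optimizer is constructed. The `nn.Linear` model is a placeholder standing in for the large network in the scenario, and the hyperparameter values are illustrative rather than tuned; the comments note the mechanism each optimizer brings to the slow-convergence and oscillation problems described above.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the large deep network in the scenario.
model = nn.Linear(10, 1)

# SGD with Momentum: accumulates a velocity term so steps accelerate along
# consistent gradient directions and damp back-and-forth motion across the
# walls of a ravine.
sgd_momentum = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Mini-batch Gradient Descent: plain SGD (momentum=0) applied to batches
# drawn from a DataLoader; it reduces gradient noise relative to
# single-sample updates but adds no acceleration or adaptivity.
minibatch_gd = torch.optim.SGD(model.parameters(), lr=0.01)

# RMSprop: divides each parameter's step by a running average of squared
# gradients, shrinking steps along high-variance directions.
rmsprop = torch.optim.RMSprop(model.parameters(), lr=0.001, alpha=0.99)

# Adam: combines a momentum-style first-moment estimate with RMSprop-style
# second-moment scaling in a single update rule.
adam = torch.optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
```

As the comments indicate, Adam's update pairs a momentum term (addressing slow convergence) with per-parameter adaptive scaling (damping high-variance updates), which is why it is often the default choice when both symptoms appear together.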
