CompTIA DataX DY0-001 (V1) Practice Question

A data-science team implements a single-layer perceptron whose decision function is sign(w·x) (no bias term). On a linearly separable dataset, training ends with weights that still misclassify several points because the separating hyperplane is forced to pass through the origin. The engineers consider adding an explicit bias neuron so that the decision function becomes sign(w·x + b). Why does introducing this bias term usually allow the perceptron to find a weight vector that perfectly separates the same dataset without changing the learning rule?

  • It converts the classifier from a homogeneous to an affine hyperplane, allowing the decision boundary to shift away from the origin while keeping its orientation.

  • It introduces a non-linear interaction that enables the perceptron to model non-linearly separable patterns.

  • It constrains weight growth during gradient descent, thereby preventing over-fitting.

  • It reduces the dimensionality of the input space, making the weight search easier.
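The first option is correct: the bias turns the homogeneous boundary w·x = 0 into the affine boundary w·x + b = 0, which can sit anywhere in the input space. The sketch below (not part of the original question) illustrates this with a classic perceptron learning rule on a tiny hypothetical dataset that is linearly separable, but only by a boundary that does not pass through the origin; the helper name train_perceptron and the data are illustrative assumptions.

```python
# Minimal sketch, assuming a standard perceptron update rule (w <- w + lr*y_i*x_i
# on each mistake) and a toy 1-D dataset; not taken from the question source.
import numpy as np

def train_perceptron(X, y, use_bias, epochs=100, lr=1.0):
    """Train a perceptron with decision function sign(w·x) or sign(w·x + b)."""
    if use_bias:
        # Absorb the bias as an extra constant-1 feature, equivalent to w·x + b.
        X = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi
                errors += 1
        if errors == 0:                   # perfect separation reached
            break
    preds = np.sign(X @ w)
    return w, float(np.mean(preds == y))

# Toy data: class -1 at x in {1, 2}, class +1 at x in {4, 5}.
# Any separating threshold lies near x = 3, so the boundary cannot pass through
# the origin: sign(w*x) gives both classes the same sign for any w != 0,
# while sign(w*x + b) can place the threshold at roughly x = 3.
X = np.array([[1.0], [2.0], [4.0], [5.0]])
y = np.array([-1, -1, 1, 1])

for use_bias in (False, True):
    w, acc = train_perceptron(X, y, use_bias)
    print(f"bias={use_bias}: weights={w}, training accuracy={acc:.2f}")
```

Run as-is, the bias-free model should plateau at 50% training accuracy no matter how long it trains, while the biased model should reach 100%, with the same update rule in both cases; the bias only shifts the boundary away from the origin without changing its orientation, which is exactly what the first option states.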
