Use training data that includes a wide range of demographic groups - This is the correct answer. A training set that is diverse and representative of many demographic groups helps reduce bias, so the automated loan approval system is more likely to treat all applicants fairly. Because the model learns from a broad spectrum of examples, it is less prone to systematically disadvantaging groups that would otherwise be underrepresented.
Exclude any features related to personal characteristics from the data - While this may seem like a straightforward way to prevent bias, simply dropping personal characteristics (such as age, gender, or race) does not guarantee fairness: correlated proxy features (for example, postal code) can still encode the same bias, and the model loses context needed to detect and correct unfair outcomes.
Increase the complexity of the algorithm to improve accuracy - A more complex algorithm might improve accuracy, but it does not address fairness; a highly accurate model can still faithfully reproduce the biases present in its training data.
Expand the dataset by collecting more data of the same type - Collecting more of the same kind of data does not improve fairness if that data is already biased or unrepresentative; it simply reinforces the existing skew.
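The core idea behind the correct answer can be made concrete with a quick representation check. The sketch below (names, group labels, and the tolerance threshold are all illustrative, not part of the exam content) compares each group's share of the training data against its share in a reference population and flags underrepresented groups:

```python
from collections import Counter

def underrepresented_groups(samples, reference_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls below
    `tolerance` times their share in the reference population.
    (Group names and the 0.5 threshold are illustrative choices.)"""
    counts = Counter(samples)
    total = len(samples)
    flagged = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            flagged.append((group, round(observed, 3), expected))
    return flagged

# Hypothetical loan-application dataset skewed toward group "A".
applicants = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
population = {"A": 0.5, "B": 0.3, "C": 0.2}
print(underrepresented_groups(applicants, population))
# → [('C', 0.05, 0.2)]  — group "C" is underrepresented
```

A check like this would run before training; if a group is flagged, the remedy is targeted data collection or reweighting for that group, not just more data of the same type.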
Microsoft Azure AI Fundamentals AI-900
Describe Artificial Intelligence Workloads and Considerations