An automated trading-surveillance system must ensure that at least 70% of the orders it flags as suspicious are truly manipulative, so the compliance team has set a minimum precision threshold of 0.70. Two candidate classifiers were evaluated on a validation set of 50,000 historical orders with the following confusion-matrix counts:
Precision is the fraction of flagged orders that are genuinely manipulative, calculated as TP / (TP + FP). Because only Classifier X achieves a precision of at least 0.70, it alone satisfies the compliance requirement. Classifier Y falls short despite having more true positives, because it also generates proportionally more false positives, which enlarge the denominator and pull its precision below the threshold.
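The comparison can be checked with a short script. The sketch below uses hypothetical confusion-matrix counts for illustration only (the actual validation-set counts are not reproduced here); the 0.70 threshold is the compliance requirement stated above.

```python
def precision(tp: int, fp: int) -> float:
    """Precision = TP / (TP + FP): the share of flagged orders that are truly manipulative."""
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

# Hypothetical counts for illustration only; substitute the real validation-set numbers.
classifiers = {
    "Classifier X": {"tp": 1400, "fp": 500},   # precision ≈ 0.737
    "Classifier Y": {"tp": 1800, "fp": 1100},  # precision ≈ 0.621
}

MIN_PRECISION = 0.70  # compliance team's minimum precision threshold

for name, counts in classifiers.items():
    p = precision(counts["tp"], counts["fp"])
    status = "meets" if p >= MIN_PRECISION else "fails"
    print(f"{name}: precision = {p:.3f} ({status} the {MIN_PRECISION:.2f} requirement)")
```

With these illustrative numbers, Classifier Y produces more true positives than Classifier X but still fails the check, mirroring the reasoning above: the precision requirement penalizes every additional false positive regardless of how many true positives accompany it.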