A data science team is developing a real-time fraud detection model for financial transactions. The deployment specifications are strict: inference latency must not exceed 100 ms to ensure a seamless user experience, and the model must achieve a recall of at least 0.92 to minimize missed fraudulent transactions. After experimenting with several architectures, the team has narrowed the choice to three models and compiled the following specification-testing results:
| Model | Recall | F1-Score | Average Inference Latency (ms) |
|---|---|---|---|
| Model A (DNN) | 0.95 | 0.91 | 145 |
| Model B (GBM) | 0.93 | 0.92 | 85 |
| Model C (LogReg) | 0.88 | 0.89 | 20 |
Based on an analysis of these specification testing results, which model should be recommended for deployment?
- Model A (DNN), because it has the highest recall, which is the most critical metric for minimizing missed fraud.
- Model B (GBM), because it is the only model that satisfies both the minimum recall and maximum latency requirements.
- Model C (LogReg), because its extremely low latency provides the best user experience while maintaining a high F1-Score.
- None of the models are suitable, as no single model optimizes both recall and latency simultaneously.
The correct answer is Model B (GBM). The business requirements mandate a recall of at least 0.92 and an inference latency of no more than 100 ms. Model B is the only model that satisfies both specifications, with a recall of 0.93 and an average latency of 85 ms. Model A offers the highest recall (0.95) but fails the latency requirement (145 ms). Model C has excellent latency (20 ms) but falls short of the minimum recall requirement (0.88). Model B therefore represents the best trade-off among the candidates and the only one that aligns with the project's defined constraints.
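The same reasoning can be expressed as a simple constraint filter: eliminate any candidate that violates a hard requirement, then rank the survivors. The sketch below is plain Python; the `pick_model` helper and the dictionary layout are illustrative choices, with the metric values taken directly from the table above.

```python
# Minimal sketch: filter candidates on the hard deployment constraints,
# then rank the feasible models by recall. Values come from the
# specification-testing table; the helper name is illustrative.

candidates = [
    {"name": "Model A (DNN)",    "recall": 0.95, "f1": 0.91, "latency_ms": 145},
    {"name": "Model B (GBM)",    "recall": 0.93, "f1": 0.92, "latency_ms": 85},
    {"name": "Model C (LogReg)", "recall": 0.88, "f1": 0.89, "latency_ms": 20},
]

MIN_RECALL = 0.92      # recall of at least 0.92 to limit missed fraud
MAX_LATENCY_MS = 100   # inference must complete within 100 ms


def pick_model(models):
    """Return the highest-recall model that meets both hard constraints."""
    feasible = [
        m for m in models
        if m["recall"] >= MIN_RECALL and m["latency_ms"] <= MAX_LATENCY_MS
    ]
    if not feasible:
        return None  # no candidate satisfies the specification
    return max(feasible, key=lambda m: m["recall"])


print(pick_model(candidates)["name"])  # -> Model B (GBM)
```

Treating the requirements as filters rather than objectives is the key design choice here: Model A's higher recall never enters the ranking because it is infeasible on latency, which mirrors why option A is a distractor.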