You are building a 30-day hospital readmission classifier. Regulators require that every prediction can be traced back to global curves fᵢ(xᵢ) that show how each individual predictor (age, creatinine, etc.) contributes to the log-odds of readmission; these curves must be viewable by clinicians for any patient. The data-science team also wants the model to capture nonlinear effects (for example, risk increasing until about age 75 and then leveling off) without using post-hoc surrogate explainers or manual feature engineering. Which modelling approach best satisfies these constraints while keeping the model intrinsically interpretable?
Deep feed-forward neural network with ReLU activations and dropout
Support vector machine using an RBF (Gaussian) kernel
Generalized additive model (for example, an Explainable Boosting Machine)
Random forest ensemble with 500 unpruned decision trees
Generalized additive models (GAMs) express the transformed target (here, the log-odds after a logit link) as β₀ + Σfᵢ(xᵢ), where each fᵢ is a learned, smooth, one-dimensional function of a single feature. Because every predictor's contribution is isolated and additive, clinicians can plot or inspect fᵢ(xᵢ) to see its global effect, which is exactly the transparency regulators demand. Modern implementations such as Explainable Boosting Machines learn these shape functions automatically, capturing rich nonlinear patterns without hand-crafted splines.
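As a minimal sketch of this workflow, the snippet below fits an Explainable Boosting Machine on synthetic readmission-style data and retrieves the global shape functions. It assumes the interpretml package (`pip install interpret`); the column names, coefficients, and data-generating rule are illustrative only, not part of the scenario above.

```python
# Minimal sketch: fitting an EBM (a tree-based GAM) on synthetic data.
# Assumes the interpretml package is installed; feature names and the
# synthetic label rule are made up for illustration.
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "age": rng.integers(40, 95, size=n),
    "creatinine": rng.normal(1.1, 0.4, size=n).clip(0.3, 4.0),
})
# Synthetic labels: risk rises with age until ~75, then levels off.
logit = 0.06 * np.minimum(X["age"], 75) + 0.8 * X["creatinine"] - 6.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# The EBM learns one shape function f_i(x_i) per feature on the
# log-odds scale, with no manual spline specification.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global explanation: the fitted curves f_age(age) and f_creatinine(creatinine)
# that clinicians and regulators can inspect directly.
global_exp = ebm.explain_global()
# from interpret import show; show(global_exp)  # renders interactive plots
```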
A deep feed-forward neural network with ReLU activations and dropout is highly expressive, but its internal weights and activations do not yield simple, global per-feature curves, so it remains a black box that requires post-hoc explainers.
A random forest of hundreds of unpruned trees can provide variable-importance scores, yet the ensemble decision logic is distributed across many heterogeneous paths, sacrificing the interpretability of a single small tree.
A support vector machine with an RBF (Gaussian) kernel computes its decision function in an implicit, infinite-dimensional feature space; no per-feature coefficients or curves are available, so only local surrogate methods can approximate its reasoning.
Therefore, a generalized additive model is the only option that is both intrinsically interpretable at a global level and able to learn nonlinear, per-feature relationships.
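For concreteness, here is a purely illustrative decomposition of one hypothetical patient's prediction; the intercept and shape-function values are made up, not taken from any fitted model, but they show how the score traces back to the global curves:

```latex
% Illustrative only: hypothetical intercept and shape-function values for one patient
\[
\log\frac{p}{1-p}
  = \underbrace{\beta_0}_{-3.0}
  + \underbrace{f_{\text{age}}(82)}_{+1.1}
  + \underbrace{f_{\text{creatinine}}(1.8)}_{+0.6}
  = -1.3
\qquad\Rightarrow\qquad
p = \frac{1}{1 + e^{1.3}} \approx 0.21
\]
```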