Adjusted R² adds a penalty for model complexity by multiplying the unexplained variance proportion, 1 − R², by (n − 1)/(n − k − 1), where n is the sample size and k is the number of predictors. The model with the highest adjusted R² therefore offers the best expected generalization while avoiding unnecessary predictors. In the table, M2 has the largest adjusted R² (0.809), so the 10-predictor model strikes the best balance between fit and parsimony. M3's raw R² rises slightly, but its heavier complexity penalty pushes its adjusted R² below both other models, signaling possible overfitting. M1 is simplest but gives up explanatory power compared with M2. Discriminating among a small numerical spread like this is precisely what adjusted R² is designed to do; no additional test is required to see that M2 is preferred when this metric is the decision criterion.
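To make the penalty concrete, the sketch below computes adjusted R² from the standard formula, adjusted R² = 1 − (1 − R²)(n − 1)/(n − k − 1), for three models. The sample size, predictor counts, and raw R² values are illustrative placeholders rather than the figures from the question's table; they are chosen only to reproduce the qualitative pattern described above.

```python
def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative values only -- not the figures from the question's table.
n = 50  # hypothetical sample size
models = {
    "M1": (0.820, 5),    # (raw R^2, number of predictors k)
    "M2": (0.850, 10),
    "M3": (0.855, 15),
}

for name, (r2, k) in models.items():
    print(f"{name}: raw R^2 = {r2:.3f}, adjusted R^2 = {adjusted_r2(r2, n, k):.3f}")

# Expected output (rounded):
#   M1: raw R^2 = 0.820, adjusted R^2 = 0.800
#   M2: raw R^2 = 0.850, adjusted R^2 = 0.812
#   M3: raw R^2 = 0.855, adjusted R^2 = 0.791
# M3 has the highest raw R^2, but its extra predictors cost more than the
# tiny gain in fit, so its adjusted R^2 falls below both M1 and M2 -- the
# same qualitative pattern the explanation describes.
```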