Interpretable Models Do Not Compromise Accuracy or Fairness in Predicting College Success

Citation:

Kung C, Yu R. Interpretable Models Do Not Compromise Accuracy or Fairness in Predicting College Success. In: IJCAI 2021 Workshop on AI for Social Good; 2021.

Abstract:

The presence of "big data" in higher education has led to the increasing popularity of predictive analytics for guiding various stakeholders on appropriate actions to support student success. In developing such applications, model selection is a central issue. As such, this study presents a comprehensive examination of five commonly used machine learning models in student success prediction. Using administrative and learning management system (LMS) data for nearly 2,000 college students at a public university, we employ the models to predict short-term and long-term academic success. Beyond the tradeoff between model interpretability and accuracy, we also focus on the fairness of these models with regard to different student populations. Our findings suggest that more interpretable models such as logistic regression do not necessarily compromise predictive accuracy. They also produce no more, and sometimes less, prediction bias against disadvantaged student groups than more complicated models. Moreover, prediction biases against certain groups persist even in the fairest model. These results thus recommend using simpler algorithms in conjunction with human evaluation in instructional and institutional applications of student success prediction when valid student features are in place.
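The abstract does not specify which fairness measures were used, but group-level prediction bias of the kind described is often quantified with gaps in positive-prediction rates (demographic parity) or false-negative rates across student groups. The sketch below illustrates one such check on hypothetical predictions; the data, group labels, and choice of metrics are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical sketch of a group-fairness check for a student-success
# classifier: demographic-parity gap and false-negative-rate gap between
# two student groups. Metrics and data are illustrative assumptions.

def group_rates(y_true, y_pred, groups, group):
    """Positive-prediction rate and false-negative rate within one group."""
    idx = [i for i, g in enumerate(groups) if g == group]
    pos_rate = sum(y_pred[i] for i in idx) / len(idx)
    false_neg = sum(1 for i in idx if y_true[i] == 1 and y_pred[i] == 0)
    positives = sum(1 for i in idx if y_true[i] == 1)
    fnr = false_neg / positives if positives else 0.0
    return pos_rate, fnr

def fairness_gaps(y_true, y_pred, groups, a, b):
    """Absolute gaps in positive-prediction rate and FNR between groups a and b."""
    pos_a, fnr_a = group_rates(y_true, y_pred, groups, a)
    pos_b, fnr_b = group_rates(y_true, y_pred, groups, b)
    return abs(pos_a - pos_b), abs(fnr_a - fnr_b)

# Hypothetical labels: 1 = student succeeded, 0 = did not.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp_gap, fnr_gap = fairness_gaps(y_true, y_pred, groups, "A", "B")
print(f"demographic-parity gap: {dp_gap:.3f}, FNR gap: {fnr_gap:.3f}")
```

Comparing such gaps across models (e.g., logistic regression versus a more complex learner) is one way to operationalize the paper's claim that interpretable models need not be less fair.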