Fairness, in machine learning research, is often conceived as an exercise in constrained optimization relative to a predefined fairness metric. We argue that this abstract model of algorithmic fairness is a poor match for the real world, in which applications are typically embedded within a larger context involving multiple classes of stakeholders as well as multiple social and technical systems. We should therefore expect multiple, competing claims about fairness from different stakeholders, especially in applications oriented towards social good. We propose that computational social choice is a promising framework for integrating multiple perspectives on system outcomes in fairness-aware systems, and we provide an example case of personalized recommendation for a non-profit.
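To make the social-choice framing concrete, here is a minimal illustrative sketch (not the paper's method): several hypothetical stakeholder groups each rank candidate system outcomes, and a classic aggregation rule from computational social choice, the Borda count, combines those rankings into a collective choice. All stakeholder names, policies, and rankings below are invented for illustration.

```python
# Illustrative sketch: aggregating stakeholders' preference rankings over
# candidate system outcomes with a Borda count, one classic rule from
# computational social choice. All names and rankings are hypothetical.

def borda_count(rankings):
    """Each ranking lists outcomes from most to least preferred.
    With m outcomes, the outcome at 0-based position i earns m - 1 - i points."""
    scores = {}
    for ranking in rankings:
        m = len(ranking)
        for i, outcome in enumerate(ranking):
            scores[outcome] = scores.get(outcome, 0) + (m - 1 - i)
    return scores

# Hypothetical stakeholders -- end users, the non-profit's staff, and item
# providers -- each rank three candidate recommendation policies A, B, C.
rankings = [
    ["A", "B", "C"],  # end users
    ["B", "A", "C"],  # non-profit staff
    ["B", "C", "A"],  # providers
]
scores = borda_count(rankings)
winner = max(scores, key=scores.get)
print(scores, winner)  # B wins: broadly acceptable, though not everyone's first choice
```

The point of the sketch is that the selected policy need not be any single stakeholder's optimum; the aggregation rule mediates among competing fairness claims rather than optimizing one predefined metric.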
AI for Social Good event