Computational tools are poised to play an increasingly large role across many domains of society, including public health and conservation. These tools must therefore be designed to deliver equitable benefits for everyone.
To that end, we need to bring in a diverse set of perspectives spanning algorithmic fairness, human-centered computing, and sustained deployment. This seminar series will explore how artificial intelligence can equitably solve social problems. For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can human-centered computing methods be used to ensure AI systems are ethical, inclusive, and accountable?
- Arpita Biswas (firstname.lastname@example.org, sites.google.com/view/arpitabiswas)
- Herman Saksono (email@example.com, hermansaksono.com)
Omer Reingold, professor of computer science at Stanford University and director of the Simons Collaboration on the Theory of Algorithmic Fairness
Abstract: Prediction algorithms assign numbers to individuals that are popularly understood as individual “probabilities”—what is the probability of 5-year survival after cancer diagnosis?—and which increasingly form the basis for life-altering decisions. The philosophical and practical understanding of individual probabilities in the context of events that are non-repeatable has been the focus of intense study for decades by the statistics community. The wide-scale impact of automatic decision making calls for revisiting these questions from a computational perspective.
In this vein, and building on notions developed in complexity theory and cryptography, we introduce and study Outcome Indistinguishability. Predictors that are Outcome Indistinguishable yield a model of probabilities that cannot be efficiently refuted on the basis of the real-life observations produced by Nature.
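To make the notion concrete, here is a minimal Python sketch (not from the talk; the linear "Nature" model and the toy distinguisher are illustrative assumptions). Informally, a predictor passes a distinguisher's test when the distinguisher's acceptance rate on real outcomes is close to its rate on outcomes sampled from the predictor's own probabilities:

```python
import random

random.seed(0)

def nature_outcome(x):
    # Stand-in for Nature: the real-life process producing observed outcomes.
    # (In practice this process is unknown; here it is simulated for illustration.)
    return 1 if random.random() < 0.3 + 0.4 * x else 0

def predictor(x):
    # Candidate predictor assigning an individual "probability" to x.
    return 0.3 + 0.4 * x

def distinguisher(x, o):
    # A toy efficient distinguisher: accepts (x, outcome) pairs it deems "real".
    return o == 1 and x > 0.5

# Outcome Indistinguishability, informally: no efficient distinguisher should
# behave noticeably differently on (x, Nature's outcome) pairs than on
# (x, outcome sampled from the predictor's probability) pairs.
xs = [random.random() for _ in range(100_000)]
real = sum(distinguisher(x, nature_outcome(x)) for x in xs) / len(xs)
simulated = sum(distinguisher(x, 1 if random.random() < predictor(x) else 0)
                for x in xs) / len(xs)

gap = abs(real - simulated)  # near zero here: this distinguisher fails to refute the predictor
```

Because the predictor in this sketch matches Nature's probabilities exactly, the gap is small; an Outcome Indistinguishable predictor must keep this gap small against every efficient distinguisher, not just one.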
The talk will be self-contained and will explain the relevant complexity-theoretic and algorithmic-fairness literatures in which this work is grounded. Our focus will be on the insights that can be drawn from this work, such as providing scientific grounds for the political argument that, when inspecting algorithmic risk prediction instruments, auditors should be granted oracle access to the algorithm, not simply historical predictions.
Based on joint research with Cynthia Dwork, Michael Kim, Guy Rothblum, and Gal Yona.
Bio: Omer Reingold is the Rajeev Motwani professor of computer science at Stanford University and the director of the Simons Collaboration on the Theory of Algorithmic Fairness. Past positions include Samsung Research America, the Weizmann Institute of Science, Microsoft Research, the Institute for Advanced Study in Princeton, NJ, and AT&T Labs. His research is in the foundations of computer science, most notably in computational complexity, cryptography, and the societal impact of computation. He is an ACM Fellow and a Simons Investigator. Among his distinctions are the 2005 Grace Murray Hopper Award and the 2009 Gödel Prize.