Computational tools are poised to play an increasingly large role in our society across many domains, including public health and conservation. These tools must therefore be designed to ensure equitable benefits for everyone.
To that end, we need to bring in a diverse set of perspectives spanning algorithmic fairness, human-centered computing, and sustained deployment. This seminar series will explore how artificial intelligence can equitably solve social problems. For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can human-centered computing methods ensure that AI systems are ethical, inclusive, and accountable?
- Arpita Biswas (email@example.com, sites.google.com/view/arpitabiswas)
- Herman Saksono (firstname.lastname@example.org, hermansaksono.com)
Title: Machine Learning for Healthcare: Steps Towards a Personalised Approach
Abstract: Machine learning advances are opening new routes to more precise healthcare, from the discovery of disease subtypes for stratified interventions to the development of personalised interactions supporting self-care between clinic visits. This offers an exciting opportunity for machine learning techniques to impact healthcare in a meaningful way. Within the healthcare domain, machine learning for mental healthcare is an under-investigated yet potentially highly impactful area of research. In this talk, I will present recent work on machine learning to enable a more personalised approach to mental healthcare, whereby information can be aggregated from multiple sources within a unified modelling framework. I will present applications from both mental health and respiratory diseases.
Title: Computational Insights on the Meaning of Individual Probabilities
Abstract: Prediction algorithms assign numbers to individuals that are popularly understood as individual “probabilities”—what is the probability of 5-year survival after cancer diagnosis?—and which increasingly form the basis for life-altering decisions. The philosophical and practical understanding of individual probabilities in the context of events that are non-repeatable has been the focus of intense study for decades by the statistics community. The wide-scale impact of automatic decision making calls for revisiting these questions from a computational perspective.
In this vein and building off of notions developed in complexity theory and cryptography, we introduce and study Outcome Indistinguishability. Predictors that are Outcome Indistinguishable yield a model of probabilities that cannot be efficiently refuted on the basis of the real-life observations produced by Nature.
The talk will be self-contained and will explain the relevant complexity-theoretic and algorithmic-fairness literatures in which this work is grounded. Our focus will be on the insights that can be drawn from this work, such as providing scientific grounds for the political argument that, when inspecting algorithmic risk prediction instruments, auditors should be granted oracle access to the algorithm, not simply its historical predictions.
Based on joint research with Cynthia Dwork, Michael Kim, Guy Rothblum, and Gal Yona.
Feb 8: Heather Lynch, Ph.D. (Stony Brook University)
Feb 22: Munmun De Choudhury, Ph.D. (Georgia Institute of Technology)
Mar 8: Courtney Cogburn, Ph.D. (Columbia University)
Mar 15: Lauren Wilcox, Ph.D. (Georgia Institute of Technology, Wellbeing Lab at Google)
Mar 22: Tiffany Veinot, MLS, Ph.D. (University of Michigan)
Mar 29: Kush Varshney, Ph.D. (Thomas J. Watson Research Center, IBM)
Apr 5: Christopher Le Dantec, Ph.D. (Georgia Institute of Technology)
Apr 12: Nyalleng Moorosi (Google AI)
Apr 19: Michael J. Mina, M.D., Ph.D. (Harvard T.H. Chan School of Public Health, Harvard Medical School)
Danielle Belgrave (seminar on January 25, 2021)
Principal Research Manager, Microsoft Research Cambridge (UK)
Danielle Belgrave is a machine learning researcher in the Healthcare Intelligence group at Microsoft Research Cambridge (UK), where she works on Project Talia. Her research focuses on integrating medical domain knowledge, probabilistic graphical modelling, and causal modelling frameworks to help develop personalised treatment and intervention strategies for mental health. Mental health presents one of the most challenging and under-investigated domains of machine learning research. In Project Talia, she and her team explore how a human-centric approach to machine learning can meaningfully assist in the detection, diagnosis, monitoring, and treatment of mental health problems. She obtained a BSc in Mathematics and Statistics from the London School of Economics, an MSc in Statistics from University College London, and a PhD in machine learning for health applications from the University of Manchester. Prior to joining Microsoft, she was a tenured Research Fellow at Imperial College London.
Omer Reingold (seminar on February 1, 2021)
Professor of computer science at Stanford University and the director of the Simons Collaboration on the Theory of Algorithmic Fairness
Omer Reingold is the Rajeev Motwani Professor of Computer Science at Stanford University and the director of the Simons Collaboration on the Theory of Algorithmic Fairness. Past positions include Samsung Research America, the Weizmann Institute of Science, Microsoft Research, the Institute for Advanced Study in Princeton, NJ, and AT&T Labs. His research is in the foundations of computer science, most notably in computational complexity, cryptography, and the societal impact of computation. He is an ACM Fellow and a Simons Investigator. Among his distinctions are the 2005 Grace Murray Hopper Award and the 2009 Gödel Prize.