IJCAI 2020 AI for Social Good workshop

Date: 

Thursday, January 7, 2021 (All day) to Friday, January 8, 2021 (All day)

Location: 

Yellow Wing—North 4

Join us at IJCAI on January 7 and 8 (Japan time) for the AI for Social Good workshop. This event will explore how artificial intelligence can contribute to solving social problems and will bring together researchers and practitioners across artificial intelligence and a range of application domains.

Session 1: 7 pm to midnight EST, January 7th. Session 2: 9 am to 1 pm UTC, January 8th.

Registration | Program | Invited speakers

Call for Papers | Organizing Committee | Travel Awards

Sign up for the mailing list for the workshop here.

Artificial intelligence is poised to play an increasingly large role in societies across the world. Accordingly, there is a growing interest in ensuring that AI is used in a responsible and beneficial manner. A range of perspectives and contributions are needed, spanning the full spectrum from fundamental research to sustained deployments.

This workshop will explore how artificial intelligence can contribute to solving social problems. For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can AI initiatives be deployed in an ethical, inclusive, and accountable manner? To address such questions, this workshop will bring together researchers and practitioners across artificial intelligence and a range of application domains. The objective is to share the current state of research and practice, explore directions for future work, and create opportunities for collaboration.

Such questions are particularly salient in light of the COVID-19 pandemic. AI has an important role to play in providing insight into the course of the epidemic and developing targeted responses; we encourage submissions from AI researchers as well as from epidemiologists, health policy researchers, and other domain experts who are interested in engaging with the AI community.

Program

January 7 session (EST time)

7:00pm–7:10pm EST: Opening remarks

7:10pm–8:10pm EST: Phebe Vayanos: Designing Robust, Interpretable, and Fair Social and Public Health Interventions

In recent decades, significant advances have been made in AI, ML, and optimization. Systems relying on these technologies are now being transitioned to the field, where they have the potential to exert tremendous influence on people and society. With the increase in the scale and diversity of algorithm-driven decisions deployed in the open world come several challenges, including the need for robustness, interpretability, and fairness, which are confounded by issues of data scarcity and bias, tractability, ethical considerations, and problems of shared responsibility between humans and algorithms. In this talk, we focus on the problems of homelessness and public health in low-resource and vulnerable communities and present research advances in AI, ML, and optimization that address one key cross-cutting question: how can we allocate scarce intervention resources in these domains while accounting for the challenges of open-world deployment? We will show concrete improvements over the state of the art in these domains based on real-world data and on recent deployments. We are convinced that, by pushing this line of research, AI, ML, and optimization can play a crucial role in helping fight injustice and solve complex problems facing our society.

8:10pm–9:10pm EST: Long talks

9:10pm–9:20pm EST: Short talks

9:20pm–9:30pm EST: Break

9:30pm–10:30pm EST: Long talks

10:30pm–11:00pm EST: Short talks

11:00pm–12:00am EST: Poster session: Map

January 8 session (UTC time)

9:00am–10:00am UTC: Teemu Roos: Experiences from the Elements of AI: an EU-wide AI Literacy Campaign

Have you ever run into misconceptions about AI and its limitations? Have these misconceptions ever become a bottleneck in developing AI solutions for social good? Have you ever thought about how much more we could do, together with domain experts, if people didn't think of AI as a mystical entity with superhuman intelligence? Most currently deployed AI solutions are based on well-established, relatively elementary algorithms (e.g., linear regression, nearest-neighbor classification, k-means clustering). The basic principles underlying these algorithms can be understood even without advanced mathematics or programming skills.
The Elements of AI is a series of online courses and a European Union-wide (and beyond) AI literacy campaign with over 600 000 registered users to date. We describe the motivation and design principles underlying the course and the campaign, their reception by users, and their impact. The key message is that in order to build inclusive and equitable AI for social good, we need to make sure anyone can understand the basic principles, and that this can be achieved in a scalable, engaging, and fun way.
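As a purely illustrative aside (not part of the talk or the Elements of AI course materials): a minimal sketch of one of the elementary algorithms mentioned above, a 1-nearest-neighbor classifier, written in a few lines of plain Python with made-up toy data.

    # Illustrative sketch: a 1-nearest-neighbor classifier.
    # The data below is invented for demonstration only.

    def nearest_neighbor_predict(train_points, train_labels, query):
        """Return the label of the training point closest to the query."""
        def squared_distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        distances = [squared_distance(p, query) for p in train_points]
        closest_index = distances.index(min(distances))
        return train_labels[closest_index]

    # Toy data: 2D points belonging to two classes.
    points = [(1.0, 1.0), (1.5, 2.0), (8.0, 8.0), (9.0, 7.5)]
    labels = ["A", "A", "B", "B"]

    print(nearest_neighbor_predict(points, labels, (2.0, 1.0)))  # -> "A"
    print(nearest_neighbor_predict(points, labels, (8.5, 8.0)))  # -> "B"

The point of the sketch is simply that the core idea (classify a new point by the label of its closest known example) fits in a dozen lines and requires no advanced mathematics.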

10:00am–10:45am UTC: Long talks

10:45am–11:00am UTC: Short talks

11:00am–11:10am UTC: Break

11:10am–11:40am UTC: Long talks

11:40am–12:00pm UTC: Short talks

12:00pm–1:00pm UTC: Poster session: Map

Invited speakers

Teemu Roos (University of Helsinki)

Teemu Roos (photo credit: Maarit Kytöharju) is the lead instructor of the Elements of AI online course, with over 580 000 students so far. The course has been ranked the world's best online course in AI and Computer Science (Class Central). He teaches data science and AI in the Master's Programme in Data Science at the University of Helsinki, and he is also the leader of the AI Education programme of the Finnish Center for AI. His research focuses on the theory and applications of machine learning.

Roos has held visiting positions at MIT, the University of California, Berkeley, and the University of Cambridge. He was named one of World Summit AI's Top 50 Innovators of 2020. He also serves on the steering group of the Government of Finland's AI 4.0 Programme and the advisory board of the 100 Brilliant Women in AI Ethics list, as well as the steering committee of Integrify, a Helsinki-based startup training immigrants as software developers and data scientists.

Phebe Vayanos (University of Southern California)

Phebe Vayanos is an Assistant Professor of Industrial & Systems Engineering and Computer Science at the University of Southern California. She is also an Associate Director of CAIS, the Center for Artificial Intelligence in Society, an interdisciplinary research initiative between the schools of Engineering and Social Work at USC. Her research focuses on Operations Research and Artificial Intelligence, in particular on optimization and machine learning. Her work is motivated by problems that are important for social good, such as those arising in public housing allocation, public health, and biodiversity conservation. Prior to joining USC, she was a lecturer in the Operations Research and Statistics Group at the MIT Sloan School of Management and a postdoctoral research associate in the Operations Research Center at MIT. She holds a PhD degree in Operations Research and an MEng degree in Electrical & Electronic Engineering, both from Imperial College London. She served as a member of the ad hoc INFORMS AI Strategy Advisory Committee and is an elected member of the Committee on Stochastic Programming (COSP). She is a recipient of the INFORMS Diversity, Equity, and Inclusion Ambassador Program Award.

Accepted Papers

Lynn Miller, Christoph Rüdiger and Geoffrey Webb. Using AI and Satellite Earth Observation to Monitor UN Sustainable Development Indicators.

Call for Papers

The IJCAI 2020 Workshop on AI for Social Good will explore how artificial intelligence can contribute to solving social problems. For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can AI initiatives be deployed in an ethical, inclusive, and accountable manner? To address such questions, this workshop will bring together researchers and practitioners across artificial intelligence and a range of application domains. The objective is to share the current state of research and practice, explore directions for future work, and create opportunities for collaboration. The workshop will feature a mix of invited talks, contributed talks, and posters. Submissions spanning the full range of theoretical and applied work are encouraged. Topics of interest include, but are not limited to:

  • Democracy
  • Developing world
  • Health
  • Environmental sustainability
  • Ethics
  • Fairness and biases

Such questions are particularly salient in light of the COVID-19 pandemic. AI has an important role to play in providing insight into the course of the epidemic and developing targeted responses; we encourage submissions from AI researchers as well as from epidemiologists, health policy researchers, and other domain experts who are interested in engaging with the AI community.

Submissions are due October 16th, AoE, via EasyChair. We solicit papers in two categories:

  • Research papers describing novel contributions in either the development of AI techniques (motivated by societal applications), or their deployment in practice. Both work in progress and recently published work will be considered. Submissions describing recently published work should clearly indicate the earlier venue and provide a link to the published paper. Papers in this category should be at most 4 pages, with unlimited additional pages containing only references.

  • Position papers describing open problems or neglected perspectives on the field, proposing ideas for bringing computational methods into a new application area, or summarizing the focus areas of a group working on AI for social good. Papers in this category should be at most 3 pages, with unlimited additional pages containing only references.

All papers should be submitted in IJCAI format. The workshop will not have a formal published proceedings, but we will provide links to accepted papers along with the program. Accepted papers will be selected for oral and poster presentation based on peer review. Submissions are not double-blind; the submitted paper should include author names and affiliations.

Accepted Papers

Jan 7, 7 pm EST session

Lijing Wang, Aniruddha Adiga, Adam Sadilek, Xue Ben, Ashish Tendulkar, Srinivasan Venkatramanan, Anil Vullikanti, Gaurav Aggarwal, Alok Talekar, Jiangzhuo Chen, Bryan Lewis, Samarth Swarup, Amol Kapoor, Milind Tambe and Madhav Marathe. Using Mobility Data to Understand and Forecast COVID19 Dynamics.

Shima Gerani, Raphael Tissot, Annie Ying, Jennifer Redmon, Artemio Rimando and Riley Hun. Reducing suicide contagion effect by detecting sentences from media reports with explicit methods of suicide.

Rediet Abebe, Jon Kleinberg and Andrew Wang. Understanding and Measuring Income Shocks as Precursors to Poverty.

Zhigang Zhu, Vishnu Nair, Greg Olmschenk and Bill Seiple. ASSIST: Assistive Sensor Solutions for Independent and Safe Travel of Blind and Visually Impaired People.

Vinith Suriyakumar, Nicolas Papernot, Anna Goldenberg and Marzyeh Ghassemi. Challenges of Differentially Private Prediction in Healthcare Settings.

Connor Lawless and Oktay Gunluk. Fair and Interpretable Decision Rules for Binary Classification.

Kunal Khadilkar and Ashiqur Khudabukhsh. An Unfair Affinity Toward Fairness: Characterizing 70 Years of Social Biases in BHollywood.

Benjamin Choi and John Kamalu. Crowd-Sourced Road Quality Mapping in the Developing World.

Robin Burke, Amy Voida, Nicholas Mattei, Nasim Sonboli and Farzad Eskandanian. Algorithmic Fairness, Institutional Logics, and Social Choice.

Catherine Kung and Renzhe Yu. Interpretable Models Do Not Compromise Accuracy or Fairness in Predicting College Success.

Xingyu Chen and Zijie Liu. The Fairness of Leximin in Allocation of Indivisible Chores.

Irina Tolkova. Feature Representations for Conservation Bioacoustics: Review and Discussion.

Yu Liang and Amulya Yadav. Efficient COVID-19 Testing Using POMDPs.

Jawad Haqbeen, Takayuki Ito and Sofia Sahab. AI-based Mediation Improves Opinion Solicitation in a Large-scale Online Discussion: Experimental evidence from Kabul Municipality.

Junpei Komiyama and Shunya Noda. On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach.

Harshit Mishra, Namrata Madan Nerli and Sucheta Soundarajan. Keyword Recommendation for Fair Search.

Rediet Abebe, Christian Ikeokwu and Sam Taggart. Robust Welfare Guarantees for Decentralized Credit Organizations.

Ancil Crayton, João Fonseca, Kanav Mehra, Michelle Ng, Jared Ross, Marcelo Sandoval-Castañeda and Rachel von Gnechten. Narratives and Needs: Analyzing Experiences of Cyclone Amphan Using Twitter Discourse.

Helen Smith. Artificial Intelligence to Inform Clinical Decision Making: A Practical Solution to An Ethical And Legal Challenge.

Prathik Rao, Bistra Dilkina, Jordan P. Davis and John Prindle. Predicting Relapse Rates of Patients in Treatment for Substance Abuse via Survival Analysis (In Progress).

Aaron Ferber, Umang Gupta, Greg Ver Steeg and Bistra Dilkina. Differentiable Optimal Adversaries for Learning Fair Representations.

Jan 8, 9 am UTC session

Umang Gupta, Aaron Ferber, Bistra Dilkina and Greg Ver Steeg. Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation.

Charles Kantor, Brice Rauby, Léonard Boussioux, Marta Skreta, Emmanuel Jehanno, Alexandra Luccioni, David Rolnick and Hugues Talbot. Geo-Spatiotemporal Features and Shape-Based Prior Knowledge for Fine-Grained Classification.

Sushant Agarwal. Trade-Offs between Fairness and Privacy in Machine Learning.

Sushant Agarwal. Trade-Offs between Fairness and Interpretability in Machine Learning.

Masoomali Fatehkia, Benjamin Coles, Ferda Ofli and Ingmar Weber. The Relative Value of Facebook Advertising Data for Poverty Mapping.

Francesca Foffano, Teresa Scantamburlo, Atia Cortés and Chiara Bissuolo. European Strategy on AI: Are we truly fostering social good?

Shagun Gupta, Marie Charpignon, Jessica Malaty Rivera, Divya Ramjee, Sara Mannan, Angel Desai and Maimuna Majumder. Assessing the Influence of Political Leaning on COVID-19 Media Coverage: A Sentiment Analysis and Topic Modeling Study. (To be published)

Deepak P, Sanil V and Joemon Jose. On Fairness and Interpretability.

Tine Kolenik and Matjaž Gams. Increasing Mental Health Care Access with Persuasive Technology for Social Good.

Rupak Sarkar, Sayantan Mahinder, Hirak Sarkar and Ashiqur Khudabukhsh. Social Media Attributions in the Context of Water Crisis.

Wuraola Oyewusi, Olubayo Adekanmbi and Olalekan Akinsande. Semantic Enrichment of Nigerian Pidgin English for Contextual Sentiment Classification.

Arpita Biswas, Gaurav Aggarwal, Pradeep Varakantham and Milind Tambe. Learning Restless Bandits in Application to Call-based Preventive Care Programs for Maternal Healthcare.

Satyam Mohla, Bishnupriya Bagh and Anupam Guha. A Material Lens to Investigate the Gendered Impact of the AI Industry.

Raphael Leung. How can computer vision widen the evidence base around on-screen representation.

Katelyn Morrison. Reducing Discrimination in Learning Algorithms for Social Good in Sociotechnical Systems.

Jonathan Scarlett, Nicholas Teh and Yair Zick. For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods.

Invited Speakers

Joanna Bryson (Hertie School of Governance)

What Is Good? Social Impacts and Digital Governance

Bio: Joanna Bryson is Professor of Ethics and Technology at the Hertie School of Governance in Berlin, recognised for broad expertise on intelligence, its nature, and its consequences. She advises governments, transnational agencies, and NGOs globally, particularly in AI policy. Most recently, she was selected to represent Germany at the Global Partnership on AI. She holds two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT). Her work has appeared in venues ranging from Reddit to the journal Science. From 2002 to 2019 she was on the Computer Science faculty at the University of Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, the Mannheim Centre for Social Science Research, the Konrad Lorenz Institute for Evolution and Cognition Research, and the Princeton Center for Information Technology Policy. During her PhD she first observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact”, in 1998. She has remained active in the field, including coauthoring the first national-level AI ethics policy, the UK’s (2011) Principles of Robotics. She continues to research both the systems engineering of AI and the cognitive science of intelligence, with present focusses on the impact of technology on human cooperation and on new models of governance for AI and ICT.

Organization

Organizing Committee

  • Arpita Biswas (Indian Institute of Science)
  • Eric Horvitz (Microsoft Research)
  • Andrew Perrault* (Harvard University)
  • Sekou Remy (IBM Research Africa)
  • Sofia Segkouli (Information Technologies Institute, Thessaloniki, Greece)
  • Andreas Theodorou (Umeå University)
  • Bryan Wilder (Harvard University)

*primary contact: aperrault@g.harvard.edu

Travel Awards

This iteration of the workshop is entirely virtual and will not have a travel awards program.