IJCAI 2020 AI for Social Good workshop

Date: 

Friday, January 8, 2021 (All day)

Location: 

This workshop is entirely virtual

Join us at IJCAI on January 8 for the AI for Social Good workshop. This event will explore how artificial intelligence can contribute to solving social problems and will bring together researchers and practitioners across artificial intelligence and a range of application domains.

Call for Papers | Accepted Papers | Invited Speakers | Organization | Program | Travel Awards

Sign up for the workshop mailing list here.

Artificial intelligence is poised to play an increasingly large role in societies across the world. Accordingly, there is a growing interest in ensuring that AI is used in a responsible and beneficial manner. A range of perspectives and contributions are needed, spanning the full spectrum from fundamental research to sustained deployments.

This workshop will explore how artificial intelligence can contribute to solving social problems. For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can AI initiatives be deployed in an ethical, inclusive, and accountable manner? To address such questions, this workshop will bring together researchers and practitioners across artificial intelligence and a range of application domains. The objective is to share the current state of research and practice, explore directions for future work, and create opportunities for collaboration.

Such questions are particularly salient in light of the COVID-19 pandemic. AI has an important role to play in providing insight into the course of the epidemic and in developing targeted responses; we encourage submissions from AI researchers as well as from epidemiologists, health policy researchers, and other domain experts who are interested in engaging with the AI community.

Call for Papers

The IJCAI 2020 Workshop on AI for Social Good will explore how artificial intelligence can contribute to solving social problems. For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can AI initiatives be deployed in an ethical, inclusive, and accountable manner? To address such questions, this workshop will bring together researchers and practitioners across artificial intelligence and a range of application domains. The objective is to share the current state of research and practice, explore directions for future work, and create opportunities for collaboration. The workshop will feature a mix of invited talks, contributed talks, and posters. Submissions spanning the full range of theoretical and applied work are encouraged. Topics of interest include, but are not limited to:

  • Democracy
  • Developing world
  • Health
  • Environmental sustainability
  • Ethics
  • Fairness and biases

Such questions are particularly salient in light of the COVID-19 pandemic. AI has an important role to play in providing insight into the course of the epidemic and in developing targeted responses; we encourage submissions from AI researchers as well as from epidemiologists, health policy researchers, and other domain experts who are interested in engaging with the AI community.

Submissions are due October 16th, AoE, via EasyChair. We solicit papers in two categories:

  • Research papers describing novel contributions in either the development of AI techniques (motivated by societal applications), or their deployment in practice. Both work in progress and recently published work will be considered. Submissions describing recently published work should clearly indicate the earlier venue and provide a link to the published paper. Papers in this category should be at most 4 pages, with unlimited additional pages containing only references.

  • Position papers describing open problems or neglected perspectives on the field, proposing ideas for bringing computational methods into a new application area, or summarizing the focus areas of a group working on AI for social good. Papers in this category should be at most 3 pages, with unlimited additional pages containing only references.

All papers should be submitted in IJCAI format. The workshop will not have formal published proceedings, but we will provide links to accepted papers along with the program. Accepted papers will be selected for oral or poster presentation based on peer review. Reviewing is not double-blind; submissions should include author names and affiliations.

Accepted Papers

Lynn Miller, Christoph Rüdiger and Geoffrey Webb. Using AI and Satellite Earth Observation to Monitor UN Sustainable Development Indicators.

Masoomali Fatehkia, Benjamin Coles, Ferda Ofli and Ingmar Weber. The Relative Value of Facebook Advertising Data for Poverty Mapping.

Robin Burke, Amy Voida, Nicholas Mattei, Nasim Sonboli and Farzad Eskandanian. Algorithmic Fairness, Institutional Logics, and Social Choice.

Zhigang Zhu, Vishnu Nair, Greg Olmschenk and Bill Seiple. ASSIST: Assistive Sensor Solutions for Independent and Safe Travel of Blind and Visually Impaired People.

Vinith Suriyakumar, Nicolas Papernot, Anna Goldenberg and Marzyeh Ghassemi. Challenges of Differentially Private Prediction in Healthcare Settings.

Xingyu Chen and Zijie Liu. The Fairness of Leximin in Allocation of Indivisible Chores.

Ancil Crayton, João Fonseca, Kanav Mehra, Michelle Ng, Jared Ross, Marcelo Sandoval-Castañeda and Rachel von Gnechten. Narratives and Needs: Analyzing Experiences of Cyclone Amphan Using Twitter Discourse.

Charles Kantor, Brice Rauby, Léonard Boussioux, Marta Skreta, Emmanuel Jehanno, Alexandra Luccioni, David Rolnick and Hugues Talbot. Geo-Spatiotemporal Features and Shape-Based Prior Knowledge for Fine-Grained Classification.

Satyam Mohla, Bishnupriya Bagh and Anupam Guha. A Material Lens to Investigate the Gendered Impact of the AI Industry.

Junpei Komiyama and Shunya Noda. On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach.

Jonathan Scarlett, Nicholas Teh and Yair Zick. For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods.

Yu Liang and Amulya Yadav. Efficient COVID-19 Testing Using POMDPs.

Deepak P, Sanil V and Joemon Jose. On Fairness and Interpretability.

Selam Waktola, Francesca Castagnoli, Marlies Nowee, Edwin Jansen and Regina Beets-Tan. Predicting radiation therapy outcome for liver colorectal cancer by using machine learning.

Prathik Rao, Bistra Dilkina, Jordan P. Davis and John Prindle. Predicting Relapse Rates of Patients in Treatment for Substance Abuse via Survival Analysis (In Progress).

Helen Smith. Artificial Intelligence to Inform Clinical Decision Making: A Practical Solution to An Ethical And Legal Challenge.

Sushant Agarwal. Trade-Offs between Fairness and Privacy in Machine Learning.

Sushant Agarwal. Trade-Offs between Fairness and Interpretability in Machine Learning.

Arpita Biswas, Gaurav Aggarwal, Pradeep Varakantham and Milind Tambe. Learning Restless Bandits in Application to Call-based Preventive Care Programs for Maternal Healthcare.

Aaron Ferber, Umang Gupta, Greg Ver Steeg and Bistra Dilkina. Differentiable Optimal Adversaries for Learning Fair Representations.

Harshit Mishra, Namrata Madan Nerli and Sucheta Soundarajan. Keyword Recommendation for Fair Search.

Lijing Wang, Aniruddha Adiga, Adam Sadilek, Xue Ben, Ashish Tendulkar, Srinivasan Venkatramanan, Anil Vullikanti, Gaurav Aggarwal, Alok Talekar, Jiangzhuo Chen, Bryan Lewis, Samarth Swarup, Amol Kapoor, Milind Tambe and Madhav Marathe. Using Mobility Data to Understand and Forecast COVID19 Dynamics.

Connor Lawless and Oktay Gunluk. Fair and Interpretable Decision Rules for Binary Classification.

Jawad Haqbeen, Takayuki Ito and Sofia Sahab. AI-based Mediation Improves Opinion Solicitation in a Large-scale Online Discussion: Experimental evidence from Kabul Municipality.

Rediet Abebe, Christian Ikeokwu and Sam Taggart. Robust Welfare Guarantees for Decentralized Credit Organizations.

Francesca Foffano, Teresa Scantamburlo, Atia Cortés and Chiara Bissuolo. European Strategy on AI: Are we truly fostering social good?

Wuraola Oyewusi, Olubayo Adekanmbi and Olalekan Akinsande. Semantic Enrichment of Nigerian Pidgin English for Contextual Sentiment Classification.

Katelyn Morrison. Reducing Discrimination in Learning Algorithms for Social Good in Sociotechnical Systems.

Raphael Leung. How can computer vision widen the evidence base around on-screen representation.

Shima Gerani, Raphael Tissot, Annie Ying, Jennifer Redmon, Artemio Rimando and Riley Hun. Reducing suicide contagion effect by detecting sentences from media reports with explicit methods of suicide.

Kunal Khadilkar and Ashiqur Khudabukhsh. An Unfair Affinity Toward Fairness: Characterizing 70 Years of Social Biases in B^Hollywood.

Catherine Kung and Renzhe Yu. Interpretable Models Do Not Compromise Accuracy or Fairness in Predicting College Success.

Irina Tolkova. Feature Representations for Conservation Bioacoustics: Review and Discussion.

Rediet Abebe, Jon Kleinberg and Andrew Wang. Understanding and Measuring Income Shocks as Precursors to Poverty.

Benjamin Choi and John Kamalu. Crowd-Sourced Road Quality Mapping in the Developing World.

Shagun Gupta, Marie Charpignon, Jessica Malaty Rivera, Divya Ramjee, Sara Mannan, Angel Desai and Maimuna Majumder. Assessing the Influence of Political Leaning on COVID-19 Media Coverage: A Sentiment Analysis and Topic Modeling Study.

Umang Gupta, Aaron Ferber, Bistra Dilkina and Greg Ver Steeg. Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation.

Rupak Sarkar, Sayantan Mahinder, Hirak Sarkar and Ashiqur Khudabukhsh. Social Media Attributions in the Context of Water Crisis.

Invited Speakers

Joanna Bryson (Hertie School of Governance)

What Is Good? Social Impacts and Digital Governance

Bio: Joanna Bryson is Professor of Ethics and Technology at the Hertie School of Governance in Berlin, recognised for her broad expertise on intelligence, its nature, and its consequences. She advises governments, transnational agencies, and NGOs globally, particularly on AI policy. Most recently, she was selected to represent Germany at the Global Partnership on AI. She holds two degrees each in psychology and AI (BA Chicago, MSc & MPhil Edinburgh, PhD MIT). Her work has appeared in venues ranging from Reddit to the journal Science. From 2002 to 2019 she was Computer Science faculty at the University of Bath; she has also been affiliated with Harvard Psychology, Oxford Anthropology, the Mannheim Centre for Social Science Research, the Konrad Lorenz Institute for Evolution and Cognition Research, and the Princeton Center for Information Technology Policy. During her PhD she first observed the confusion generated by anthropomorphised AI, leading to her first AI ethics publication, “Just Another Artifact”, in 1998. She has remained active in the field, including coauthoring the first national-level AI ethics policy, the UK’s (2011) Principles of Robotics. She continues to research both the systems engineering of AI and the cognitive science of intelligence, with present focuses on the impact of technology on human cooperation and new models of governance for AI and ICT.

Organization

Organizing Committee

  • Arpita Biswas (Indian Institute of Science)
  • Eric Horvitz (Microsoft Research)
  • Andrew Perrault* (Harvard University)
  • Sekou Remy (IBM Research Africa)
  • Sofia Segkouli (Information Technologies Institute, Thessaloniki, Greece)
  • Andreas Theodorou (Umeå University)
  • Bryan Wilder (Harvard University)

*primary contact: aperrault@g.harvard.edu

Travel Awards

This iteration of the workshop is entirely virtual and will not have a travel awards program.