AI for Social Good papers

2021
Killian JA, Perrault A, Tambe M. Beyond “To Act or Not to Act”: Fast Lagrangian Approaches to General Multi-Action Restless Bandits. IJCAI 2021 Workshop on AI for Social Good, 2021.
We present a new algorithm and theoretical results for solving Multi-action Multi-armed Restless Bandits, an important but insufficiently studied generalization of traditional Multi-armed Restless Bandits (MARBs). Multi-action MARBs are capable of handling critical problem complexities often present in AI4SG domains like anti-poaching and healthcare that traditional MARBs fail to capture. Limited previous work on Multi-action MARBs has been specialized to sub-problems. Here we derive BLam, an algorithm for general Multi-action MARBs that uses Lagrangian relaxation techniques and convexity to quickly converge to good policies via bound optimization. We also provide experimental results comparing BLam to baselines on simulated distributions motivated by a real-world community health intervention task, achieving up to five-fold speedups over more general methods without sacrificing performance.
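The Lagrangian idea behind this line of work can be illustrated with a minimal sketch: relax the per-step budget on action costs with a multiplier λ, which decouples the arms; each arm is then solved independently by value iteration, and the resulting bound is convex in λ, so a simple search over λ ≥ 0 tightens it. This is a generic textbook-style decoupling for intuition only, not the authors' BLam algorithm; all names and the toy dynamics below are illustrative assumptions.

```python
import numpy as np

def arm_value(P, r, costs, lam, gamma=0.95, iters=500):
    """Value iteration for one arm with action costs penalized by lam.

    P: (A, S, S) transition matrices, r: (S,) state rewards,
    costs: (A,) per-action costs.
    """
    S = P.shape[1]
    V = np.zeros(S)
    for _ in range(iters):
        # Q[a, s] = r[s] - lam * c[a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = r[None, :] - lam * costs[:, None] + gamma * (P @ V)
        V = Q.max(axis=0)  # each arm acts greedily in the relaxed problem
    return V

def lagrangian_bound(arms, budget, lam, gamma=0.95, init=0):
    """Upper bound on the coupled problem: decoupled arm values plus
    a credit of lam * budget per step, summed over the discounted horizon."""
    total = sum(arm_value(P, r, c, lam, gamma)[init] for P, r, c in arms)
    return total + lam * budget / (1 - gamma)
```

Because the bound is convex in λ, minimizing it over a coarse grid (or by ternary search) already gives the tightest value this relaxation can offer.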
Sarkar R, Mahinder S, Sarkar H, KhudaBukhsh AR. Social Media Attributions in the Context of Water Crisis. IJCAI 2021 Workshop on AI for Social Good, 2021.
Attribution of natural disasters and collective misfortunes is a widely studied social science problem. At present, most such studies rely on surveys or external signals such as voting outcomes. Typically, these surveys are costly to conduct and often have considerable turnaround time. In contrast, procuring social media data is vastly cheaper and can be obtained at varying spatiotemporal granularity. In this paper, we describe our recent work that looked into the viability of estimating attributions through social media discussions. To this end, we (1) focus on the 2019 Chennai water crisis, a major instance of a recent environmental resource crisis; (2) construct a substantial corpus of 72,098 YouTube comments posted by 43,859 users on 623 videos relevant to the crisis; (3) define a novel natural language processing task of attribution tie detection; and (4) design a neural classifier that achieves reasonable performance. We also release the first data set for this novel task and important domain.
Gupta U, Ferber A, Dilkina B, Steeg GV. Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation. IJCAI 2021 Workshop on AI for Social Good, 2021.
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain biases as their efficacy is tied to the complexity of the adversary used during training. To this end, we theoretically establish that by limiting the mutual information between representations and protected attributes, we can assuredly control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators and show that it outperforms other existing approaches. We test our approach on UCI Adult and Heritage Health datasets and show that our approach provides more informative representations across a range of desired parity thresholds while providing strong theoretical guarantees on the parity of any downstream algorithm.
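The two quantities this abstract links — the parity of a downstream classifier and the mutual information I(Z; G) between a representation Z and a protected attribute G — can both be estimated from data. The sketch below uses a simple histogram plug-in MI estimate rather than the paper's contrastive estimators, purely to make the connection concrete; the function names and binning choice are illustrative assumptions.

```python
import numpy as np

def parity_gap(yhat, group):
    """Statistical parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    yhat, group = np.asarray(yhat), np.asarray(group)
    return abs(yhat[group == 0].mean() - yhat[group == 1].mean())

def plugin_mi(z, group, bins=10):
    """Histogram plug-in estimate of I(Z; G) in nats, for 1-D Z and binary G."""
    z, group = np.asarray(z), np.asarray(group)
    edges = np.histogram_bin_edges(z, bins=bins)
    joint = np.stack([np.histogram(z[group == g], bins=edges)[0]
                      for g in (0, 1)]).astype(float)
    joint /= joint.sum()                      # joint p(g, z-bin)
    pz, pg = joint.sum(axis=0), joint.sum(axis=1)
    mask = joint > 0
    return float((joint[mask] *
                  np.log(joint[mask] / np.outer(pg, pz)[mask])).sum())
```

If Z is independent of G the estimate is near zero and any classifier built on Z has small parity gap; a Z that leaks G drives I(Z; G) toward ln 2, which is the regime the paper's bound is designed to control.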
Choi B, Kamalu J. Crowd-Sourced Road Quality Mapping in the Developing World. IJCAI 2021 Workshop on AI for Social Good, 2021.
Road networks are among the most essential components of a country’s infrastructure. By facilitating the movement and exchange of goods, people, and ideas, they support economic and cultural activity both within and across borders. Up-to-date mapping of the geographical distribution of roads and their quality is essential in high-impact applications ranging from land use planning to wilderness conservation. Mapping presents a particularly pressing challenge in developing countries, where documentation is poor and disproportionate amounts of road construction are expected to occur in the coming decades. We present a new crowd-sourced approach capable of assessing road quality and identify key challenges and opportunities in the transferability of deep learning based methods across domains.
Tolkova I. Feature Representations for Conservation Bioacoustics: Review and Discussion. IJCAI 2021 Workshop on AI for Social Good, 2021.
Acoustic analysis is becoming a key element of environmental monitoring for wildlife conservation. Passive acoustic recorders can document a variety of vocal animals over large areas and long time horizons, paving the path for machine learning algorithms to identify individual species, estimate abundance, and evaluate ecosystem health. However, such techniques rely on finding meaningful characterizations of calls and soundscapes, capable of capturing complex spatiotemporal, taxonomic, and behavioral structure. This article reviews existing methods for computing informative lower-dimensional features in the context of terrestrial passive acoustic monitoring, and discusses directions for further work.
Kung C, Yu R. Interpretable Models Do Not Compromise Accuracy or Fairness in Predicting College Success. IJCAI 2021 Workshop on AI for Social Good, 2021.
The presence of “big data” in higher education has led to the increasing popularity of predictive analytics for guiding various stakeholders on appropriate actions to support student success. In developing such applications, model selection is a central issue. As such, this study presents a comprehensive examination of five commonly used machine learning models in student success prediction. Using administrative and learning management system (LMS) data for nearly 2,000 college students at a public university, we employ the models to predict short-term and long-term academic success. Beyond the tradeoff between model interpretability and accuracy, we also focus on the fairness of these models with regard to different student populations. Our findings suggest that more interpretable models such as logistic regression do not necessarily compromise predictive accuracy. Also, they lead to no more, and sometimes less, prediction bias against disadvantaged student groups than complicated models. Moreover, prediction biases against certain groups persist even in the fairest model. These results thus recommend using simpler algorithms in conjunction with human evaluation in instructional and institutional applications of student success prediction when valid student features are in place.
Khadilkar K, KhudaBukhsh AR. An Unfair Affinity Toward Fairness: Characterizing 70 Years of Social Biases in BHollywood. IJCAI 2021 Workshop on AI for Social Good, 2021.
Bollywood, aka the Mumbai film industry, is one of the biggest movie industries in the world. With a current market worth 2.1 billion dollars and a target audience base of 1.2 billion people, Bollywood is a formidable entertainment force. While the entertainment impact in terms of lives that Bollywood can potentially touch is mammoth, no NLP study on social biases in Bollywood content exists. We thus seek to understand social biases in a developing country through the lens of popular movies. Our argument is simple – popular movie content reflects social norms and beliefs in some form or shape. We present our preliminary findings on a longitudinal corpus of English subtitles of popular Bollywood movies focusing on (1) social bias toward fair skin color, (2) gender biases, and (3) gender representation. We contrast our findings with a similar corpus of Hollywood movies.
Gerani S, Tissot R, Ying A, Redmon J, Rimando A, Hun R. Reducing suicide contagion effect by detecting sentences from media reports with explicit methods of suicide. IJCAI 2021 Workshop on AI for Social Good, 2021.
Research has shown that suicide rates can increase by 13% when suicide is not reported responsibly. For example, irresponsible reporting includes specific details that depict suicide methods. To promote more responsible journalism to save lives, we propose a novel problem called “suicide method detection”, which determines if a sentence in a news article contains a description of a suicide method. Our results show two promising approaches: a rule-based approach using category pattern matching and a BERT model with data augmentation, both of which reach over 0.9 in F-measure.
Leung R. How can computer vision widen the evidence base around on-screen representation. IJCAI 2021 Workshop on AI for Social Good, 2021.
There is strong demand for more complete and better data around diversity in the screen industries. Focusing on on-screen diversity and representation in the UK, the evidence base around representation on-screen has been narrow so far. Diversity evaluation needs to consider more than on-screen presence – it should also consider prominence and portrayal. In this position paper, the ethics of applying computer vision to study on-screen characters is discussed via a conceptual framework of on-screen diversity metrics. Computer vision should be applied to identify character occurrences, rather than demographic classification. An illustrative example of measuring character prominence using a short video clip is shown. The paper concludes with four areas of application where adopting computational methods can create a measurably more inclusive and representative broadcast landscape.
Morrison K. Reducing Discrimination in Learning Algorithms for Social Good in Sociotechnical Systems. IJCAI 2021 Workshop on AI for Social Good, 2021.
Sociotechnical systems within cities are now equipped with machine learning algorithms in hopes of increasing efficiency and functionality by modeling and predicting trends. Machine learning algorithms have been applied in these domains to address challenges such as balancing the distribution of bikes throughout a city and identifying demand hotspots for ride sharing drivers. However, these algorithms applied to challenges in sociotechnical systems have exacerbated social inequalities due to previous bias in data sets or the lack of data from marginalized communities. In this paper, I will address how smart mobility initiatives in cities use machine learning algorithms to address challenges. I will also address how these algorithms unintentionally discriminate against features such as socioeconomic status to motivate the importance of algorithmic fairness. Using the bike sharing program in Pittsburgh, PA, I will present a position on how discrimination can be eliminated from the pipeline using Bayesian Optimization.
Oyewusi WF, Adekanmbi O, Akinsande O. Semantic Enrichment of Nigerian Pidgin English for Contextual Sentiment Classification. IJCAI 2021 Workshop on AI for Social Good, 2021.
Nigeria's English adaptation, Pidgin, has evolved over the years through multi-language code switching, code mixing, and linguistic adaptation. While Pidgin preserves many of the words in the normal English language corpus, both in spelling and pronunciation, the fundamental meaning of these words has changed significantly. For example, ‘ginger’ is not a plant but an expression of motivation, and ‘tank’ is not a container but an expression of gratitude. The implication is that the current approach of using direct English sentiment analysis on social media text from Nigeria is sub-optimal, as it will not capture the semantic variation and contextual evolution in the contemporary meaning of these words. In practice, while many words in the Nigerian Pidgin adaptation are the same as in standard English, sentiment analysis models based on full English are not designed to capture the full intent of Nigerian Pidgin when used alone or code-mixed. By augmenting scarce human-labelled code-changed text with ample synthetic code-reformatted text and meaning, we achieve significant improvements in sentiment scoring. Our research explores how to understand sentiment in an intrasentential code mixing and switching context where there has been significant word localization. This work presents 300 VADER-lexicon-compatible Nigerian Pidgin sentiment tokens with their scores, and 14,000 gold-standard Nigerian Pidgin tweets with their sentiment labels.
Foffano F, Scantamburlo T, Cortés A, Bissolo C. European Strategy on AI: Are we truly fostering social good?. IJCAI 2021 Workshop on AI for Social Good, 2021.
Artificial intelligence (AI) is already part of our daily lives and is playing a key role in defining the economic and social shape of the future. In 2018, the European Commission introduced its AI strategy, intended to compete in the coming years with world powers such as China and the US while relying on respect for European values and fundamental rights. As a result, most of the Member States have published their own National Strategy with the aim of working on a coordinated plan for Europe. In this paper, we present an ongoing study on how European countries are approaching the field of Artificial Intelligence, with its promises and risks, through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans can contribute to the benefit of the whole society. This paper reports the main findings of a qualitative analysis of the investment plans reported in 15 European National Strategies.
Abebe R, Ikeokwu C, Taggart S. Robust Welfare Guarantees for Decentralized Credit Organizations. IJCAI 2021 Workshop on AI for Social Good, 2021.
Rotating savings and credit associations (roscas) are informal financial organizations common in settings where communities have reduced access to formal financial institutions. In a rosca, a fixed group of participants regularly contribute small sums of money to a pool. This pool is then allocated periodically using lotteries or auction mechanisms. Roscas are empirically well-studied in the development economics literature. Due to their dynamic nature, however, roscas have proven challenging to examine theoretically. Theoretical analyses within economics have made strong assumptions about features such as the number or homogeneity of participants, the information they possess, their value for saving across time, or the number of rounds. This work presents an algorithmic study of roscas. We use techniques from the price of anarchy in auctions to characterize their welfare properties under less restrictive assumptions than previous work. Using the smoothness framework of [Syrgkanis and Tardos, 2013], we show that most common auction-based roscas have equilibrium welfare within a constant factor of the best possible. This evidence further rationalizes these organizations’ prevalence. Roscas present many further questions where algorithmic game theory may be helpful; we discuss several promising directions.
Haqbeen J, Ito T, Sahab S. AI-based Mediation Improves Opinion Solicitation in a Large-scale Online Discussion: Experimental evidence from Kabul Municipality. IJCAI 2021 Workshop on AI for Social Good, 2021.
We present a large-scale case study using an agent platform that facilitated and gathered public opinions in an internet-based town discussion. We test the hypothesis that agent-mediated argumentative messages shape the discussion structure in “issue-giving” and “issue-solving” themed discussions involving human participants. The agent's facilitation mechanism dynamically reacts to participants, moderating and supporting them from an “issue-solving” stance in both discussion types. We conducted two large-scale experiments to evaluate the influence of agent mediation across both discussion themes. The first experiment, themed “issue-giving”, involved 188 participants; the second, themed “issue-solving”, involved 1,076 citizens from Afghanistan. The goal of the first experiment was to contribute insights about the scale of the issues residents face in districts 1 and 2; the goal of the second was to contribute insights about the scale of issues and their solutions. In the first experiment, because participants began from the “issue-giving” stance, their first posts were issues, so the theme increased the number of issues raised; once the agent began posting facilitation messages, however, participants' stance shifted from issue-giving to issue-solving. In the second experiment, participants' stance remained aligned with the theme type throughout.
Lawless C, Günlük O. Fair and Interpretable Decision Rules for Binary Classification. IJCAI 2021 Workshop on AI for Social Good, 2021.
In this paper we consider the problem of building Boolean rule sets in disjunctive normal form (DNF), an interpretable model for binary classification, subject to fairness constraints. We formulate the problem as an integer program that maximizes classification accuracy with explicit constraints on two different measures of classification parity: equality of opportunity, and equalized odds. A column generation framework, with a novel formulation, is used to efficiently search over exponentially many possible rules, eliminating the need for heuristic rule mining. Compared to CART and Logistic Regression, two interpretable machine learning algorithms, our method produces interpretable classifiers that have superior performance with respect to both fairness metrics.
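The two parity measures named in this abstract have standard definitions that are easy to compute: equality of opportunity compares true-positive rates across groups, and equalized odds additionally compares false-positive rates. The sketch below evaluates these gaps for any classifier's predictions; it is a generic illustration of the constraint quantities, not the paper's column-generation integer program, and the function names are assumptions.

```python
import numpy as np

def _rate(yhat, mask):
    """Mean prediction over a subgroup; 0.0 if the subgroup is empty."""
    return yhat[mask].mean() if mask.any() else 0.0

def fairness_violations(yhat, y, group):
    """Gaps used as fairness constraints for binary classification:
    equality of opportunity = |TPR_0 - TPR_1|,
    equalized odds = max(TPR gap, FPR gap)."""
    yhat, y, group = map(np.asarray, (yhat, y, group))
    tpr = [_rate(yhat, (y == 1) & (group == g)) for g in (0, 1)]
    fpr = [_rate(yhat, (y == 0) & (group == g)) for g in (0, 1)]
    tpr_gap, fpr_gap = abs(tpr[0] - tpr[1]), abs(fpr[0] - fpr[1])
    return {"equal_opportunity": tpr_gap,
            "equalized_odds": max(tpr_gap, fpr_gap)}
```

In the paper's formulation these gaps appear as explicit constraints (bounded by a tolerance) in the integer program, rather than being measured after the fact as here.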
Wang L, Ben X, Adiga A, Sadilek A, Tendulkar A, Venkatramanan S, Vullikanti A. Using Mobility Data to Understand and Forecast COVID19 Dynamics. IJCAI 2021 Workshop on AI for Social Good, 2021.
Disease dynamics, human mobility, and public policies co-evolve during a pandemic such as COVID-19. Understanding dynamic human mobility changes and spatial interaction patterns is crucial for understanding and forecasting COVID-19 dynamics. We introduce a novel graph-based neural network (GNN) to incorporate global aggregated mobility flows for a better understanding of the impact of human mobility on COVID-19 dynamics as well as better forecasting of disease dynamics. We propose a recurrent message passing graph neural network that embeds spatio-temporal disease dynamics and human mobility dynamics for daily state-level new confirmed case forecasting. This work represents one of the early papers on the use of GNNs to forecast COVID-19 incidence dynamics, and our methods are competitive with existing methods. We show that the spatial and temporal dynamic mobility graph leveraged by the graph neural network enables better long-term forecasting performance compared to baselines.
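The core mechanism — message passing over a mobility-weighted graph, rolled forward in time — can be sketched in a few lines. This is a deliberately simplified NumPy illustration of the idea (flow-normalized neighbor aggregation, a learned linear map, a recurrent rollout), not the authors' architecture; the shapes, function names, and readout are assumptions.

```python
import numpy as np

def mobility_step(H, flows, W, eps=1e-9):
    """One message-passing step: aggregate neighbor states weighted by
    row-normalized mobility flows, then apply a linear map and ReLU.

    H: (N, d) node states, flows: (N, N) flow counts, W: (d, d) weights.
    """
    A = flows / (flows.sum(axis=1, keepdims=True) + eps)  # row-normalize flows
    return np.maximum(A @ H @ W, 0.0)

def forecast(H0, flows, W, w_out, steps=3):
    """Roll the recurrent update forward and read out one scalar per node
    (e.g. new cases) at each step."""
    H, preds = H0, []
    for _ in range(steps):
        H = mobility_step(H, flows, W)
        preds.append(H @ w_out)
    return np.stack(preds)  # (steps, N)
```

In the paper's setting the mobility graph itself changes over time; a time-indexed `flows[t]` inside the rollout loop would capture that, at the cost of needing flow data for each forecast step.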
Mishra H, Nerli NM, Soundarajan S. Keyword Recommendation for Fair Search. IJCAI 2021 Workshop on AI for Social Good, 2021.
Online search engines are an extremely popular tool for seeking information. However, the results returned sometimes exhibit undesirable or even wrongful forms of bias, such as with respect to gender or race. In this paper, we consider the problem of fair keyword recommendation, in which the goal is to suggest keywords that are relevant to a user’s search query, but exhibit less (or opposite) bias. We present a multi-objective method using word embedding to suggest alternate keywords for biased keywords present in a search query. We perform a qualitative analysis on pairs of subReddits from Reddit.com (e.g., r/AskMen vs. r/AskWomen, r/Republican vs. r/democrats). Our results demonstrate the efficacy of the proposed method and illustrate subtle linguistic differences between subReddits.
Ferber A, Gupta U, Steeg GV, Dilkina B. Differentiable Optimal Adversaries for Learning Fair Representations. IJCAI 2021 Workshop on AI for Social Good, 2021.
Fair representation learning is an important task in many real-world domains, with the goal of finding a performant model that obeys fairness requirements. We present an adversarial representation learning algorithm that learns an informative representation while not exposing sensitive features. Our goal is to train an embedding such that it has good performance on a target task while not exposing sensitive information as measured by the performance of an optimally trained adversary. Our approach directly trains the embedding with these dual objectives in mind by implicitly differentiating through the optimal adversary’s training procedure. To this end, we derive implicit gradients of the optimal logistic regression parameters with respect to the input training embeddings, and use the fully-trained logistic regression as an adversary. As a result, we are able to train a model without alternating min max optimization, leading to better training stability and improved performance. Given the flexibility of our module for differentiable programming, we evaluate the impact of using implicit gradients in two adversarial fairness-centric formulations. We present quantitative results on the trade-offs of target and fairness tasks in several real-world domains.
Biswas A, Aggarwal G, Varakantham P, Tambe M. Learning Restless Bandits in Application to Call-based Preventive Care Programs for Maternal Healthcare. IJCAI 2021 Workshop on AI for Social Good, 2021.
This paper focuses on learning index-based policies in restless multi-armed bandits (RMAB) with applications to public health concerns such as maternal health. Maternal health is a very important public health concern. It refers to the health of women during their pregnancy, childbirth, and the postnatal period. Although maternal health has received significant attention [World Health Organization, 2015], the number of maternal deaths remains unacceptably high, mainly because of the delay in obtaining adequate care [Thaddeus and Maine, 1994]. Most maternal deaths can be prevented by providing timely preventive care information. However, such information is not easily accessible by underprivileged and low-income communities. For ensuring timely information, a non-profit organization, called ARMMAN [2015], carries out a free call-based program called mMitra for spreading preventive care information among pregnant women. Enrollment in this program happens through hospitals and non-government organizations. Each enrolled woman receives around 140 automated voice calls, throughout her pregnancy period and up to 12 months after childbirth. Each call equips women with critical life-saving healthcare information. This program provides support for around 80 weeks. To achieve the vision of improving the well-being of the enrolled women, it is important to ensure that they listen to most of the information sent to them via the automated calls. However, the organization observed that, for many women, their engagement (i.e., the overall time they spend listening to the automated calls) gradually decreases. One way to improve their engagement is by providing an intervention (which would involve a personal visit by a healthcare worker). These interventions require the dedicated time of the health workers, which is often limited. Thus, only a small fraction of the overall enrolled women can be provided with interventions during a time period.
Moreover, the extent to which the engagement improves upon intervention varies among individuals. Hence, it is important to carefully choose the beneficiaries who should be provided interventions at a particular time period. This is a challenging problem owing to multiple key reasons: (i) Engagement of the individual beneficiaries is uncertain and changes organically over time; (ii) Improvement in the engagement of a beneficiary post-intervention is uncertain; (iii) Decision making with respect to interventions (which beneficiaries should receive an intervention) is sequential, i.e., decisions at a step have an impact on the state of beneficiaries and decisions to be taken at the next step; (iv) The number of interventions is budgeted and is significantly smaller than the total number of beneficiaries. Due to the uncertainty, sequential nature of decision making, and weak dependency amongst patients through a budget, existing research [Lee et al., 2019; Mate et al., 2020; Bhattacharya, 2018] in health interventions has justifiably employed RMABs. However, existing research focuses on the planning problem assuming a priori knowledge of the underlying uncertainty model, which can be quite challenging to obtain. Thus, we focus on learning intervention decisions in the absence of knowledge of the underlying uncertainty.
Agarwal S. Trade-Offs between Fairness and Interpretability in Machine Learning. IJCAI 2021 Workshop on AI for Social Good, 2021.
In this work, we look at cases where we want a classifier to be both fair and interpretable, and find that it is necessary to make trade-offs between these two properties. We have theoretical results to demonstrate this tension between the two requirements. More specifically, we consider a formal framework to build simple classifiers as a means to attain interpretability, and show that simple classifiers are strictly improvable, in the sense that every simple classifier can be replaced by a more complex classifier that strictly improves both fairness and accuracy.
