Publications

2021
P. D, V. S, Jose JM. On Fairness and Interpretability, in IJCAI 2021 Workshop on AI for Social Good; 2021.
Ethical AI spans a gamut of considerations. Among these, the most popular ones, fairness and interpretability, have remained largely distinct in technical pursuits. We discuss and elucidate the differences between fairness and interpretability across a variety of dimensions. Further, we develop two principles-based frameworks towards developing ethical AI for the future that embrace aspects of both fairness and interpretability. First, interpretability for fairness proposes instantiating interpretability within the realm of fairness to develop a new breed of ethical AI. Second, fairness and interpretability initiates deliberations on bringing the best aspects of both together. We hope that these two frameworks will contribute to intensifying scholarly discussions on new frontiers of ethical AI that brings together fairness and interpretability.
On Fairness and Interpretability
Liang Y, Yadav A. Efficient COVID-19 Testing Using POMDPs, in IJCAI 2021 Workshop on AI for Social Good; 2021.

A robust testing program is necessary for containing the spread of COVID-19 infections before a vaccine becomes available. However, due to an acute shortage of testing kits (especially in low-resource developing countries), designing an optimal testing program/strategy is a challenging problem to solve. Prior literature on testing strategies suffers from two major limitations: (i) it does not account for the trade-off between testing of symptomatic and asymptomatic individuals, and (ii) it primarily focuses on static testing strategies, which leads to significant shortcomings in the testing program's effectiveness. In this paper, we introduce a scalable Monte Carlo tree search based algorithm named DOCTOR, and use it to generate optimal testing strategies for COVID-19. In our experiments, DOCTOR's strategies result in ∼40% fewer COVID-19 infections (over one month) as compared to state-of-the-art static baselines. Our work complements the growing body of research on COVID-19, and serves as a proof-of-concept that illustrates the benefit of having an AI-driven adaptive testing strategy for COVID-19.

Efficient COVID-19 Testing Using POMDPs
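DOCTOR's adaptive strategies come from Monte Carlo tree search over a POMDP. As a rough, generic illustration of the kind of selection rule such a search relies on (not DOCTOR's actual implementation, whose state space, rollouts, and constants are described in the paper), a standard UCT score in Python looks like this:

```python
import math

def uct_score(total_value, visits, parent_visits, c=1.4):
    """Standard UCT score for picking which child node (e.g., a candidate
    allocation of test kits) to explore next in Monte Carlo tree search.
    The exploration constant c is a generic default, not DOCTOR's."""
    if visits == 0:
        return float("inf")  # always expand unvisited actions first
    exploitation = total_value / visits
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration
```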
Fatehkia M, Coles B, Ofli F, Weber I. The Relative Value of Facebook Advertising Data for Poverty Mapping, in IJCAI 2021 Workshop on AI for Social Good; 2021.
Having reliable and up-to-date poverty data is a prerequisite for monitoring the United Nations Sustainable Development Goals (SDGs) and for planning effective poverty reduction interventions. Unfortunately, traditional data sources are often outdated or lacking appropriate disaggregation. As a remedy, satellite imagery has recently become prominent in obtaining geographically fine-grained and up-to-date poverty estimates. Satellite data can pick up signals of economic activity by detecting light at night, development status by detecting infrastructure such as roads, and individual household wealth by detecting different building footprints and roof types. It cannot, however, look inside households and pick up signals from individuals. On the other hand, alternative data sources such as audience estimates from Facebook’s advertising platform provide insights into the devices and internet connection types used by individuals in different locations. Previous work has shown the value of such anonymous, publicly accessible advertising data from Facebook for studying migration, gender gaps, crime rates, and health, among others. In this work, we evaluate the added value of using Facebook data over satellite data for mapping socioeconomic development in two low- and middle-income countries – the Philippines and India. We show that Facebook features perform roughly on par with satellite data in the Philippines, with added value for urban locations. In India, however, where Facebook penetration is lower, satellite data perform better.
The Relative Value of Facebook Advertising Data for Poverty Mapping
Scarlett J, Teh N, Zick Y. For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods, in IJCAI 2021 Workshop on AI for Social Good; 2021.
Traditionally, research into the fair allocation of indivisible goods has focused on individual fairness and group fairness. In this paper, we explore the co-existence of individual envy-freeness (i-EF) and its group counterpart, group weighted envy-freeness (g-WEF). We propose several polynomial-time algorithms that can provably achieve i-EF and g-WEF simultaneously in various degrees of approximation under three different conditions on the agents’ valuation functions: (i) when agents have identical additive valuation functions, i-EFX and g-WEF1 can be achieved simultaneously; (ii) when agents within a group share a common valuation function, an allocation satisfying both i-EF1 and g-WEF1 exists; and (iii) when agents’ valuations for goods within a group differ, we show that while maintaining i-EF1, we can achieve a 1/3-approximation to g-WEF1 in expectation. In addition, we introduce several novel fairness characterizations that exploit inherent group structures and their relation to individuals, such as proportional envy-freeness and group stability. We show that our algorithms can guarantee these properties approximately in polynomial time. Our results thus provide a first step into connecting individual and group fairness in the allocation of indivisible goods.
For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods
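For readers unfamiliar with the individual fairness notion above, the sketch below checks envy-freeness up to one good (i-EF1) for additive valuations. It is a definition check only, not one of the paper's allocation algorithms, and the input format is a made-up convention.

```python
def is_i_ef1(valuations, bundles):
    """valuations[i][g]: agent i's additive value for good g (assumed >= 0).
    bundles[i]: iterable of goods assigned to agent i."""
    n = len(bundles)
    for i in range(n):
        own = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(valuations[i][g] for g in bundles[j])
            if own >= other:
                continue  # i does not envy j
            # EF1: envy must vanish after dropping some single good from j's bundle
            if not any(own >= other - valuations[i][g] for g in bundles[j]):
                return False
    return True
```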
Komiyama J, Noda S. On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach, in IJCAI 2021 Workshop on AI for Social Good; 2021.
We analyze statistical discrimination using a multi-armed bandit model where myopic firms face candidate workers arriving with heterogeneous observable characteristics. The association between the worker’s skill and characteristics is unknown ex ante; thus, firms need to learn it. In such an environment, laissez-faire may result in a highly unfair and inefficient outcome—myopic firms are reluctant to hire minority workers because the lack of data about minority workers prevents accurate estimation of their performance. Consequently, minority groups could be perpetually underestimated—they are never hired, and therefore, data about them is never accumulated. We prove that this problem becomes more serious when the population ratio is imbalanced, as is the case in many extant discrimination problems. We consider two affirmative-action policies for solving this dilemma: one is a subsidy rule based on the popular upper confidence bound algorithm, and the other is the Rooney Rule, which requires firms to interview at least one minority worker for each hiring opportunity. Our results indicate that temporary affirmative actions are effective against statistical discrimination caused by data insufficiency.
On Statistical Discrimination as a Failure of Social Learning: A Multi-Armed Bandit Approach
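To make the data-scarcity failure mode concrete, here is a toy simulation of the optimism-based remedy: a UCB1 learner choosing among groups of candidate workers. The group structure, Gaussian reward model, and constants are illustrative assumptions rather than the paper's model or its subsidy rule; a purely myopic firm would instead always pick the highest empirical mean and can permanently ignore an under-sampled minority group.

```python
import math
import random

def ucb1_hiring(true_skill_means, horizon, seed=0):
    """Toy UCB1 over 'groups' of workers; returns how often each group was hired.
    The optimism bonus plays the role of the subsidy discussed in the abstract."""
    rng = random.Random(seed)
    n = len(true_skill_means)
    counts, means = [0] * n, [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # hire from every group once
        else:
            arm = max(range(n),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = rng.gauss(true_skill_means[arm], 1.0)  # observed worker skill
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts

print(ucb1_hiring([0.5, 0.6], horizon=2000))  # both groups keep getting sampled
```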
Mohla S, Bagh B, Guha A. A Material Lens to Investigate the Gendered Impact of the AI Industry, in IJCAI 2021 Workshop on AI for Social Good; 2021.

Artificial Intelligence (AI), as a collection of technologies, but more so as a growing component of the global mode of production, has a significant impact on gender, specifically gendered labour. In this position paper we argue that the dominant aspect of the AI industry’s impact on gender is not the production and reproduction of epistemic biases, which is the focus of contemporary research, but rather a material impact. We draw attention to how, as part of a larger economic structure, the AI industry is altering the nature of work, expanding platformisation, and thus increasing precarity, which is pushing women out of the labour force. We state that this is a neglected concern and a specific challenge worthy of attention for the AI research community.

A Material Lens to Investigate the Gendered Impact of the AI Industry
Kantor CA, Skreta M, Rauby B, Boussioux L, Jehanno E, Luccioni A, Rolnick D, Talbot H. Geo-Spatiotemporal Features and Shape-Based Prior Knowledge for Fine-grained Imbalanced Data Classification, in IJCAI 2021 Workshop on AI for Social Good; 2021.
Fine-grained classification aims at distinguishing between items with similar global perception and patterns but that differ in minute details. The primary challenges come from both small inter-class variations and large intra-class variations. In this article, we propose to combine several innovations to improve fine-grained classification within the use-case of fauna, which is of practical interest for experts. We utilize geo-spatiotemporal data to enrich the picture information and further improve performance. We also investigate state-of-the-art methods for handling the imbalanced data issue.
Geo-spatiotemporal Features and Shape-based Prior Knowledge for Fine-grained Imbalanced Data Classification
Crayton A, Fonseca J, Mehra K, Ng M, Ross J, Sandoval-Castañeda M, von Gnecht R. Narratives and Needs: Analyzing Experiences of Cyclone Amphan Using Twitter Discourse, in IJCAI 2021 Workshop on AI for Social Good; 2021.
People often turn to social media to comment upon and share information about major global events. Accordingly, social media is receiving increasing attention as a rich data source for understanding people’s social, political and economic experiences of extreme weather events. In this paper, we contribute two novel methodologies that leverage Twitter discourse to characterize narratives and identify unmet needs in response to Cyclone Amphan, which affected 18 million people in May 2020.
Narratives and Needs: Analyzing Experiences of Cyclone Amphan Using Twitter Discourse
Kolenik T, Gams M. Increasing Mental Health Care Access with Persuasive Technology for Social Good, in IJCAI 2021 Workshop on AI for Social Good; 2021.
The alarming trend of increasing mental health problems and the global inability to find effective ways to address them is hampering both individual and societal good. Barriers to accessing mental health care are many and high, ranging from socio-economic inequalities to personal stigmas. This gives technology, especially technology based in artificial intelligence, the opportunity to help alleviate the situation and offer unique solutions. The multi- and interdisciplinary research on persuasive technology, which attempts to change behavior or attitudes without deception or coercion, shows promise in improving well-being, which results in increased equality and social good. This paper presents such systems with a brief overview of the field, and offers general, technical and critical thoughts on their implementation as well as their impact. We believe that such technology can complement existing mental health care solutions to reduce inequalities in access as well as inequalities resulting from the lack of it.
Increasing Mental Health Care Access with Persuasive Technology for Social Good
Chen X, Liu Z. The Fairness of Leximin in Allocation of Indivisible Chores; 2021.
The leximin solution — which selects an allocation that maximizes the minimum utility, then the second minimum utility, and so forth — is known to provide EFX (envy-free up to any good) fairness guarantee in some contexts when allocating indivisible goods. However, it remains unknown how fair the leximin solution is when used to allocate indivisible chores. In this paper, we demonstrate that the leximin solution can be modified to also provide compelling fairness guarantees for the allocation of indivisible chores. First, we generalize the definition of the leximin solution. Then, we show that the leximin solution finds a PROP1 (proportional up to one good) and PO (Pareto-optimal) allocation for 3 or 4 agents in the context of chores allocation with additive distinct valuations. Additionally, we prove that the leximin solution is EFX for combinations of goods and chores for agents with general but identical valuations.
The Fairness of Leximin in Allocation of Indivisible Chores
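As a reference point for the leximin objective itself (not the paper's generalization or its constructive guarantees), a brute-force implementation over a toy chore-allocation instance might look like the following; the enumeration is exponential and purely illustrative.

```python
from itertools import product

def leximin_allocation(valuations):
    """valuations[i][c]: agent i's (typically negative) utility for chore c.
    Returns the assignment maximizing the sorted utility vector lexicographically:
    first the minimum utility, then the second minimum, and so on."""
    n, m = len(valuations), len(valuations[0])
    best_key, best_assignment = None, None
    for assignment in product(range(n), repeat=m):  # assignment[c] = agent of chore c
        utils = [0.0] * n
        for c, i in enumerate(assignment):
            utils[i] += valuations[i][c]
        key = tuple(sorted(utils))
        if best_key is None or key > best_key:
            best_key, best_assignment = key, assignment
    return best_assignment, best_key
```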
Suriyakumar VM, Papernot N, Goldenberg A, Ghassemi M. Challenges of Differentially Private Prediction in Healthcare Settings, in IJCAI 2021 Workshop on AI for Social Good; 2021.
Privacy-preserving machine learning is becoming increasingly important as models are being used on sensitive data such as electronic health records. Differential privacy is considered the gold standard framework for achieving strong privacy guarantees in machine learning. Yet, the performance implications of learning with differential privacy have not been characterized in the presence of time-varying hospital policies, care practices, and known class imbalance present in health data. First, we demonstrate that due to the long-tailed nature of healthcare data, learning with differential privacy results in poor utility tradeoffs. Second, we demonstrate through an application of influence functions that learning with differential privacy leads to disproportionate influence from the majority group on model predictions, which results in negative consequences for utility and fairness. Our results highlight important implications of differentially private learning, which by design focuses on learning the body of a distribution to protect privacy but omits important information contained in the tails of healthcare data distributions.
Challenges of Differentially Private Prediction in Healthcare Settings
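The utility and fairness effects described above trace back to how differentially private training bounds each example's influence. The following is a schematic of per-example clipping and Gaussian noising in the style of DP-SGD, not the authors' experimental pipeline; parameter values are placeholders, and no privacy accounting is shown.

```python
import numpy as np

def private_mean_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each per-example gradient to clip_norm, sum, add Gaussian noise
    calibrated to the clip, then average. Rare subgroups in the distribution's
    tails contribute at most clip_norm each and are then masked by noise of
    comparable scale, one intuition for the majority group's dominant influence."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    total = total + rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return total / len(per_example_grads)
```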
Zhu Z, Nair V, Olmschenk G, Seiple WH. ASSIST: Assistive Sensor Solutions for Independent and Safe Travel of Blind and Visually Impaired People, in IJCAI 2021 Workshop on AI for Social Good; 2021.
This paper describes the interface and testing of an indoor navigation app - ASSIST - that guides blind & visually impaired (BVI) individuals through an indoor environment with high accuracy while augmenting their understanding of the surrounding environment. ASSIST features personalized interfaces by considering the unique experiences that BVI individuals have in indoor wayfinding and offers multiple levels of multimodal feedback. After an overview of the technical approach and implementation of the first prototype of the ASSIST system, the results of two pilot studies performed with BVI individuals are presented. Our studies show that ASSIST is useful in providing users with navigational guidance, improving their efficiency and (more significantly) their safety and accuracy in wayfinding indoors.
ASSIST: Assistive Sensor Solutions for Independent and Safe Travel of Blind and Visually Impaired People
2020
Xu L, Bondi E, Fang F, Perrault A, Wang K, Tambe M. Dual-Mandate Patrols: Multi-Armed Bandits for Green Security. arXiv:2009.06560 [cs, stat]. 2020. Publisher's Version.
Conservation efforts in green security domains to protect wildlife and forests are constrained by the limited availability of defenders (i.e., patrollers), who must patrol vast areas to protect from attackers (e.g., poachers or illegal loggers). Defenders must choose how much time to spend in each region of the protected area, balancing exploration of infrequently visited regions and exploitation of known hotspots. We formulate the problem as a stochastic multi-armed bandit, where each action represents a patrol strategy, enabling us to guarantee the rate of convergence of the patrolling policy. However, a naive bandit approach would compromise short-term performance for long-term optimality, resulting in animals poached and forests destroyed. To speed up performance, we leverage smoothness in the reward function and decomposability of actions. We show a synergy between Lipschitz continuity and decomposition as each aids the convergence of the other. In doing so, we bridge the gap between combinatorial and Lipschitz bandits, presenting a no-regret approach that tightens existing guarantees while optimizing for short-term performance. We demonstrate that our algorithm, LIZARD, improves performance on real-world poaching data from Cambodia.
aaai21_dual_mandate_patrols.pdf
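One ingredient of the approach is easy to show in isolation: Lipschitz continuity of expected reward in patrol effort lets the upper confidence bound at one effort level tighten the bounds at nearby levels. The sketch below assumes made-up arm values and a known Lipschitz constant; LIZARD itself combines this with decomposition across targets inside the full bandit loop.

```python
import numpy as np

def lipschitz_tightened_ucbs(base_ucbs, efforts, lipschitz_constant):
    """If |mu(x_i) - mu(x_j)| <= L * |x_i - x_j|, then for every arm i,
    min_j (ucb_j + L * |x_i - x_j|) is still a valid upper confidence bound."""
    base = np.asarray(base_ucbs, dtype=float)
    efforts = np.asarray(efforts, dtype=float)
    dist = np.abs(efforts[:, None] - efforts[None, :])  # pairwise |x_i - x_j|
    return np.min(base[None, :] + lipschitz_constant * dist, axis=1)

# Example: the outer arms' loose bounds are tightened via the middle arm.
print(lipschitz_tightened_ucbs([0.9, 0.4, 0.95], efforts=[0.0, 0.5, 1.0],
                               lipschitz_constant=0.3))
```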
Prins A, Mate A, Killian JA, Abebe R, Tambe M. Incorporating Healthcare Motivated Constraints in Restless Bandit Based Resource Allocation. NeurIPS 2020 Workshops: Challenges of Real World Reinforcement Learning, Machine Learning in Public Health (Best Lightning Paper), Machine Learning for Health (Best on Theme), Machine Learning for the Developing World. 2020.
human_in_the_loop_rmab_short.pdf
Mate A*, Killian J*, Xu H, Perrault A, Tambe M. Collapsing Bandits and their Application to Public Health Interventions. Advances in Neural Information Processing Systems (NeurIPS). 2020. Publisher's Version.
We propose and study Collapsing Bandits, a new restless multi-armed bandit (RMAB) setting in which each arm follows a binary-state Markovian process with a special structure: when an arm is played, the state is fully observed, thus “collapsing” any uncertainty, but when an arm is passive, no observation is made, thus allowing uncertainty to evolve. The goal is to keep as many arms in the “good” state as possible by planning a limited budget of actions per round. Such Collapsing Bandits are natural models for many healthcare domains in which health workers must simultaneously monitor patients and deliver interventions in a way that maximizes the health of their patient cohort. Our main contributions are as follows: (i) Building on the Whittle index technique for RMABs, we derive conditions under which the Collapsing Bandits problem is indexable. Our derivation hinges on novel conditions that characterize when the optimal policies may take the form of either “forward” or “reverse” threshold policies. (ii) We exploit the optimality of threshold policies to build fast algorithms for computing the Whittle index, including a closed form. (iii) We evaluate our algorithm on several data distributions including data from a real-world healthcare task in which a worker must monitor and deliver interventions to maximize their patients’ adherence to tuberculosis medication. Our algorithm achieves a 3-order-of-magnitude speedup compared to state-of-the-art RMAB techniques, while achieving similar performance.
collapsing_bandits_full_paper_camready.pdf
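The "collapsing" structure is easiest to see in the belief dynamics of a single arm. Below is a minimal sketch of how uncertainty evolves while an arm is passive; the transition probabilities are placeholders, not values fitted to the medication-adherence data used in the paper. When the arm is played, the state is observed and the belief collapses to 0 or 1; the Whittle-index machinery in the paper decides which arms merit that observation and intervention each round.

```python
def passive_belief_trajectory(b0, p01, p11, horizon):
    """Belief that a binary-state Markovian arm is in the 'good' state while it
    is left passive (no observation). p01 = P(good next | bad now),
    p11 = P(good next | good now)."""
    beliefs, b = [b0], b0
    for _ in range(horizon):
        b = b * p11 + (1.0 - b) * p01  # one step of the Markov chain on the belief
        beliefs.append(b)
    return beliefs

# Starting from a just-observed 'good' state, the belief drifts toward the
# chain's stationary probability of being good.
print(passive_belief_trajectory(b0=1.0, p01=0.2, p11=0.9, horizon=5))
```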
Wang K, Wilder B, Perrault A, Tambe M. Automatically Learning Compact Quality-aware Surrogates for Optimization Problems, in NeurIPS (Spotlight). Vancouver, Canada; 2020.
Solving optimization problems with unknown parameters often requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values. Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality. Unfortunately, this process comes at a large computational cost because the optimization problem must be solved and differentiated through in each training iteration; furthermore, it may also sometimes fail to improve solution quality due to non-smoothness issues that arise when training through a complex optimization layer. To address these shortcomings, we learn a low-dimensional surrogate model of a large optimization problem by representing the feasible space in terms of meta-variables, each of which is a linear combination of the original variables. By training a low-dimensional surrogate model end-to-end, and jointly with the predictive model, we achieve: i) a large reduction in training and inference time; and ii) improved performance by focusing attention on the more important variables in the optimization and learning in a smoother space. Empirically, we demonstrate these improvements on a non-convex adversary modeling task, a submodular recommendation task and a convex portfolio optimization task.
automatically-learning-compact-quality-aware-surrogates-for-optimization-problems-paper.pdf
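The central reparameterization is compact to state: the full decision vector is expressed through a few meta-variables, each a linear combination of the original variables, so the differentiable optimization layer only handles the low-dimensional space. A minimal numpy sketch with made-up dimensions follows; the paper learns the basis jointly with the predictive model and handles feasibility, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_original, n_meta = 1000, 10          # illustrative dimensions only

A = rng.random((n_original, n_meta))   # meta-variable basis (learned in the paper)
z = rng.random(n_meta)                 # low-dimensional decision (meta-variables)
x = A @ z                              # full decision as a linear combination

print(x.shape)                         # (1000,): the optimization layer sees only z
```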
Perrault A, Fang F, Sinha A, Tambe M. AI for Social Impact: Learning and Planning in the Data-to-Deployment Pipeline. AI Magazine. 2020.
With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. In pursuit of this goal of AI for Social Impact, we as AI researchers must go beyond improvements in computational methodology; it is important to step out in the field to demonstrate social impact. To this end, we focus on the problems of public safety and security, wildlife conservation, and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. We present case studies from our deployments around the world as well as lessons learned that we hope are of use to researchers who are interested in AI for Social Impact. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.
2001.00088.pdf
Sharma A, Killian J, Perrault A. Optimization of the Low-Carbon Energy Transition Under Static and Adaptive Carbon Taxes via Markov Decision Processes, in AI for Social Good Workshop; 2020.

Many economists argue that a national carbon tax would be the most effective policy for incentivizing the development of low-carbon energy technologies. Yet existing models that measure the effects of a carbon tax only consider carbon taxes with fixed schedules. We propose a simple energy system transition model based on a finite-horizon Markov Decision Process (MDP) and use it to compare the carbon emissions reductions achieved by static versus adaptive carbon taxes. We find that in most cases, adaptive taxes achieve equivalent if not lower emissions trajectories while reducing the cost burden imposed by the carbon tax. However, the MDP optimization in our model adapted optimal policies to take advantage of the expected carbon tax adjustment, which sometimes resulted in the simulation missing its emissions targets.

Optimization of the Low-Carbon Energy Transition Under Static and Adaptive Carbon Taxes via Markov Decision Processes
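The comparison above rests on solving a finite-horizon MDP, which can be done by backward induction. The sketch below uses generic states, actions, and rewards; the paper's energy-system states, carbon-tax adjustments, and cost model are much richer and are not reproduced.

```python
import numpy as np

def backward_induction(P, R, horizon):
    """P[a]: (S, S) transition matrix and R[a]: (S,) reward vector for action a.
    Returns a time-dependent policy and the optimal value at the first step."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    policy = np.zeros((horizon, n_states), dtype=int)
    for t in reversed(range(horizon)):
        Q = np.stack([R[a] + P[a] @ V for a in range(n_actions)])  # (A, S)
        policy[t] = Q.argmax(axis=0)
        V = Q.max(axis=0)
    return policy, V
```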
Jonnerby J, Lazos P, Lock E, Marmolejo-Cossío F, Ramsey CB, Sridhar D. Test and Contain: A Resource-Optimal Testing Strategy for COVID-19, in AI for Social Good Workshop; 2020.

We propose a novel testing and containment strategy to limit the spread of SARS-CoV-2 while minimising the impact on the social and economic fabric of countries struggling with the pandemic. Our approach recognises the fact that testing capacities in many low- and middle-income countries (LMICs) are severely constrained. In this setting, we show that the best way to utilise a limited number of tests during a pandemic can be found by solving an allocation problem. Our problem formulation takes into account the heterogeneity of the population and uses pooled testing to identify and isolate individuals while prioritising key workers and individuals with a higher risk of spreading the disease. In order to demonstrate the efficacy of our strategy, we perform simulations using a network-based SIR model. Our simulations indicate that applying our mechanism to a population of 10,000 individuals with only 1 test per day reduces the peak number of infected individuals by approximately 27%, when compared to the scenario where no intervention is implemented, and requires at most 2% of the population to self-isolate at any given point.

Test and Contain: A Resource-Optimal Testing Strategy for COVID-19
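The evaluation environment described above is a network SIR simulation; the allocation mechanism (pooled testing with prioritisation of key workers and likely spreaders) sits on top of it. Below is a minimal sketch of a single simulation step only, with illustrative parameters rather than the paper's calibration; testing and isolation would be modelled by removing detected infectious nodes' edges before each step.

```python
import numpy as np

def sir_step(adjacency, state, beta=0.05, gamma=0.1, rng=None):
    """One step of a discrete-time network SIR model (state: 0=S, 1=I, 2=R)."""
    rng = rng or np.random.default_rng(0)
    infectious = state == 1
    pressure = adjacency @ infectious.astype(float)   # number of infectious contacts
    p_infect = 1.0 - (1.0 - beta) ** pressure         # per-node infection probability
    new_inf = (state == 0) & (rng.random(state.size) < p_infect)
    new_rec = infectious & (rng.random(state.size) < gamma)
    nxt = state.copy()
    nxt[new_inf], nxt[new_rec] = 1, 2
    return nxt
```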
Leavy S, O’Sullivan B, Siapera E. Data, Power and Bias in Artificial Intelligence, in AI for Social Good Workshop; 2020.

Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberty. Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes that may be learned and perpetuated in society. Attempts to address this issue are rapidly emerging from different perspectives involving technical solutions, social justice and data governance measures. While each of these approaches is essential to the development of a comprehensive solution, the discourse associated with each often seems disparate. This paper reviews ongoing work to ensure data justice, fairness and bias mitigation in AI systems from different domains, exploring the interrelated dynamics of each and examining whether the inevitability of bias in AI training data may in fact be used for social good. We highlight the complexity associated with defining policies for dealing with bias. We also consider technical challenges in addressing issues of societal bias.

Data, Power and Bias in Artificial Intelligence
