Publications by Type: Conference Paper

2021
Chen X, Liu Z. The Fairness of Leximin in Allocation of Indivisible Chores; 2021.
The leximin solution, which selects an allocation that maximizes the minimum utility, then the second minimum utility, and so forth, is known to provide an EFX (envy-free up to any good) fairness guarantee in some contexts when allocating indivisible goods. However, it remains unknown how fair the leximin solution is when used to allocate indivisible chores. In this paper, we demonstrate that the leximin solution can be modified to also provide compelling fairness guarantees for the allocation of indivisible chores. First, we generalize the definition of the leximin solution. Then, we show that the leximin solution finds a PROP1 (proportional up to one good) and PO (Pareto-optimal) allocation for 3 or 4 agents in the context of chores allocation with additive distinct valuations. Additionally, we prove that the leximin solution is EFX for combinations of goods and chores for agents with general but identical valuations.
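
For intuition, the leximin rule itself is easy to state in code. Below is a minimal brute-force sketch for tiny instances (the function name and toy values are ours; the paper's contribution is the generalized definition and its guarantees, not this enumeration):

    from itertools import product

    def leximin_allocation(valuations):
        """Pick the allocation whose sorted utility vector (lowest first)
        is lexicographically largest. valuations[i][j] = agent i's value
        for item j; use negative values for chores."""
        n, m = len(valuations), len(valuations[0])
        best_key, best_alloc = None, None
        for assign in product(range(n), repeat=m):   # n^m assignments
            utils = [0] * n
            for item, agent in enumerate(assign):
                utils[agent] += valuations[agent][item]
            key = tuple(sorted(utils))   # min utility, then second-min, ...
            if best_key is None or key > best_key:
                best_key, best_alloc = key, assign
        return best_alloc, best_key

    # Toy chores instance: two agents, three chores (negative utilities).
    print(leximin_allocation([[-3, -1, -2], [-2, -2, -1]]))
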
Suriyakumar VM, Papernot N, Goldenberg A, Ghassemi M. Challenges of Differentially Private Prediction in Healthcare Settings, in IJCAI 2021 Workshop on AI for Social Good; 2021.
Privacy-preserving machine learning is becoming increasingly important as models are being used on sensitive data such as electronic health records. Differential privacy is considered the gold standard framework for achieving strong privacy guarantees in machine learning. Yet, the performance implications of learning with differential privacy have not been characterized in the presence of time-varying hospital policies, care practices, and the known class imbalance present in health data. First, we demonstrate that due to the long-tailed nature of healthcare data, learning with differential privacy results in poor utility tradeoffs. Second, we demonstrate through an application of influence functions that learning with differential privacy leads to disproportionate influence from the majority group on model predictions, which results in negative consequences for utility and fairness. Our results highlight important implications of differentially private learning, which by design focuses on learning the body of a distribution to protect privacy but omits important information contained in the tails of healthcare data distributions.
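
For readers unfamiliar with the mechanism under study: differentially private training is commonly implemented as DP-SGD, which clips each per-example gradient and adds Gaussian noise; the clipping is exactly what suppresses the distribution tails the abstract refers to. A minimal sketch for logistic regression (all constants illustrative, not the paper's settings):

    import numpy as np

    def dp_sgd_step(w, X, y, clip=1.0, noise_mult=1.1, lr=0.1,
                    rng=np.random.default_rng(0)):
        """One DP-SGD step: clip each per-example gradient to L2 norm
        `clip`, sum, add Gaussian noise, then average and descend."""
        preds = 1.0 / (1.0 + np.exp(-X @ w))      # sigmoid predictions
        per_ex = (preds - y)[:, None] * X         # per-example gradients
        norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
        clipped = per_ex / np.maximum(1.0, norms / clip)
        noise = rng.normal(0.0, noise_mult * clip, size=w.shape)
        grad = (clipped.sum(axis=0) + noise) / len(X)
        return w - lr * grad
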
Zhu Z, Nair V, Olmschenk G, Seiple WH. ASSIST: Assistive Sensor Solutions for Independent and Safe Travel of Blind and Visually Impaired People, in IJCAI 2021 Workshop on AI for Social Good; 2021.
This paper describes the interface and testing of an indoor navigation app, ASSIST, that guides blind and visually impaired (BVI) individuals through an indoor environment with high accuracy while augmenting their understanding of the surrounding environment. ASSIST features personalized interfaces by considering the unique experiences that BVI individuals have in indoor wayfinding and offers multiple levels of multimodal feedback. After an overview of the technical approach and implementation of the first prototype of the ASSIST system, the results of two pilot studies performed with BVI individuals are presented. Our studies show that ASSIST is useful in providing users with navigational guidance, improving their efficiency and (more significantly) their safety and accuracy in wayfinding indoors.
2020
Wang K, Wilder B, Perrault A, Tambe M. Automatically Learning Compact Quality-aware Surrogates for Optimization Problems, in NeurIPS (Spotlight). Vancouver, Canada; 2020.
Solving optimization problems with unknown parameters often requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values. Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality. Unfortunately, this process comes at a large computational cost because the optimization problem must be solved and differentiated through in each training iteration; furthermore, it may also sometimes fail to improve solution quality due to non-smoothness issues that arise when training through a complex optimization layer. To address these shortcomings, we learn a low-dimensional surrogate model of a large optimization problem by representing the feasible space in terms of meta-variables, each of which is a linear combination of the original variables. By training a low-dimensional surrogate model end-to-end, and jointly with the predictive model, we achieve: i) a large reduction in training and inference time; and ii) improved performance by focusing attention on the more important variables in the optimization and learning in a smoother space. Empirically, we demonstrate these improvements on a non-convex adversary modeling task, a submodular recommendation task and a convex portfolio optimization task.
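
A rough sketch of the core reparameterization as we read it: the decision variable x is written as x = Pz for a learned map P and low-dimensional meta-variables z, and the inner solve over z is unrolled so gradients reach both P and the predictive model. Everything below (sizes, the quadratic stand-in objective, the inner gradient-ascent "solver") is illustrative, not the authors' code:

    import torch

    n_vars, n_meta, n_feats = 500, 10, 20
    P = torch.randn(n_vars, n_meta, requires_grad=True)   # surrogate map
    predictor = torch.nn.Linear(n_feats, n_vars)          # predicts parameters
    opt = torch.optim.Adam(list(predictor.parameters()) + [P], lr=1e-3)

    def quality(x, theta):
        # Stand-in decision objective (concave toy); an assumption.
        return (theta * x).sum() - 0.1 * (x ** 2).sum()

    features, theta_true = torch.randn(n_feats), torch.randn(n_vars)
    for _ in range(200):
        theta_hat = predictor(features)
        z = torch.zeros(n_meta, requires_grad=True)
        for _ in range(20):                  # differentiable inner "solver"
            g, = torch.autograd.grad(quality(P @ z, theta_hat), z,
                                     create_graph=True)
            z = z + 0.05 * g                 # ascent in meta-variable space
        loss = -quality(P @ z, theta_true)   # decision-focused loss
        opt.zero_grad(); loss.backward(); opt.step()
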
Sharma A, Killian J, Perrault A. Optimization of the Low-Carbon Energy Transition Under Static and Adaptive Carbon Taxes via Markov Decision Processes, in AI for Social Good Workshop; 2020.
Many economists argue that a national carbon tax would be the most effective policy for incentivizing the development of low-carbon energy technologies. Yet existing models that measure the effects of a carbon tax only consider carbon taxes with fixed schedules. We propose a simple energy system transition model based on a finite-horizon Markov Decision Process (MDP) and use it to compare the carbon emissions reductions achieved by static versus adaptive carbon taxes. We find that in most cases, adaptive taxes achieve equivalent, if not lower, emissions trajectories while reducing the cost burden imposed by the carbon tax. However, the MDP optimization in our model adapted optimal policies to take advantage of the expected carbon tax adjustment, which sometimes resulted in the simulation missing its emissions targets.
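
The underlying machinery is standard finite-horizon backward induction. A compact sketch (the arrays and their interpretation are illustrative; the paper's model adds the carbon-tax dynamics):

    import numpy as np

    def backward_induction(T, R, horizon):
        """Solve a finite-horizon MDP. T[s, a, s'] is the transition
        probability and R[s, a] the immediate reward, e.g. the negative
        of emissions plus tax burden (illustrative encoding)."""
        n_states, n_actions, _ = T.shape
        V = np.zeros(n_states)                     # terminal value
        policy = np.zeros((horizon, n_states), dtype=int)
        for t in reversed(range(horizon)):
            Q = R + T @ V                  # Q[s, a] = R[s, a] + E[V(s')]
            policy[t] = Q.argmax(axis=1)
            V = Q.max(axis=1)
        return policy, V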

Jonnerby J, Lazos P, Lock E, Marmolejo-Cossío F, Ramsey CB, Sridhar D. Test and Contain: A Resource-Optimal Testing Strategy for COVID-19, in AI for Social Good Workshop; 2020.
We propose a novel testing and containment strategy to limit the spread of SARS-CoV2 while minimising the impact on the social and economic fabric of countries struggling with the pandemic. Our approach recognises the fact that testing capacities in many low- and middle-income countries (LMICs) are severely constrained. In this setting, we show that the best way to utilise a limited number of tests during a pandemic can be found by solving an allocation problem. Our problem formulation takes into account the heterogeneity of the population and uses pooled testing to identify and isolate individuals, while prioritising key workers and individuals with a higher risk of spreading the disease. To demonstrate the efficacy of our strategy, we perform simulations using a network-based SIR model. Our simulations indicate that applying our mechanism to a population of 10,000 individuals with only 1 test per day reduces the peak number of infected individuals by approximately 27% compared to the scenario where no intervention is implemented, and requires at most 2% of the population to self-isolate at any given point.
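
To make the setting concrete, here is a toy version of the evaluation loop: a network SIR simulation in which a tiny daily test budget is spent and positives self-isolate. It prioritizes high-degree nodes as a crude stand-in for the paper's allocation mechanism (which additionally uses pooled testing and key-worker priorities); all parameters are invented:

    import random
    import networkx as nx

    def peak_infections(n=10_000, beta=0.03, gamma=0.1,
                        tests_per_day=1, days=300, seed=0):
        """Toy network SIR with a daily test budget; positives isolate."""
        rng = random.Random(seed)
        G = nx.erdos_renyi_graph(n, 5 / n, seed=seed)   # mean degree ~5
        state = {v: "S" for v in G}
        for v in rng.sample(list(G), 10):
            state[v] = "I"
        isolated, peak = set(), 0
        by_degree = sorted(G, key=G.degree, reverse=True)
        for _ in range(days):
            tested = [v for v in by_degree
                      if v not in isolated][:tests_per_day]
            isolated.update(v for v in tested if state[v] == "I")
            new_inf = {u for v in G
                       if state[v] == "I" and v not in isolated
                       for u in G.neighbors(v)
                       if state[u] == "S" and rng.random() < beta}
            for v in G:
                if state[v] == "I" and rng.random() < gamma:
                    state[v] = "R"
            for u in new_inf:
                state[u] = "I"
            peak = max(peak, sum(s == "I" for s in state.values()))
        return peak

    print(peak_infections(tests_per_day=0), peak_infections(tests_per_day=1))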

Leavy S, O’Sullivan B, Siapera E. Data, Power and Bias in Artificial Intelligence, in AI for Social Good Workshop; 2020.
Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberty. Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes that may be learned and perpetuated in society. Attempts to address this issue are rapidly emerging from different perspectives involving technical solutions, social justice and data governance measures. While each of these approaches is essential to the development of a comprehensive solution, the discourse associated with each often seems disparate. This paper reviews ongoing work from different domains to ensure data justice, fairness and bias mitigation in AI systems, exploring the interrelated dynamics of each and examining whether the inevitability of bias in AI training data may in fact be used for social good. We highlight the complexity associated with defining policies for dealing with bias. We also consider technical challenges in addressing issues of societal bias.

Nishtala S, Kamarthi H, Thakkar D, Narayanan D, Grama A, Hegde A, Padmanabhan R, Madhiwalla N, Chaudhary S, Ravindran B, et al. Missed Calls, Automated Calls and Health Support: Using AI to Improve Maternal Health Outcomes by Increasing Program Engagement, in AI for Social Good Workshop; 2020.
India accounts for 11% of maternal deaths globally, with a woman dying in childbirth every fifteen minutes. Lack of access to preventive care information is a significant problem contributing to high maternal morbidity and mortality numbers, especially in low-income households. We work with ARMMAN, a non-profit based in India, to further the use of call-based information programs by identifying, early on, women who might not engage with these programs, which are proven to positively affect health parameters. We analyzed anonymized call records of over 300,000 women registered in an awareness program created by ARMMAN that uses cellphone calls to regularly disseminate health-related information. We built robust deep learning-based models to predict short-term and long-term dropout risk from call logs and beneficiaries’ demographic information. Our model performs 13% better than competitive baselines for short-term forecasting and 7% better for long-term forecasting. We also discuss the applicability of this method in the real world through a pilot validation that uses our method to perform targeted interventions.

Khudabukhsh A, Palakodety S, Carbonell J. On NLP Methods Robust to Noisy Indian Social Media Data, in AI for Social Good Workshop; 2020.
Much of the computational social science research focusing on issues faced in developing nations concentrates on web content written in a world language, often ignoring a significant chunk of a corpus written in a poorly resourced yet highly prevalent first language of the region in concern. Such omissions are common and convenient due to the sheer mismatch between linguistic resources offered in a world language and its low-resource counterpart. However, the path to analyzing English content generated in linguistically diverse regions, such as the Indian subcontinent, is not straightforward either. Social science and AI for social good research focusing on Indian subcontinental issues faces two major Natural Language Processing (NLP) challenges: (1) how to extract a (reasonably clean) monolingual English corpus, and (2) how to extend resources and analyses to its low-resource counterpart. In this paper, we share NLP methods and lessons learnt from our multiple projects, and outline future focus areas that could be useful in tackling these two challenges. The discussed results are critical to two important domains: (1) detecting peace-seeking, hostility-diffusing hope speech in the context of the 2019 India-Pakistan conflict, and (2) detecting user-generated web content encouraging COVID-19 health compliance.
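
For challenge (1), a common first-pass filter is per-message language identification. A minimal sketch, assuming the langdetect package (the example comments are invented; real pipelines need more care with code-mixed text):

    from langdetect import DetectorFactory, detect

    DetectorFactory.seed = 0           # make detection deterministic

    comments = ["we want peace on both sides", "yeh sab bakwaas hai",
                "please wear a mask yaar", "hope the leaders talk soon"]
    english = []
    for text in comments:
        try:
            if detect(text) == "en":
                english.append(text)
        except Exception:              # too short / ambiguous to detect
            pass
    print(english)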

Brubach B, Srinivasan A, Zhao S. The Relationship between Gerrymandering Classification and Voter Incentives, in AI for Social Good Workshop; 2020.
Gerrymandering is the process of drawing electoral district maps in order to manipulate the outcomes of elections. Increasingly, computers are involved both in drawing biased districts and in attempts to measure and regulate this practice. The most high-profile proposals to measure partisan gerrymandering use past voting data to classify a map as gerrymandered (or not). Prior work studies the ability of these metrics to detect gerrymandering, but does not explore how the metrics could affect voter behavior or be circumvented via strategic voting. We show that using past voting data for this classification can affect strategyproofness by introducing a game which models the iterative sequence of voting and redrawing districts under regulation that bans outlier maps. In experiments, we show that a heuristic can find strategies for this game, including on real North Carolina maps and voting data. Finally, we address questions from a recent US Supreme Court case that relate to our model. This is a summary of “Meddling Metrics: the Effects of Measuring and Constraining Partisan Gerrymandering on Voter Incentives,” appearing in EC 2020.
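
For concreteness, the outlier-style classification these metrics support looks like the following sketch, using the mean-median gap (our choice of metric for illustration) against an ensemble of comparison plans:

    import numpy as np

    def mean_median_gap(shares):
        """Median minus mean of a party's district vote shares;
        near 0 for a symmetric plan."""
        v = np.asarray(shares)
        return np.median(v) - np.mean(v)

    def is_outlier(plan, ensemble, alpha=0.05):
        """Flag a plan whose metric falls in the tails of an ensemble
        of alternative maps (e.g. randomly sampled redistrictings)."""
        dist = np.array([mean_median_gap(p) for p in ensemble])
        lo, hi = np.quantile(dist, [alpha / 2, 1 - alpha / 2])
        return not (lo <= mean_median_gap(plan) <= hi)

The paper's point is that when voters know such a test will be applied to their past votes, they may vote strategically to push a map into or out of the flagged region.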

Bayramli I, Bondi E, Tambe M. In the Shadow of Disaster: Finding Shadows to Improve Damage Detection, in AI for Social Good Workshop; 2020.
Rapid damage assessment after natural disasters is crucial for effective planning of relief efforts. Satellites with Very High Resolution (VHR) sensors can provide a detailed aerial image of the affected area, but current damage detection systems are fully- or semi-manual, which can delay the delivery of emergency care. In this paper, we apply recent advancements in segmentation and change detection to detect damage given pre- and post-disaster VHR images of an affected area. Moreover, we demonstrate that segmentation models trained for this task rely on shadows by showing that (i) shadows influence false positive detections by the model, and (ii) removing shadows leads to poorer performance. Through this analysis, we aim to inspire future work to improve damage detection.

Zhang A, Perrault A. Influence Maximization and Equilibrium Strategies in Election Network Games, in AI for Social Good Workshop; 2020.
Social media has become an increasingly important political domain in recent years, especially for campaign advertising. In this work, we develop a linear model of advertising influence maximization in two-candidate elections from the viewpoint of a fully informed social network platform, using several variations on classical DeGroot dynamics to model different features of electoral opinion formation. We consider two types of candidate objectives: margin of victory (maximizing total votes earned) and probability of victory (maximizing the probability of earning the majority). We show key theoretical differences in the corresponding games, including advertising strategies for arbitrarily large networks and the existence of pure Nash equilibria. Finally, we contribute efficient algorithms for computing mixed equilibria in the margin-of-victory case as well as influence-maximizing best-response algorithms in both cases, and show that in practice, as implemented on the Adolescent Health Dataset, they contribute to campaign equality by minimizing the advantage of the higher-spending candidate.
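
The classical DeGroot dynamics the model builds on are a one-liner: opinions are repeatedly averaged through a row-stochastic trust matrix. A toy sketch (matrix and numbers invented):

    import numpy as np

    def degroot(W, x0, steps=100):
        """DeGroot dynamics x_{t+1} = W x_t; W row-stochastic,
        x0 holds opinions in [-1, 1] (sign = preferred candidate)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = W @ x
        return x

    W = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])
    base = np.array([0.10, -0.20, 0.05])
    print(degroot(W, base))                      # organic consensus
    print(degroot(W, base + [0.3, 0.0, 0.0]))    # after ads target voter 0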

Nwankwo E, Okolo C, Habonimana C. Topic Modeling Approaches for Understanding COVID-19 Misinformation Spread in Sub-Saharan Africa, in AI for Social Good Workshop; 2020.
Since the start of the pandemic, the proliferation of fake news and misinformation has been a constant battle for health officials and policy makers as they work to curb the spread of COVID-19. In areas within the Global South, it can be difficult for officials to keep track of the growth of such false information and even harder to address the real concerns their communities have. In this paper, we present some techniques the AI community can offer to help address this issue. While the topics presented within this paper are not a complete solution, we believe they could complement the work government officials, healthcare workers, and NGOs are currently doing on the ground in Sub-Saharan Africa.
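
As a concrete illustration of the technique in the title, a few lines of scikit-learn fit an LDA topic model over a message corpus (the four example posts are invented):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    posts = ["5g towers spread corona", "drink hot water to cure covid",
             "masks reduce transmission", "vaccine trial update in lagos"]
    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        print(f"topic {k}:", [terms[i] for i in weights.argsort()[-3:]])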

Wilder B, Charpignon M, Killian J, Ou H-C, Mate A, Jabbari S, Perrault A, Desai AN, et al. Inferring between-population differences in COVID-19 dynamics, in AI for Social Good Workshop; 2020.
As the COVID-19 pandemic continues, formulating targeted policy interventions supported by differential SARS-CoV2 transmission dynamics will be of vital importance to national and regional governments. We develop an individual-level model for SARS-CoV2 transmission that accounts for location-dependent distributions of age, household structure, and comorbidities. We use these distributions together with age-stratified contact matrices to instantiate specific models for Hubei, China; Lombardy, Italy; and New York, United States. We then develop a Bayesian inference framework which leverages data on reported deaths to obtain a posterior distribution over unknown parameters and infer differences in the progression of the epidemic in the three locations. These findings highlight the role of between-population variation in formulating policy interventions.
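
A heavily simplified version of the inference step: a grid posterior over a single transmission parameter, scoring each candidate by a Poisson likelihood of observed daily deaths under a deterministic SIR (the authors use a far richer individual-level model; the file name and constants are placeholders):

    import numpy as np
    from scipy.stats import poisson

    def expected_deaths(beta, days=60, n=1e6, gamma=0.1, ifr=0.01):
        """Deterministic SIR; returns expected daily deaths (toy)."""
        S, I = n - 1.0, 1.0
        out = []
        for _ in range(days):
            new_inf, new_rec = beta * S * I / n, gamma * I
            S, I = S - new_inf, I + new_inf - new_rec
            out.append(ifr * new_rec)
        return np.array(out)

    observed = np.loadtxt("daily_deaths.csv")   # integer counts, 60 days
    grid = np.linspace(0.1, 0.6, 51)            # candidate beta values
    logp = np.array([poisson.logpmf(observed,
                                    expected_deaths(b) + 1e-9).sum()
                     for b in grid])            # flat prior over the grid
    post = np.exp(logp - logp.max())
    post /= post.sum()
    print("posterior mean beta:", (grid * post).sum())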

Xu L, Perrault A, Plumptre A, Driciru M, Wanyama F, Rwetsiba A, Tambe M. Game Theory on the Ground: The Effect of Increased Patrols on Deterring Poachers, in AI for Social Good Workshop; 2020.
Applications of artificial intelligence for wildlife protection have focused on learning models of poacher behavior based on historical patterns. However, poachers’ behaviors are described not only by their historical preferences, but also by their reaction to ranger patrols. Past work applying machine learning and game theory to combat poaching has hypothesized that ranger patrols deter poachers, but has been unable to find evidence to identify how, or even if, deterrence occurs. Here, for the first time, we demonstrate a measurable deterrence effect on real-world poaching data. We show that increased patrols in one region deter poaching in the next timestep, but poachers then move to neighboring regions. Our findings offer guidance on how adversaries should be modeled in realistic game-theoretic settings.

Zhou Y, Kantarcioglu M. On Transparency of Machine Learning Models: A Position Paper, in AI for Social Good Workshop; 2020.
An ongoing challenge in machine learning is to improve the transparency of learning models, helping end users to build trust and defend fairness and equality while protecting individual privacy and information assets. Transparency is a timely topic given the increasing application of machine learning techniques in the real world, and yet much more progress is needed in addressing transparency issues. We propose critical research questions on transparency-aware machine learning on two fronts: know-how and know-that. Know-how is concerned with searching for a set of decision objects (e.g. functions, rules, lists, and graphs) that are cognitively fluent for humans to apply and consistent with the original complex model, while know-that is concerned with gaining a more in-depth understanding of the internal justification of the decisions through external constraints on accuracy, consistency, privacy, reliability, and fairness.

Johnston C, Blessenohl S, Vayanos P. Preference Elicitation and Aggregation to Aid with Patient Triage during the COVID-19 Pandemic, in AI for Social Good Workshop; 2020.
During the COVID-19 pandemic, committees have been appointed to make ethically difficult triage decisions, which are complicated by the diversity of stakeholder interests involved. We propose a disciplined, automated approach to support such difficult collective decision-making. Our system aims to recommend a policy to the group that strikes a compromise between potentially conflicting individual preferences. To identify a policy that best aggregates individual preferences, our system first elicits individual stakeholder value judgements by asking a moderate number of strategically selected queries, each taking the form of a pairwise comparison posed to a specific stakeholder. We propose a novel formulation of this problem that selects which queries to ask which individuals to best inform the downstream recommendation problem. Modeling this as a multi-stage robust optimization problem, we show that we can equivalently reformulate this as a mixed-integer linear program which can be solved with off-the-shelf solvers. We evaluate the performance of our approach on the problem of recommending policies for allocating critical care beds to patients with COVID-19. We show that asking questions intelligently allows the system to recommend a policy with a much lower regret than asking questions randomly. The lower regret suggests that the system is suited to help a committee reach a better decision by suggesting a policy that aligns with stakeholder value judgments.

Derval G, François-Lavet V, Schaus P. Nowcasting COVID-19 hospitalizations using Google Trends and LSTM, in AI for Social Good Workshop; 2020.
Google Trends data for some keywords have strong correlations with COVID-19 hospitalizations. We exploit these correlations and show an experimental procedure that uses a simple LSTM model to nowcast hospitalization peaks from Google Trends data. Experiments are run on French regions and on Belgium. This is preliminary work that would need to be tested during a (hopefully nonexistent) second peak.
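
In that spirit, the model can be as small as the following sketch: an LSTM over a window of daily keyword scores regressing next-day hospitalizations (sizes and data are placeholders, not the paper's configuration):

    import torch
    import torch.nn as nn

    class TrendsNowcaster(nn.Module):
        """Window of daily Google Trends scores for a few keywords
        -> next-day hospitalizations (illustrative sizes)."""
        def __init__(self, n_keywords=5, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(n_keywords, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):              # x: (batch, window, n_keywords)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])   # last hidden state -> scalar

    model = TrendsNowcaster()
    x, y = torch.rand(8, 14, 5), torch.rand(8)   # fake 14-day windows
    loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
    loss.backward()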

Padhee S, Saha TK, Tetreault J, Jaimes A. Clustering of Social Media Messages for Humanitarian Aid Response during Crisis, in AI for Social Good Workshop; 2020.
Social media has quickly grown into an essential tool for people to communicate and express their needs during crisis events. Prior work in analyzing social media data for crisis management has focused primarily on automatically identifying actionable (or informative) crisis-related messages. In this work, we show that recent advances in Deep Learning and Natural Language Processing outperform prior approaches for the task of classifying informativeness, and we encourage the field to adopt them for research or even deployment. We also extend these methods to two sub-tasks of informativeness and find that the Deep Learning methods are effective here as well.
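
For orientation, the informativeness task reduces to binary text classification. The paper's results favor recent deep models; the sketch below uses a deliberately simple TF-IDF baseline just to pin down inputs and outputs (examples invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    msgs = ["bridge collapsed near the market, people trapped",
            "praying for everyone tonight",
            "shelter at 5th street school has space and water",
            "can't believe this is happening"]
    labels = [1, 0, 1, 0]                  # 1 = informative/actionable

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression())
    clf.fit(msgs, labels)
    print(clf.predict(["road blocked on highway 9, send help"]))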

Yan C, Xu H, Vorobeychik Y, Li B, Fabbri D, Malin BA. To Warn or Not to Warn: Online Signaling in Audit Games, in AI for Social Good Workshop; 2020.
In health care organizations, a patient’s privacy is threatened by the misuse of their electronic health record (EHR). To monitor privacy intrusions, logging systems are often deployed to trigger alerts whenever a suspicious access is detected. However, such mechanisms are insufficient in the face of small budgets, strategic attackers, and large false positive rates. In an attempt to resolve these problems, EHR systems are increasingly incorporating signaling, so that whenever a suspicious access request occurs, the system can, in real time, warn the user that the access may be audited. This gives rise to an online problem in which one needs to determine 1) whether a warning should be triggered and 2) the likelihood that the data request will be audited later. In this paper, we formalize this auditing problem as a Signaling Audit Game (SAG). A series of experiments with 10 million real access events (containing over 26K alerts) from Vanderbilt University Medical Center (VUMC) demonstrate that a strategic presentation of warnings adds value in that SAGs realize significantly higher utility for the auditor than systems without signaling.