We present the design and analysis of a multi-level game-theoretic model of hierarchical policy-making, inspired by policy responses to the COVID-19 pandemic. Our model captures the potentially mismatched priorities among a hierarchy of policy-makers (e.g., federal, state, and local governments) with respect to two main cost components that depend on policy strength in opposite ways, such as post-intervention infection rates and the cost of policy implementation. Our model further includes a crucial third factor in decisions: a cost of non-compliance with the policy-maker immediately above in the hierarchy, such as non-compliance of states with federal policies. Our first contribution is a closed-form approximation of a recently published agent-based model to compute the number of infections for any implemented policy. Second, we present a novel equilibrium selection criterion that addresses common issues with equilibrium multiplicity in our setting. Third, we propose a hierarchical algorithm based on best response dynamics for computing an approximate equilibrium of the hierarchical policy-making game consistent with our solution concept. Finally, we present an empirical investigation of equilibrium policy strategies in this game as a function of game parameters, such as the degree of centralization, disagreements about policy priorities among the agents, the extent of free riding, and fairness in the distribution of costs.
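The interplay of infection cost, implementation cost, and a non-compliance penalty can be illustrated with a minimal two-level sketch. The quadratic cost form, all weights, and the closed-form best responses below are illustrative assumptions for exposition, not the paper's actual model:

```python
def state_best_response(f, a_s, b_s, c):
    """State minimizes infection cost a_s*(1-s)^2 + implementation cost
    b_s*s^2 + non-compliance penalty c*(s-f)^2; quadratic, so closed form."""
    s = (a_s + c * f) / (a_s + b_s + c)
    return min(max(s, 0.0), 1.0)

def federal_policy(a_f, b_f, a_s, b_s, c):
    """Federal level anticipates the state's (linear) best response
    s(f) = alpha + beta*f and minimizes a_f*(1-s(f))^2 + b_f*f^2."""
    d = a_s + b_s + c
    alpha, beta = a_s / d, c / d
    f = a_f * beta * (1.0 - alpha) / (a_f * beta ** 2 + b_f)
    return min(max(f, 0.0), 1.0)

# weak vs. strong non-compliance penalty (illustrative weights)
f_weak = federal_policy(a_f=1.0, b_f=0.5, a_s=1.0, b_s=1.0, c=1.0)
s_weak = state_best_response(f_weak, a_s=1.0, b_s=1.0, c=1.0)
f_strong = federal_policy(a_f=1.0, b_f=0.5, a_s=1.0, b_s=1.0, c=100.0)
s_strong = state_best_response(f_strong, a_s=1.0, b_s=1.0, c=100.0)
```

Raising the penalty weight `c` pulls the state's equilibrium policy toward the federal one, which is the qualitative effect the non-compliance cost is meant to capture.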
The disproportionate burden of obesity among low-socioeconomic-status (SES), Black, and Latinx households underscores the ever-increasing health disparities in the United States. This epidemic can be prevented by prioritizing physical activity interventions for these populations. However, many technology-based physical activity interventions have been designed for individuals in isolation rather than for individuals embedded in their social environment.
In this position paper, I report on my eight-year research and technology design process. Although this research was not specifically guided by asset-based design principles, assets consistently emerged during the in-depth fieldwork, including family relationships and caregiving communities. These assets appeared to influence the motivation to engage with health technologies and to enhance family physical activity self-efficacy. However, since my studies were not guided by asset-based design principles, some key assets may not have been sufficiently identified. By participating in this workshop, I seek to collaboratively explore how to support communities in identifying their assets, and how to translate those assets into technology designs that enhance health equity.
Physical activity (PA) is crucial for reducing the risk of obesity, an epidemic that disproportionately burdens families of low-socioeconomic status (SES). While fitness tracking tools can increase PA awareness, more work is needed to examine (1) how such tools can help people benefit from their social environment, and (2) how reflections can help enhance PA attitudes. We investigated how fitness tracking tools for families can support social modeling and self-modeling (through reflection), two critical processes in Social Cognitive Theory. We developed StoryMap, a novel fitness tracking app for families aimed at supporting both modes of modeling. Then, we conducted a five-week qualitative study evaluating StoryMap with 16 low-SES families. Our findings contribute an understanding of how social and self-modeling can be implemented in fitness tracking tools and how both modes of modeling can enhance key PA attitudes: self-efficacy and outcome expectations. Finally, we propose design recommendations for social personal informatics tools.
Clinical machine learning models experience significantly degraded performance in datasets not seen during training, e.g., new hospitals or populations. Recent developments in domain generalization offer a promising solution to this problem by creating models that learn invariances across environments. In this work, we benchmark the performance of eight domain generalization methods on multi-site clinical time series and medical imaging data. We introduce a framework to induce synthetic but realistic domain shifts and sampling bias to stress-test these methods beyond existing non-healthcare benchmarks. We find that current domain generalization methods do not achieve significant gains in out-of-distribution performance over empirical risk minimization on real-world medical imaging data, in line with prior work on general imaging datasets. However, a subset of realistic induced-shift scenarios in clinical time series data exhibits limited performance gains. We characterize these scenarios in detail, and recommend best practices for domain generalization in the clinical setting.
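The idea of inducing a synthetic but realistic shift can be sketched by resampling a multi-site dataset so that label prevalence differs across sites. The records, sites, and resampling rate below are illustrative assumptions, not the paper's framework:

```python
import random

random.seed(0)

# synthetic multi-site records: (site, label); both sites start balanced
records = [("A" if i % 2 == 0 else "B", int(random.random() < 0.5))
           for i in range(4000)]

def prevalence(recs, site):
    labels = [y for s, y in recs if s == site]
    return sum(labels) / len(labels)

def induce_label_shift(recs, site, keep_pos):
    """Resample so positives at `site` are kept only with prob keep_pos,
    skewing P(y=1 | site) to create a synthetic domain shift."""
    out = []
    for s, y in recs:
        if s == site and y == 1 and random.random() > keep_pos:
            continue
        out.append((s, y))
    return out

before = prevalence(records, "A")
shifted = induce_label_shift(records, "A", keep_pos=0.3)
after = prevalence(shifted, "A")
```

A model trained on site B and evaluated on the shifted site A would then face a controlled, known distribution shift, which is what makes induced shifts useful for stress-testing.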
Active screening is a common approach to controlling the spread of recurring infectious diseases such as tuberculosis and influenza. In this approach, health workers periodically select a subset of the population for screening. However, given the limited number of health workers, only a small subset of the population can be visited in any given time period. Given the recurrent nature of the disease and its rapid spread, the goal is to minimize the number of infections over a long time horizon. Active screening can be formalized as a sequential combinatorial optimization over the network of people and their connections. The main computational challenges in this formalization arise from i) the combinatorial nature of the problem, ii) the need for sequential planning, and iii) the uncertainties in the infectiousness states of the population.
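The setting can be illustrated with a toy simulation: disease spreads over a contact network, and a budget-limited screening policy cures any infected nodes it visits. The random graph, transmission probability, and the degree-based heuristic are all illustrative assumptions, not the methods studied in the paper:

```python
import random

random.seed(0)

# illustrative contact network: Erdos-Renyi-style random graph
N, p_edge = 60, 0.08
neighbors = {v: set() for v in range(N)}
for u in range(N):
    for v in range(u + 1, N):
        if random.random() < p_edge:
            neighbors[u].add(v)
            neighbors[v].add(u)

beta = 0.15   # per-contact transmission probability per step
budget = 5    # nodes health workers can screen per step

def simulate(horizon, pick_nodes):
    """SIS-style spread; screened nodes found infected are cured."""
    infected = set(random.sample(range(N), 5))
    total_infected_steps = 0
    for _ in range(horizon):
        infected -= pick_nodes(budget)   # cure screened infected nodes
        new = set(infected)
        for u in infected:               # spread along contacts
            for v in neighbors[u]:
                if v not in infected and random.random() < beta:
                    new.add(v)
        infected = new
        total_infected_steps += len(infected)
    return total_infected_steps

def random_screen(k):
    return set(random.sample(range(N), k))

def degree_screen(k):
    # simplistic static heuristic: screen the best-connected nodes
    return set(sorted(range(N), key=lambda v: -len(neighbors[v]))[:k])

cost_random = simulate(20, random_screen)
cost_degree = simulate(20, degree_screen)
```

Even this toy version exposes the three challenges named above: choosing `budget` nodes out of `N` is combinatorial, the choice matters over the whole horizon, and a realistic planner would only have noisy beliefs about who is infected rather than the simulator's ground truth.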
We present a new algorithm and theoretical results for solving Multi-action Multi-armed Restless Bandits, an important but insufficiently studied generalization of traditional Multi-armed Restless Bandits (MARBs). Multi-action MARBs can handle critical problem complexities often present in AI4SG domains such as anti-poaching and healthcare that traditional MARBs fail to capture. The limited previous work on Multi-action MARBs has addressed only specialized sub-problems. Here we derive BLam, an algorithm for general Multi-action MARBs that uses Lagrangian relaxation and convexity to quickly converge to good policies via bound optimization. We also provide experimental results comparing BLam to baselines on simulated distributions motivated by a real-world community health intervention task, achieving up to five-fold speedups over more general methods without sacrificing performance.
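The core Lagrangian-relaxation idea can be sketched on toy two-state arms: pricing the per-step budget at a penalty λ decouples the arms, each decoupled arm is solved independently, and the resulting upper bound is convex in λ, so it can be minimized by simple search. The MDP parameters, discount, and grid search below are illustrative assumptions; BLam's actual bound optimization is more sophisticated:

```python
import numpy as np

# toy multi-action arm: 2 states, 3 actions with increasing cost and effect
COSTS = np.array([0.0, 1.0, 2.0])     # per-action resource cost
REWARD = np.array([0.0, 1.0])         # reward of occupying state 1
# P[a, s, s']: stronger actions push an arm toward the good state
P = np.array([
    [[0.9, 0.1], [0.4, 0.6]],   # action 0 (passive)
    [[0.6, 0.4], [0.2, 0.8]],   # action 1
    [[0.3, 0.7], [0.1, 0.9]],   # action 2
])
GAMMA = 0.9

def arm_value(lam, s0, iters=300):
    """Value of one decoupled arm when each unit of cost is priced at lam."""
    V = np.zeros(2)
    for _ in range(iters):
        Q = REWARD[None, :] - lam * COSTS[:, None] + GAMMA * P @ V
        V = Q.max(axis=0)   # value iteration over the 3 actions
    return V[s0]

def lagrangian_bound(lam, start_states, budget):
    """Decoupled upper bound: sum of per-arm values plus the priced budget."""
    return sum(arm_value(lam, s) for s in start_states) \
        + lam * budget / (1 - GAMMA)

starts = [0, 0, 1, 0, 1]   # five identical arms, illustrative start states
B = 2.0                    # per-step resource budget
grid = np.linspace(0.0, 5.0, 51)
bounds = [lagrangian_bound(l, starts, B) for l in grid]
best_lam = float(grid[int(np.argmin(bounds))])
best_bound = min(bounds)
```

The convexity of the bound in λ is what makes a coarse one-dimensional search sufficient here, and it is the property BLam exploits for fast bound optimization.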
Attribution of natural disasters/collective misfortunes is a widely studied social science problem. At present, most such studies rely on surveys or external signals such as voting outcomes. Typically, these surveys are costly to conduct and often have considerable turnaround time. In contrast, procuring social media data is vastly cheaper, and such data can be obtained at varying spatiotemporal granularity. In this paper, we describe our recent work that looked into the viability of estimating attributions through social media discussions. To this end, (1) we focus on the 2019 Chennai water crisis, a major instance of a recent environmental resource crisis; (2) construct a substantial corpus of 72,098 YouTube comments posted by 43,859 users on 623 videos relevant to the crisis; (3) define a novel natural language processing task of attribution tie detection; and (4) design a neural classifier that achieves reasonable performance. We also release the first dataset for this novel task and important domain.
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain biases, as their efficacy is tied to the complexity of the adversary used during training. In response, we theoretically establish that by limiting the mutual information between representations and protected attributes, we can reliably control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators and show that it outperforms other existing approaches. We test our approach on the UCI Adult and Heritage Health datasets and show that it provides more informative representations across a range of desired parity thresholds while providing strong theoretical guarantees on the parity of any downstream algorithm.
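The link between a representation's dependence on the protected attribute and downstream parity can be illustrated with a small synthetic check. The data, the plug-in MI estimate on binarized representations, and the fixed threshold classifier are all illustrative assumptions, unrelated to the paper's contrastive estimators:

```python
import math
import random

random.seed(0)
n = 5000
a = [random.random() < 0.5 for _ in range(n)]                 # protected attribute
z_dep = [(1.0 if ai else 0.0) + random.gauss(0, 0.5) for ai in a]  # biased rep
z_ind = [random.gauss(0.5, 0.5) for _ in range(n)]            # independent rep

def predict(z):
    # fixed downstream threshold classifier
    return [zi > 0.5 for zi in z]

def parity_gap(yhat, a):
    """Statistical parity difference |P(yhat=1|a=1) - P(yhat=1|a=0)|."""
    p1 = sum(y for y, ai in zip(yhat, a) if ai) / sum(a)
    p0 = sum(y for y, ai in zip(yhat, a) if not ai) / (len(a) - sum(a))
    return abs(p1 - p0)

def mutual_info(x, y):
    """Plug-in mutual information (nats) between two binary variables."""
    m = len(x)
    mi = 0.0
    for xv in (False, True):
        for yv in (False, True):
            pxy = sum(1 for xi, yi in zip(x, y) if xi == xv and yi == yv) / m
            px = sum(1 for xi in x if xi == xv) / m
            py = sum(1 for yi in y if yi == yv) / m
            if pxy > 0:
                mi += pxy * math.log(pxy / (px * py))
    return mi

gap_dep = parity_gap(predict(z_dep), a)
gap_ind = parity_gap(predict(z_ind), a)
mi_dep = mutual_info([zi > 0.5 for zi in z_dep], a)
mi_ind = mutual_info([zi > 0.5 for zi in z_ind], a)
```

The representation that leaks the attribute yields both a large MI estimate and a large parity gap, while the independent one yields nearly zero of each, which is the qualitative relationship the paper's bound makes precise for any downstream classifier.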
Road networks are among the most essential components of a country’s infrastructure. By facilitating the movement and exchange of goods, people, and ideas, they support economic and cultural activity both within and across borders. Up-to-date mapping of the geographical distribution of roads and their quality is essential in high-impact applications ranging from land use planning to wilderness conservation. Mapping presents a particularly pressing challenge in developing countries, where documentation is poor and disproportionate amounts of road construction are expected to occur in the coming decades. We present a new crowd-sourced approach capable of assessing road quality, and we identify key challenges and opportunities in the transferability of deep learning based methods across domains.
Acoustic analysis is becoming a key element of environmental monitoring for wildlife conservation. Passive acoustic recorders can document a variety of vocal animals over large areas and long time horizons, paving the way for machine learning algorithms to identify individual species, estimate abundance, and evaluate ecosystem health. However, such techniques rely on finding meaningful characterizations of calls and soundscapes, capable of capturing complex spatiotemporal, taxonomic, and behavioral structure. This article reviews existing methods for computing informative lower-dimensional features in the context of terrestrial passive acoustic monitoring, and discusses directions for further work.
The presence of “big data” in higher education has led to the increasing popularity of predictive analytics for guiding various stakeholders on appropriate actions to support student success. In developing such applications, model selection is a central issue. As such, this study presents a comprehensive examination of five commonly used machine learning models in student success prediction. Using administrative and learning management system (LMS) data for nearly 2,000 college students at a public university, we employ the models to predict short-term and long-term academic success. Beyond the tradeoff between model interpretability and accuracy, we also focus on the fairness of these models with regard to different student populations. Our findings suggest that more interpretable models such as logistic regression do not necessarily compromise predictive accuracy, and that they introduce no more, and sometimes less, prediction bias against disadvantaged student groups than complicated models. Moreover, prediction biases against certain groups persist even in the fairest model. These results thus recommend using simpler algorithms, in conjunction with human evaluation, in instructional and institutional applications of student success prediction when valid student features are in place.