The disproportionate burden of obesity among low-socioeconomic status (SES), Black, and Latinx households underscores the widening health disparities in the United States. This epidemic can be prevented by prioritizing physical activity interventions for these populations. However, many technology-based physical activity interventions have been designed for individuals in isolation rather than for individuals within their social environments.
In this position paper, I report on my eight-year research and technology design process. Although this research was not explicitly guided by asset-based design principles, assets consistently emerged during the in-depth fieldwork, including family relationships and caregiving communities. These assets appeared to motivate engagement with health technologies and to enhance family physical activity self-efficacy. However, because my studies were not guided by asset-based design principles, some key assets may not have been sufficiently identified. Through this workshop, I seek to collaboratively explore how to support communities in identifying their assets, and how to translate those assets into technology designs that enhance health equity.
Physical activity (PA) is crucial for reducing the risk of obesity, an epidemic that disproportionately burdens families of low-socioeconomic status (SES). While fitness tracking tools can increase PA awareness, more work is needed to examine (1) how such tools can help people benefit from their social environment, and (2) how reflections can help enhance PA attitudes. We investigated how fitness tracking tools for families can support social modeling and self-modeling (through reflection), two critical processes in Social Cognitive Theory. We developed StoryMap, a novel fitness tracking app for families aimed at supporting both modes of modeling. Then, we conducted a five-week qualitative study evaluating StoryMap with 16 low-SES families. Our findings contribute an understanding of how social and self-modeling can be implemented in fitness tracking tools and how both modes of modeling can enhance key PA attitudes: self-efficacy and outcome expectations. Finally, we propose design recommendations for social personal informatics tools.
Clinical machine learning models experience significantly degraded performance on datasets not seen during training, e.g., new hospitals or populations. Recent developments in domain generalization offer a promising solution to this problem by creating models that learn invariances across environments. In this work, we benchmark the performance of eight domain generalization methods on multi-site clinical time series and medical imaging data. We introduce a framework to induce synthetic but realistic domain shifts and sampling bias, stress-testing these methods beyond existing non-healthcare benchmarks. We find that current domain generalization methods do not achieve significant gains in out-of-distribution performance over empirical risk minimization on real-world medical imaging data, in line with prior work on general imaging datasets. However, a subset of realistic induced-shift scenarios in clinical time series data exhibits limited performance gains. We characterize these scenarios in detail, and recommend best practices for domain generalization in the clinical setting.
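One plausible way to induce the kind of sampling bias described is to subsample one class at a single site, shifting P(label | site) while leaving the feature distribution within each class untouched. This is a hypothetical sketch; the paper's actual framework and shift types may differ.

```python
import random

# Hypothetical sketch: in the "shifted" site, positive examples are kept
# with low probability, inducing sampling bias across sites.
def induce_sampling_bias(examples, shifted_site, keep_pos=0.2, seed=0):
    """examples: list of dicts with 'site' and 'label' keys."""
    rng = random.Random(seed)
    out = []
    for ex in examples:
        if ex["site"] == shifted_site and ex["label"] == 1:
            if rng.random() < keep_pos:
                out.append(ex)  # positive at the shifted site: rarely kept
        else:
            out.append(ex)      # all other examples pass through unchanged
    return out

# 50 positives and 50 negatives per site before biasing.
data = [{"site": s, "label": l} for s in ("A", "B") for l in (0, 1)] * 50
biased = induce_sampling_bias(data, shifted_site="B")
pos_b = sum(1 for e in biased if e["site"] == "B" and e["label"] == 1)
print(pos_b < 50)  # positives at site B are heavily subsampled
```

A domain generalization method evaluated on the held-out shifted site would then face a realistic label-prevalence shift rather than a purely synthetic perturbation.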
Attribution of natural disasters and collective misfortunes is a widely studied social science problem. At present, most such studies rely on surveys or external signals such as voting outcomes. These surveys are typically costly to conduct and often have considerable turnaround time. In contrast, social media data is vastly cheaper to procure and can be obtained at varying spatiotemporal granularity. In this paper, we describe our recent work examining the viability of estimating attributions through social media discussions. To this end, we (1) focus on the 2019 Chennai water crisis, a major recent instance of environmental resource crisis; (2) construct a substantial corpus of 72,098 YouTube comments posted by 43,859 users on 623 videos relevant to the crisis; (3) define a novel natural language processing task of attribution tie detection; and (4) design a neural classifier that achieves reasonable performance. We also release the first data set for this novel task and important domain.
Controlling bias in training datasets is vital for ensuring equal treatment, or parity, between different groups in downstream applications. A naive solution is to transform the data so that it is statistically independent of group membership, but this may throw away too much information when a reasonable compromise between fairness and accuracy is desired. Another common approach is to limit the ability of a particular adversary who seeks to maximize parity. Unfortunately, representations produced by adversarial approaches may still retain biases, as their efficacy is tied to the complexity of the adversary used during training. Motivated by this, we theoretically establish that by limiting the mutual information between representations and protected attributes, we can assuredly control the parity of any downstream classifier. We demonstrate an effective method for controlling parity through mutual information based on contrastive information estimators and show that it outperforms other existing approaches. We test our approach on the UCI Adult and Heritage Health datasets and show that it provides more informative representations across a range of desired parity thresholds, while providing strong theoretical guarantees on the parity of any downstream algorithm.
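The parity being controlled can be made concrete as the statistical (demographic) parity gap of a downstream classifier. The following minimal sketch computes it; the function name and data are illustrative, not from the paper.

```python
# Statistical parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|.
# Bounding I(Z; A) for a representation Z bounds this gap for ANY
# classifier trained on Z, which is the paper's guarantee.
def parity_gap(predictions, groups):
    """predictions, groups: equal-length sequences of 0/1 values."""
    def positive_rate(g):
        members = [p for p, a in zip(predictions, groups) if a == g]
        return sum(members) / len(members)
    return abs(positive_rate(0) - positive_rate(1))

preds  = [1, 0, 1, 0, 1, 0]   # classifier outputs
groups = [0, 0, 0, 1, 1, 1]   # protected attribute
print(parity_gap(preds, groups))  # rates 2/3 vs 1/3 -> gap of 1/3
```

A mutual-information constraint on the representation translates into an upper bound on this gap, no matter which classifier is applied downstream.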
Road networks are among the most essential components of a country’s infrastructure. By facilitating the movement and exchange of goods, people, and ideas, they support economic and cultural activity both within and across borders. Up-to-date mapping of the geographical distribution of roads and their quality is essential in high-impact applications ranging from land use planning to wilderness conservation. Mapping presents a particularly pressing challenge in developing countries, where documentation is poor and disproportionate amounts of road construction are expected to occur in the coming decades. We present a new crowd-sourced approach capable of assessing road quality, and identify key challenges and opportunities in the transferability of deep learning based methods across domains.
Acoustic analysis is becoming a key element of environmental monitoring for wildlife conservation. Passive acoustic recorders can document a variety of vocal animals over large areas and long time horizons, paving the path for machine learning algorithms to identify individual species, estimate abundance, and evaluate ecosystem health. However, such techniques rely on finding meaningful characterizations of calls and soundscapes, capable of capturing complex spatiotemporal, taxonomic, and behavioral structure. This article reviews existing methods for computing informative lower-dimensional features in the context of terrestrial passive acoustic monitoring, and discusses directions for further work.
The presence of “big data” in higher education has led to the increasing popularity of predictive analytics for guiding various stakeholders on appropriate actions to support student success. In developing such applications, model selection is a central issue. As such, this study presents a comprehensive examination of five commonly used machine learning models in student success prediction. Using administrative and learning management system (LMS) data for nearly 2,000 college students at a public university, we employ the models to predict short-term and long-term academic success. Beyond the tradeoff between model interpretability and accuracy, we also focus on the fairness of these models with regard to different student populations. Our findings suggest that more interpretable models such as logistic regression do not necessarily compromise predictive accuracy; nor do they introduce more prediction bias against disadvantaged student groups than more complex models. Moreover, prediction biases against certain groups persist even in the fairest model. These results recommend using simpler algorithms, in conjunction with human evaluation, in instructional and institutional applications of student success prediction when valid student features are in place.
Bollywood, the Mumbai film industry, is one of the biggest movie industries in the world. With a current market worth 2.1 billion dollars and a target audience of 1.2 billion people, Bollywood is a formidable entertainment force. While Bollywood's potential impact, in terms of the lives it can touch, is mammoth, no NLP study of social biases in Bollywood content exists. We thus seek to understand social biases in a developing country through the lens of popular movies. Our argument is simple: popular movie content reflects social norms and beliefs in some shape or form. We present our preliminary findings on a longitudinal corpus of English subtitles of popular Bollywood movies, focusing on (1) social bias toward fair skin color, (2) gender bias, and (3) gender representation. We contrast our findings with a similar corpus of Hollywood movies.
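One simple way to operationalize a gender-representation analysis over subtitles is gendered pronoun counting. This is a hypothetical sketch; the paper's actual metrics may differ.

```python
import re
from collections import Counter

# Illustrative proxy for gender representation: counts of male vs.
# female pronoun mentions in a subtitle corpus.
MALE = {"he", "him", "his"}
FEMALE = {"she", "her", "hers"}

def pronoun_counts(subtitle_text):
    """Return (male_mentions, female_mentions) for a block of subtitles."""
    tokens = re.findall(r"[a-z']+", subtitle_text.lower())
    counts = Counter(t for t in tokens if t in MALE | FEMALE)
    male = sum(counts[t] for t in MALE)
    female = sum(counts[t] for t in FEMALE)
    return male, female

m, f = pronoun_counts("He said she would give him her answer.")
print(m, f)  # 2 2
```

Aggregated per movie and per release year, such counts support the longitudinal contrast with a Hollywood corpus that the abstract describes.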
Research has shown that the suicide rate can increase by 13% when suicide is not reported responsibly; irresponsible reporting includes, for example, specific details that depict suicide methods. To promote more responsible journalism and save lives, we propose a novel problem called “suicide method detection”: determining whether a sentence in a news article contains a description of a suicide method. Our results show two promising approaches: a rule-based approach using category pattern matching, and a BERT model with data augmentation, both of which reach over 0.9 in F-measure.
There is strong demand for more complete and better data on diversity in the screen industries. Focusing on on-screen diversity and representation in the UK, the evidence base around on-screen representation has so far been narrow. Diversity evaluation needs to consider more than on-screen presence; it should also consider prominence and portrayal. This position paper discusses the ethics of applying computer vision to study on-screen characters via a conceptual framework of on-screen diversity metrics. Computer vision should be applied to identify character occurrences rather than to classify demographics. An illustrative example of measuring character prominence using a short video clip is shown. The paper concludes with four areas of application where adopting computational methods can create a measurably more inclusive and representative broadcast landscape.
Sociotechnical systems within cities are now equipped with machine learning algorithms in hopes of increasing efficiency and functionality by modeling and predicting trends. Machine learning algorithms have been applied in these domains to address challenges such as balancing the distribution of bikes throughout a city and identifying demand hotspots for ride-sharing drivers. However, when applied to challenges in sociotechnical systems, these algorithms have exacerbated social inequalities due to bias in existing data sets or the lack of data from marginalized communities. In this paper, I address how smart mobility initiatives in cities use machine learning algorithms to address challenges, and how these algorithms unintentionally discriminate based on features such as socioeconomic status, motivating the importance of algorithmic fairness. Using the bike sharing program in Pittsburgh, PA, I present a position on how discrimination can be eliminated from the pipeline using Bayesian optimization.
Nigerian Pidgin, an adaptation of English, has evolved over the years through multi-language code switching, code mixing, and linguistic adaptation. While Pidgin preserves many words from the standard English corpus, in both spelling and pronunciation, the fundamental meanings of these words have changed significantly. For example, ‘ginger’ is not a plant but an expression of motivation, and ‘tank’ is not a container but an expression of gratitude. The implication is that the current approach of applying standard English sentiment analysis directly to social media text from Nigeria is sub-optimal, as it cannot capture the semantic variation and contextual evolution in the contemporary meanings of these words. In practice, while many words in Nigerian Pidgin are the same as in standard English, sentiment analysis models built for full English are not designed to capture the full intent of Nigerian Pidgin when used alone or code-mixed. By augmenting scarce human-labelled code-switched text with ample synthetic code-reformatted text and meaning, we achieve significant improvements in sentiment scoring. Our research explores how to understand sentiment in an intrasentential code-mixing and code-switching context with significant word localization. This work presents 300 VADER-lexicon-compatible Nigerian Pidgin sentiment tokens with their scores, and 14,000 gold-standard Nigerian Pidgin tweets with sentiment labels.
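The lexicon-augmentation idea can be sketched as follows. The scoring function below is a simplified stand-in for VADER's, and the valence scores are hypothetical; only the example words come from the abstract.

```python
# A simplified lexicon-based scorer: average valence over matched tokens.
# Valence values below are illustrative, not from the released lexicon.
BASE_LEXICON = {"good": 1.9, "bad": -2.5}           # standard-English entries
PIDGIN_LEXICON = {"ginger": 1.5, "tank": 1.8}       # 'ginger' ~ motivation,
                                                    # 'tank' ~ gratitude

def score(text, lexicon):
    """Mean valence of tokens found in the lexicon; 0.0 if none match."""
    hits = [lexicon[t] for t in text.lower().split() if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

augmented = {**BASE_LEXICON, **PIDGIN_LEXICON}
print(score("dem ginger me well well", BASE_LEXICON))  # 0.0: word unknown
print(score("dem ginger me well well", augmented))     # 1.5: positive
```

The contrast illustrates the abstract's point: without Pidgin entries, a standard-English lexicon misses the positive intent entirely.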
Artificial intelligence (AI) is already part of our daily lives and is playing a key role in defining the economic and social shape of the future. In 2018, the European Commission introduced an AI strategy intended to compete in the coming years with world powers such as China and the US, while remaining grounded in respect for European values and fundamental rights. As a result, most Member States have published their own national strategies with the aim of contributing to a coordinated plan for Europe. In this paper, we present an ongoing study of how European countries are approaching the field of artificial intelligence, with its promises and risks, through the lens of their national AI strategies. In particular, we investigate how European countries are investing in AI and to what extent the stated plans can benefit society as a whole. The paper reports the main findings of a qualitative analysis of the investment plans reported in 15 European national strategies.
Rotating savings and credit associations (roscas) are informal financial organizations common in settings where communities have reduced access to formal financial institutions. In a rosca, a fixed group of participants regularly contribute small sums of money to a pool. This pool is then allocated periodically using lotteries or auction mechanisms. Roscas are empirically well-studied in the development economics literature. Due to their dynamic nature, however, roscas have proven challenging to examine theoretically. Theoretical analyses within economics have made strong assumptions about features such as the number or homogeneity of participants, the information they possess, their value for saving across time, or the number of rounds. This work presents an algorithmic study of roscas. We use techniques from the price of anarchy in auctions to characterize their welfare properties under less restrictive assumptions than previous work. Using the smoothness framework of [Syrgkanis and Tardos, 2013], we show that most common auction-based roscas have equilibrium welfare within a constant factor of the best possible. This evidence further rationalizes these organizations’ prevalence. Roscas present many further questions where algorithmic game theory may be helpful; we discuss several promising directions.
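The smoothness guarantee invoked here can be stated compactly (a sketch of the standard definition, not of this paper's specific modeling of roscas). A mechanism is $(\lambda, \mu)$-smooth if for every valuation profile $v$ there exist deviation strategies $a_i^*(v)$ such that, for every action profile $a$,
$$\sum_i u_i\big(a_i^*(v), a_{-i}; v_i\big) \;\ge\; \lambda \,\mathrm{OPT}(v) \;-\; \mu \sum_i p_i(a),$$
where $u_i$ is player $i$'s quasilinear utility and $p_i(a)$ is their payment. Syrgkanis and Tardos show that every (coarse correlated) equilibrium of a $(\lambda, \mu)$-smooth mechanism has expected welfare at least $\frac{\lambda}{\max(1, \mu)}\,\mathrm{OPT}(v)$, which is the source of the constant-factor welfare bound.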