Sociotechnical systems within cities are now equipped with machine learning algorithms in the hope of increasing efficiency and functionality by modeling and predicting trends. Machine learning algorithms have been applied in these domains to address challenges such as balancing the distribution of bikes throughout a city and identifying demand hotspots for ride-sharing drivers. However, when applied to challenges in sociotechnical systems, these algorithms have exacerbated social inequalities due to pre-existing bias in datasets or the lack of data from marginalized communities. In this paper, I will address how smart mobility initiatives in cities use machine learning algorithms to address challenges. I will also address how these algorithms unintentionally discriminate on features such as socioeconomic status, to motivate the importance of algorithmic fairness. Using the bike sharing program in Pittsburgh, PA, I will present a position on how discrimination can be eliminated from the pipeline using Bayesian Optimization.
Nigerian Pidgin, an English-based adaptation, has evolved over the years through multi-language code switching, code mixing and linguistic adaptation. While Pidgin preserves many of the words in the standard English corpus, both in spelling and pronunciation, the fundamental meaning of these words has changed significantly. For example, ‘ginger’ is not a plant but an expression of motivation, and ‘tank’ is not a container but an expression of gratitude. The implication is that the current approach of applying direct English sentiment analysis to social media text from Nigeria is sub-optimal, as it cannot capture the semantic variation and contextual evolution in the contemporary meaning of these words. In practice, while many words in the Nigerian Pidgin adaptation are the same as in standard English, sentiment analysis models built for standard English are not designed to capture the full intent of Nigerian Pidgin when used alone or code-mixed. By augmenting scarce human-labelled code-switched text with ample synthetic code-reformatted text and meaning, we achieve significant improvements in sentiment scoring. Our research explores how to understand sentiment in an intrasentential code mixing and switching context where there has been significant word localization. This work presents 300 VADER-lexicon-compatible Nigerian Pidgin sentiment tokens with their scores, and 14,000 gold-standard Nigerian Pidgin tweets with their sentiment labels.
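The core idea of a VADER-compatible lexicon extension can be illustrated with a minimal, self-contained scorer. The tokens and valence values below are illustrative assumptions for the sketch, not entries from the 300-token lexicon the paper releases:

```python
# Minimal sketch of lexicon augmentation for Pidgin sentiment scoring.
# Valence values are hypothetical, chosen only to illustrate the idea.
english_lexicon = {"good": 1.9, "bad": -2.5, "love": 3.2}

pidgin_lexicon = {
    "ginger": 2.1,   # motivation/encouragement, not the plant
    "tank": 1.8,     # gratitude ("thank"), not a container
    "wahala": -2.0,  # trouble
}

def sentiment_score(text, lexicon):
    """Average valence of in-lexicon tokens; 0.0 if none match."""
    tokens = text.lower().split()
    hits = [lexicon[t] for t in tokens if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

augmented = {**english_lexicon, **pidgin_lexicon}
print(sentiment_score("dem ginger us well well", english_lexicon))  # 0.0
print(sentiment_score("dem ginger us well well", augmented))        # 2.1
```

The English-only lexicon misses the localized meaning entirely; the augmented lexicon recovers the positive sentiment.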
Artificial intelligence (AI) is already part of our daily lives and is playing a key role in defining the economic and social shape of the future. In 2018, the European Commission introduced its AI strategy, intended to compete in the coming years with world powers such as China and the US while relying on respect for European values and fundamental rights. As a result, most of the Member States have published their own National Strategies with the aim of working toward a coordinated plan for Europe. In this paper, we present an ongoing study on how European countries are approaching the field of Artificial Intelligence, with its promises and risks, through the lens of their national AI strategies. In particular, we aim to investigate how European countries are investing in AI and to what extent the stated plans can contribute to the benefit of the whole society. This paper reports the main findings of a qualitative analysis of the investment plans reported in 15 European National Strategies.
Rotating savings and credit associations (roscas) are informal financial organizations common in settings where communities have reduced access to formal financial institutions. In a rosca, a fixed group of participants regularly contribute small sums of money to a pool. This pool is then allocated periodically using lotteries or auction mechanisms. Roscas are empirically well-studied in the development economics literature. Due to their dynamic nature, however, roscas have proven challenging to examine theoretically. Theoretical analyses within economics have made strong assumptions about features such as the number or homogeneity of participants, the information they possess, their value for saving across time, or the number of rounds. This work presents an algorithmic study of roscas. We use techniques from the price-of-anarchy literature on auctions to characterize their welfare properties under less restrictive assumptions than previous work. Using the smoothness framework of [Syrgkanis and Tardos, 2013], we show that most common auction-based roscas have equilibrium welfare within a constant factor of the best possible. This evidence further rationalizes these organizations’ prevalence. Roscas present many further questions where algorithmic game theory may be helpful; we discuss several promising directions.
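For reference, the smoothness framework invoked here can be summarized as follows (notation recalled from Syrgkanis and Tardos [2013]; this is a sketch of the standard statement, not the paper's exact formulation):

```latex
% A mechanism is (\lambda, \mu)-smooth if for every valuation profile v
% there exist deviation strategies s_i^*(v) such that, for every
% strategy profile s,
\sum_i u_i\big(s_i^*(v),\, s_{-i}\big)
  \;\ge\; \lambda\,\mathrm{OPT}(v) \;-\; \mu \sum_i P_i(s),
% where P_i(s) denotes player i's payment. Smoothness then implies that
% the welfare at every (coarse) correlated equilibrium s satisfies
\mathrm{Welfare}(s) \;\ge\; \frac{\lambda}{\max(1,\mu)}\,\mathrm{OPT}(v).
```

A constant-factor welfare guarantee for auction-based roscas then follows from exhibiting suitable constants $(\lambda, \mu)$ for the auction formats used.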
We present a large-scale case study using an agent platform that facilitated and gathered public opinions in internet-based town discussions. The hypothesis tests how agent-mediated argumentative messages shape the discussion structure in “issue-giving” and “issue-solving” themed discussions involving human participants. The agent facilitation mechanism dynamically reacts to participants by moderating and supporting them from an “issue-solving” stance in both discussion types. We conducted two large-scale experiments to evaluate the influence of agent mediation across both discussion themes. The first experiment, themed “issue-giving”, involved 188 participants; the second, themed “issue-solving”, involved 1,076 citizens from Afghanistan. The goal of the first experiment was to gather insights about the scale of the issues residents face in districts 1 and 2. The goal of the second experiment was to gather insights about the scale of issues and their solutions. In the first experiment, we found that because participants began from the “issue-giving” theme stance, their first posts were issues, so the “issue-giving” theme increased the number of issues; however, when the agent started posting facilitation messages, participants’ stance shifted from issue-giving to issue-solving. In the second experiment, participants’ stance remained the same as the theme type.
In this paper we consider the problem of building Boolean rule sets in disjunctive normal form (DNF), an interpretable model for binary classification, subject to fairness constraints. We formulate the problem as an integer program that maximizes classification accuracy with explicit constraints on two different measures of classification parity: equality of opportunity, and equalized odds. A column generation framework, with a novel formulation, is used to efficiently search over exponentially many possible rules, eliminating the need for heuristic rule mining. Compared to CART and Logistic Regression, two interpretable machine learning algorithms, our method produces interpretable classifiers that have superior performance with respect to both fairness metrics.
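What a DNF rule set classifier computes is simple to state in code: predict positive if any conjunction of binary features fires. The rules below are hypothetical placeholders; in the paper they are the columns selected by the integer program under fairness constraints:

```python
# Sketch of a DNF rule-set classifier over binary features.
# Each clause is a tuple of feature indices that must all equal 1;
# the classifier predicts 1 if at least one clause fires.
def dnf_predict(x, rules):
    """x: dict mapping feature index -> 0/1; rules: list of clauses."""
    return int(any(all(x[i] for i in clause) for clause in rules))

rules = [(0, 2), (1,)]  # (x0 AND x2) OR x1 -- illustrative only
print(dnf_predict({0: 1, 1: 0, 2: 1}, rules))  # 1
print(dnf_predict({0: 1, 1: 0, 2: 0}, rules))  # 0
```

The interpretability claim rests exactly on this structure: each prediction is explained by naming the clause that fired.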
Disease dynamics, human mobility, and public policies co-evolve during a pandemic such as COVID-19. Understanding dynamic human mobility changes and spatial interaction patterns is crucial for understanding and forecasting COVID-19 dynamics. We introduce a novel graph-based neural network (GNN) to incorporate global aggregated mobility flows for a better understanding of the impact of human mobility on COVID-19 dynamics as well as better forecasting of disease dynamics. We propose a recurrent message passing graph neural network that embeds spatio-temporal disease dynamics and human mobility dynamics for daily state-level new confirmed case forecasting. This work represents one of the early papers on the use of GNNs to forecast COVID-19 incidence dynamics, and our methods are competitive with existing methods. We show that the spatial and temporal dynamic mobility graph leveraged by the graph neural network enables better long-term forecasting performance compared to baselines.
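The basic operation of message passing over a mobility graph can be sketched in a few lines: each state aggregates flow-weighted messages from its neighbors and updates its own features. This toy layer is an assumption-laden illustration of the general technique, not the paper's architecture, and the flow matrix is made up:

```python
# One message-passing step on a mobility graph: node features are
# updated with flow-weighted neighbor messages (toy numbers).
import numpy as np

def message_pass(H, W_flow, W_msg):
    """H: (n, d) node features; W_flow: (n, n) mobility flows;
    W_msg: (d, d) weight matrix. Returns updated (n, d) features."""
    # Row-normalize flows so each node takes a weighted mean of messages.
    norm = W_flow / W_flow.sum(axis=1, keepdims=True)
    messages = norm @ H @ W_msg
    return np.tanh(H + messages)  # residual update with nonlinearity

H = np.ones((3, 2))                                   # 3 states, 2 features
W_flow = np.array([[0., 2., 1.], [1., 0., 1.], [3., 1., 0.]])
W_msg = np.eye(2) * 0.5
print(message_pass(H, W_flow, W_msg).shape)  # (3, 2)
```

Stacking such layers inside a recurrent cell is what lets the model couple spatial mobility structure with temporal disease dynamics.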
Online search engines are an extremely popular tool for seeking information. However, the results returned sometimes exhibit undesirable or even wrongful forms of bias, such as with respect to gender or race. In this paper, we consider the problem of fair keyword recommendation, in which the goal is to suggest keywords that are relevant to a user’s search query, but exhibit less (or opposite) bias. We present a multi-objective method using word embedding to suggest alternate keywords for biased keywords present in a search query. We perform a qualitative analysis on pairs of subReddits from Reddit.com (e.g., r/AskMen vs. r/AskWomen, r/Republican vs. r/democrats). Our results demonstrate the efficacy of the proposed method and illustrate subtle linguistic differences between subReddits.
Fair representation learning is an important task in many real-world domains, with the goal of finding a performant model that obeys fairness requirements. We present an adversarial representation learning algorithm that learns an informative representation while not exposing sensitive features. Our goal is to train an embedding such that it has good performance on a target task while not exposing sensitive information as measured by the performance of an optimally trained adversary. Our approach directly trains the embedding with these dual objectives in mind by implicitly differentiating through the optimal adversary’s training procedure. To this end, we derive implicit gradients of the optimal logistic regression parameters with respect to the input training embeddings, and use the fully-trained logistic regression as an adversary. As a result, we are able to train a model without alternating min-max optimization, leading to better training stability and improved performance. Given the flexibility of our module for differentiable programming, we evaluate the impact of using implicit gradients in two adversarial fairness-centric formulations. We present quantitative results on the trade-offs of target and fairness tasks in several real-world domains.
This paper focuses on learning index-based policies in restless multi-armed bandits (RMAB), with applications to public health concerns such as maternal health. Maternal health refers to the health of women during pregnancy, childbirth, and the postnatal period. Although maternal health has received significant attention [World Health Organization, 2015], the number of maternal deaths remains unacceptably high, mainly because of delays in obtaining adequate care [Thaddeus and Maine, 1994]. Most maternal deaths can be prevented by providing timely preventive care information. However, such information is not easily accessible to underprivileged and low-income communities. To ensure timely information, a non-profit organization called ARMMAN carries out a free call-based program called mMitra for spreading preventive care information among pregnant women. Enrollment in this program happens through hospitals and non-government organizations. Each enrolled woman receives around 140 automated voice calls, throughout her pregnancy and up to 12 months after childbirth. Each call equips women with critical life-saving healthcare information. This program provides support for around 80 weeks. To achieve the vision of improving the well-being of the enrolled women, it is important to ensure that they listen to most of the information sent to them via the automated calls. However, the organization observed that, for many women, engagement (i.e., the overall time they spend listening to the automated calls) gradually decreases. One way to improve engagement is by providing an intervention (a personal visit by a healthcare worker). These interventions require the dedicated time of the health workers, which is often limited. Thus, only a small fraction of the enrolled women can be provided with interventions during a given time period.
Moreover, the extent to which engagement improves upon intervention varies among individuals. Hence, it is important to carefully choose the beneficiaries who should be provided interventions at a particular time period. This is a challenging problem owing to multiple key reasons: (i) engagement of individual beneficiaries is uncertain and changes organically over time; (ii) improvement in a beneficiary's engagement post-intervention is uncertain; (iii) decision making with respect to interventions (which beneficiaries should receive an intervention) is sequential, i.e., decisions at one step affect the state of beneficiaries and the decisions to be taken at the next step; (iv) the number of interventions is budgeted and is significantly smaller than the total number of beneficiaries. Due to the uncertainty, the sequential nature of decision making, and the weak dependency among patients through a budget, existing research [Lee et al., 2019; Mate et al., 2020; Bhattacharya, 2018] in health interventions has justifiably employed RMABs. However, existing research focuses on the planning problem, assuming a priori knowledge of the underlying uncertainty model, which can be quite challenging to obtain. Thus, we focus on learning intervention decisions in the absence of knowledge of the underlying uncertainty.
In this work, we look at cases where we want a classifier to be both fair and interpretable, and find that it is necessary to make trade-offs between these two properties. We present theoretical results demonstrating the tension between the two requirements. More specifically, we consider a formal framework for building simple classifiers as a means to attain interpretability, and show that simple classifiers are strictly improvable, in the sense that every simple classifier can be replaced by a more complex classifier that strictly improves both fairness and accuracy.
The concerns of fairness and privacy in machine-learning-based systems have received a lot of attention in the research community recently, but have primarily been studied in isolation. In this work, we look at cases where we want to satisfy both properties simultaneously, and find that it may be necessary to make trade-offs between them. We prove a theoretical result demonstrating this, which considers the compatibility between fairness and differential privacy of learning algorithms. In particular, we prove an impossibility theorem showing that, even in simple binary classification settings, one cannot design an accurate learning algorithm that is both ε-differentially private and fair (even approximately).
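For reference, the privacy notion in such impossibility results is the standard one:

```latex
% A randomized learning algorithm A is \varepsilon-differentially
% private if, for any two datasets D, D' differing in a single record
% and any measurable set S of outputs,
\Pr\big[A(D) \in S\big] \;\le\; e^{\varepsilon}\, \Pr\big[A(D') \in S\big].
```

Intuitively, the algorithm's output distribution must be nearly insensitive to any single individual's record, which is precisely the per-individual insensitivity that fairness constraints tied to group membership can conflict with.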
This position paper discusses the challenges of allocating legal and ethical responsibility to stakeholders when artificially intelligent systems (AISs) are used in clinical decision making, and offers one possible solution. Clinicians have been identified as at risk of being subject to the tort of negligence if a patient is harmed as a result of their using an AIS in clinical decision making. An ethical model of prospective and retrospective personal moral responsibility is suggested to avoid clinicians being treated as a ‘moral crumple zone’. The adoption of risk pooling could support a shared model of responsibility that promotes both prospective and retrospective personal moral responsibility whilst avoiding the need for negligence claims.
Ethical AI spans a gamut of considerations. Among these, the most popular ones, fairness and interpretability, have remained largely distinct in technical pursuits. We discuss and elucidate the differences between fairness and interpretability across a variety of dimensions. Further, we develop two principles-based frameworks for developing ethical AI of the future that embrace aspects of both fairness and interpretability. First, interpretability for fairness proposes instantiating interpretability within the realm of fairness to develop a new breed of ethical AI. Second, fairness and interpretability initiates deliberations on bringing the best aspects of both together. We hope that these two frameworks will contribute to intensifying scholarly discussions on new frontiers of ethical AI that bring together fairness and interpretability.
A robust testing program is necessary for containing the spread of COVID-19 infections before a vaccine becomes available. However, due to an acute shortage of testing kits (especially in low-resource developing countries), designing an optimal testing program/strategy is a challenging problem to solve. Prior literature on testing strategies suffers from two major limitations: (i) it does not account for the trade-off between testing of symptomatic and asymptomatic individuals, and (ii) it primarily focuses on static testing strategies, which leads to significant shortcomings in the testing program’s effectiveness. In this paper, we introduce a scalable Monte Carlo tree search based algorithm named DOCTOR, and use it to generate optimal testing strategies for COVID-19. In our experiments, DOCTOR’s strategies result in ∼40% fewer COVID-19 infections (over one month) compared to state-of-the-art static baselines. Our work complements the growing body of research on COVID-19, and serves as a proof-of-concept that illustrates the benefit of having an AI-driven adaptive testing strategy for COVID-19.
Having reliable and up-to-date poverty data is a prerequisite for monitoring the United Nations Sustainable Development Goals (SDGs) and for planning effective poverty reduction interventions. Unfortunately, traditional data sources are often outdated or lack appropriate disaggregation. As a remedy, satellite imagery has recently become prominent for obtaining geographically fine-grained and up-to-date poverty estimates. Satellite data can pick up signals of economic activity by detecting light at night, signals of development status by detecting infrastructure such as roads, and signals of individual household wealth by detecting different building footprints and roof types. It cannot, however, look inside households and pick up signals from individuals. On the other hand, alternative data sources such as audience estimates from Facebook’s advertising platform provide insights into the devices and internet connection types used by individuals in different locations. Previous work has shown the value of such anonymous, publicly-accessible advertising data from Facebook for studying migration, gender gaps, crime rates, and health, among others. In this work, we evaluate the added value of using Facebook data over satellite data for mapping socioeconomic development in two low- and middle-income countries – the Philippines and India. We show that Facebook features perform roughly similarly to satellite data in the Philippines, with added value for urban locations. In India, however, where Facebook penetration is lower, satellite data perform better.
Traditionally, research into the fair allocation of indivisible goods has focused on individual fairness and group fairness. In this paper, we explore the co-existence of individual envy-freeness (i-EF) and its group counterpart, group weighted envy-freeness (g-WEF). We propose several polynomial-time algorithms that can provably achieve i-EF and g-WEF simultaneously in various degrees of approximation under three different conditions on the agents’ valuation functions: (i) when agents have identical additive valuation functions, i-EFX and g-WEF1 can be achieved simultaneously; (ii) when agents within a group share a common valuation function, an allocation satisfying both i-EF1 and g-WEF1 exists; and (iii) when agents’ valuations for goods within a group differ, we show that while maintaining i-EF1, we can achieve a 1/3-approximation to g-WEF1 in expectation. In addition, we introduce several novel fairness characterizations that exploit inherent group structures and their relation to individuals, such as proportional envy-freeness and group stability. We show that our algorithms can guarantee these properties approximately in polynomial time. Our results thus provide a first step toward connecting individual and group fairness in the allocation of indivisible goods.
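For the identical-additive-valuations setting in condition (i), a standard greedy routine illustrates how strong individual guarantees become tractable: assign goods in decreasing value to the agent with the currently smallest bundle. This is a well-known technique for identical valuations, shown here only for illustration; it is not claimed to be the paper's algorithm:

```python
# Greedy allocation under identical additive valuations: goods in
# decreasing value go to the agent whose bundle is currently smallest.
def greedy_allocate(values, n_agents):
    """values[g]: common value of good g. Returns one bundle per agent."""
    bundles = [[] for _ in range(n_agents)]
    totals = [0.0] * n_agents
    for g in sorted(range(len(values)), key=lambda g: -values[g]):
        i = min(range(n_agents), key=lambda i: totals[i])
        bundles[i].append(g)
        totals[i] += values[g]
    return bundles

print(greedy_allocate([8, 5, 4, 3, 1], 2))  # [[0, 3], [1, 2, 4]]
```

With the toy values above, the bundle totals are 11 and 10, so no agent envies another after removing any single good from the other's bundle.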
We analyze statistical discrimination using a multi-armed bandit model where myopic firms face candidate workers arriving with heterogeneous observable characteristics. The association between a worker’s skill and characteristics is unknown ex ante; thus, firms need to learn it. In such an environment, laissez-faire may result in a highly unfair and inefficient outcome: myopic firms are reluctant to hire minority workers because the lack of data about minority workers prevents accurate estimation of their performance. Consequently, minority groups could be perpetually underestimated: they are never hired, and therefore, data about them is never accumulated. We prove that this problem becomes more serious when the population ratio is imbalanced, as is the case in many extant discrimination problems. We consider two affirmative-action policies for solving this dilemma: one is a subsidy rule based on the popular upper confidence bound algorithm, and the other is the Rooney Rule, which requires firms to interview at least one minority worker for each hiring opportunity. Our results indicate that temporary affirmative actions are effective against statistical discrimination caused by data insufficiency.
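The upper confidence bound idea underlying the subsidy rule can be sketched with the classic UCB1 algorithm on Bernoulli arms: the optimism bonus forces exploration of under-sampled arms, the same mechanism that counteracts never hiring an under-observed group. The arm means below are hypothetical; this is the textbook bandit algorithm, not the paper's hiring model:

```python
# UCB1 sketch: empirical mean plus an optimism bonus that shrinks as
# an arm accumulates samples, guaranteeing every arm keeps being tried.
import math, random

def ucb1(means, horizon, seed=0):
    rng = random.Random(seed)
    n = len(means)
    counts, sums = [0] * n, [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:                      # initialize: play each arm once
            arm = t - 1
        else:                           # pick arm with best optimistic index
            arm = max(range(n), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

print(ucb1([0.3, 0.7], horizon=2000))  # the 0.7 arm dominates the pulls
```

The subsidy rule in the paper builds on this principle: pricing in the information value of sampling an under-observed group rather than leaving exploration to myopic firms.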
Artificial Intelligence (AI), as a collection of technologies, but more so as a growing component of the global mode of production, has a significant impact on gender, specifically gendered labour. In this position paper we argue that the dominant aspect of the AI industry’s impact on gender is not the production and reproduction of epistemic biases, which is the focus of contemporary research, but rather a material impact. We draw attention to how, as part of a larger economic structure, the AI industry is altering the nature of work, expanding platformisation, and thus increasing precarity, which is pushing women out of the labour force. We state that this is a neglected concern and a specific challenge worthy of attention for the AI research community.
Fine-grained classification aims at distinguishing between items with similar global appearance and patterns but that differ in minute details. The primary challenges come from both small inter-class variations and large intra-class variations. In this article, we propose to combine several innovations to improve fine-grained classification within the use-case of fauna, which is of practical interest for experts. We utilize geo-spatiotemporal data to enrich the image information and further improve performance. We also investigate state-of-the-art methods for handling the imbalanced-data issue.