Publications

    Finocchiaro J, Abebe R, Shirali A. Participatory Objective Design via Preference Elicitation, in Fairness, Accountability, and Transparency (FAccT). Rio de Janeiro: ACM; Forthcoming. Abstract

    In standard resource allocation problems, the designer sets the objective---such as utilitarian social welfare---that captures a societal goal and solves for the optimal allocation subject to fairness and item availability constraints. The participants, on the other hand, specify their preferences for the items being allocated, e.g., through stating how they rank the items or expressing their cardinal utility for each item. The objective function, which guides the overall allocation, is therefore determined by the designer in a top-down manner, whereas participants can only express their preferences for the items. This standard preference elicitation stage limits participants' ability to express preferences for the overall allocation, such as the level of inequality, and influence the overall objective function.

    In this work, we examine whether it is possible to use this bottom-up preference elicitation stage to enable participants to express not only their preferences for individual items but also their preferences for the overall allocation, thereby indirectly influencing the objective function. We examine this question using a well-studied resource allocation problem where m divisible items must be allocated to n agents, who express their cardinal utilities over the items. The designer aims to optimize for the sum of the agents' utilities for the items they receive. In particular, this utilitarian objective is agnostic to the overall inequality level. We consider a setting where the agents' true utility is a function not only of their preferences for the items, but also of the overall level of inequality. We model this using a popular social preference model from behavioral economics due to Fehr and Schmidt (1999), where agents can express levels of inequality aversion.

    We conduct a theoretical examination of this problem and show that there can be large gains in social welfare if the designer uses this richer inequality-aware preference model, instead of the standard inequality-agnostic preference model. Further, if we take the standard inequality-agnostic welfare as the benchmark, we show that the relative loss of welfare can be tightly bounded: the bound is independent of the number of agents and linear in the level of inequality aversion. With further assumptions on the preferences, we provide strictly tighter, distribution-free, and parametric bounds on the loss of welfare. We also discuss the worst-case drop in inequality-agnostic utility an agent might incur as a consequence of a designer allocating items using the inequality-averse preferences. We conclude with a discussion on possible designs to elicit the preferences of strategic agents over the goods and over fairness. Taken together, our results argue for potentially large gains that can be obtained from using the richer social preference model and demonstrate the relatively minor losses from using the standard model, highlighting a promising avenue for using preference elicitation to empower participants to influence the overall objective function.
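
    The Fehr and Schmidt (1999) model referenced above has a simple closed form, and a minimal sketch may help make the inequality-averse utility concrete. The function below is our illustration (the parameter names alpha and beta are ours), not code from the paper.

        # Sketch of the Fehr-Schmidt (1999) inequality-averse utility.
        # alpha: aversion to disadvantageous inequality (others receive more than agent i)
        # beta:  aversion to advantageous inequality (agent i receives more than others)
        def fehr_schmidt_utility(payoffs, i, alpha, beta):
            """Inequality-adjusted utility of agent i given all agents' payoffs."""
            n = len(payoffs)
            x_i = payoffs[i]
            envy = sum(max(x_j - x_i, 0.0) for j, x_j in enumerate(payoffs) if j != i)
            guilt = sum(max(x_i - x_j, 0.0) for j, x_j in enumerate(payoffs) if j != i)
            return x_i - alpha * envy / (n - 1) - beta * guilt / (n - 1)

        # Example: a very unequal allocation lowers agent 0's utility even though
        # agent 0's own payoff is unchanged (prints -1.0 for payoffs [1, 5, 5]).
        print(fehr_schmidt_utility([1.0, 5.0, 5.0], i=0, alpha=0.5, beta=0.25))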

    Gowda S, Joshi S, Zhang H, Ghassemi M. Pulling Up by the Causal Bootstraps: Causal Data Augmentation for Pre-training Debiasing. Forthcoming. Abstract
    Machine learning models achieve state-of-the-art performance on many supervised learning tasks. However, prior evidence suggests that these models may learn to rely on shortcut biases or spurious correlations (intuitively, correlations that hold in the training data but not at test time) for good predictive performance. Such models cannot be trusted in deployment environments to provide accurate predictions. While viewing the problem from a causal lens is known to be useful, the seamless integration of causation techniques into machine learning pipelines remains cumbersome and expensive. In this work, we study and extend a causal pre-training debiasing technique called causal bootstrapping (CB) under five practical confounded-data generation-acquisition scenarios (with known and unknown confounding). Under these settings, we systematically investigate the effect of confounding bias on deep learning model performance, demonstrating the models' propensity to rely on shortcut biases when these biases are not properly accounted for. We demonstrate that such a causal pre-training technique can significantly outperform existing base practices to mitigate confounding bias on real-world domain generalization benchmarking tasks. This systematic investigation underlines the importance of accounting for the underlying data-generating mechanisms and fortifying data-preprocessing pipelines with a causal framework to develop methods robust to confounding biases.
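
    As a rough illustration of confounder-aware resampling in the spirit of causal bootstrapping (our sketch under simplifying assumptions, not the paper's algorithm), the snippet below resamples a training set with back-door weights p(y)/p(y|z), assuming a single observed discrete confounder z, so that the resampled data approximately follow the interventional distribution p(x, z | do(y)) rather than the observational p(x, z | y).

        import numpy as np

        # Hedged sketch: resample indices with back-door weights p(y) / p(y | z),
        # which equal p(z) / p(z | y), assuming one observed discrete confounder z.
        def causal_bootstrap_indices(y, z, rng=None):
            rng = np.random.default_rng(rng)
            y, z = np.asarray(y), np.asarray(z)
            n = len(y)
            p_y = {v: np.mean(y == v) for v in np.unique(y)}
            p_y_given_z = {(v, c): np.mean(y[z == c] == v)
                           for v in np.unique(y) for c in np.unique(z)}
            w = np.array([p_y[y[i]] / max(p_y_given_z[(y[i], z[i])], 1e-12) for i in range(n)])
            return rng.choice(n, size=n, replace=True, p=w / w.sum())

        # Usage: idx = causal_bootstrap_indices(labels, confounder); then train on X[idx], labels[idx].
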
    Singh H, Joshi S, Doshi-Velez F, Lakkaraju H. Learning under adversarial and interventional shifts. Forthcoming. Publisher's Version Abstract
    Machine learning models are often trained on data from one distribution and deployed on others, so it is important to design models that are robust to distribution shifts. Most of the existing work focuses on optimizing for either adversarial or interventional shifts. Adversarial methods lack expressivity in representing plausible shifts, as they consider shifts to joint distributions in the data. Interventional methods allow more expressivity but provide robustness to unbounded shifts, resulting in overly conservative models. In this work, we combine the complementary strengths of the two approaches and propose a new formulation, RISe, for designing robust models against a set of distribution shifts that are at the intersection of adversarial and interventional shifts. We employ the distributionally robust optimization framework to optimize the resulting objective in both supervised and reinforcement learning settings. Extensive experimentation with synthetic and real-world healthcare datasets demonstrates the efficacy of the proposed approach.
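
    For readers unfamiliar with the distributionally robust optimization (DRO) framework mentioned above, the generic sketch below (our illustration, not the RISe objective) evaluates a worst-case reweighting of per-group losses; a robust model would then be trained to minimize this worst-case quantity instead of the average loss.

        import numpy as np

        # Generic DRO-style inner maximization: shift `radius` probability mass
        # from a uniform mixture of groups onto the hardest group.
        def worst_case_group_loss(group_losses, radius=0.3):
            group_losses = np.asarray(group_losses, dtype=float)
            k = len(group_losses)
            weights = np.full(k, (1.0 - radius) / k)
            weights[np.argmax(group_losses)] += radius
            return float(weights @ group_losses)

        # Example: the worst-case loss upweights the hardest group relative to the plain mean.
        print(worst_case_group_loss([0.2, 0.9, 0.4], radius=0.3))
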
    Parbhoo S, Joshi S, Doshi-Velez F. Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making. Forthcoming. Publisher's Version Abstract
    Assessing the effects of a policy based on observational data from a different policy is a common problem across several high-stakes decision-making domains, and several off-policy evaluation (OPE) techniques have been proposed. However, these methods largely formulate OPE as a problem disassociated from the process used to generate the data (i.e., structural assumptions in the form of a causal graph). We argue that explicitly highlighting this association has important implications for our understanding of the fundamental limits of OPE. First, this implies that the current formulation of OPE corresponds to a narrow set of tasks, i.e., a specific causal estimand focused on prospective evaluation of policies over populations or sub-populations. Second, we demonstrate how this association motivates natural desiderata to consider a general set of causal estimands, particularly extending the role of OPE to counterfactual off-policy evaluation at the level of individuals of the population. A precise description of the causal estimand highlights which OPE estimands are identifiable from observational data under the stated generative assumptions. For those OPE estimands that are not identifiable, the causal perspective further highlights where more experimental data is necessary and points to situations where human expertise can aid identification and estimation. Furthermore, many formalisms of OPE entirely overlook the role of uncertainty in the estimation process. We demonstrate how specifically characterising the causal estimand highlights the different sources of uncertainty and when human expertise can naturally manage this uncertainty. We discuss each of these aspects as actionable desiderata for future OPE research at scale and in line with practical utility.
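
    As background for the prospective, population-level estimand that the abstract says current OPE work focuses on, the sketch below shows the standard trajectory-wise importance-sampling estimator (a textbook construction, not the paper's contribution), assuming logged behaviour-policy action probabilities are available.

        import numpy as np

        # Trajectory-wise importance sampling for off-policy evaluation.
        # trajectories: list of trajectories, each a list of (state, action, behaviour_prob, reward).
        # target_policy(state, action) -> probability of `action` under the evaluated policy.
        def is_ope(trajectories, target_policy):
            values = []
            for traj in trajectories:
                ratio, ret = 1.0, 0.0
                for state, action, behaviour_prob, reward in traj:
                    ratio *= target_policy(state, action) / behaviour_prob
                    ret += reward
                values.append(ratio * ret)
            return float(np.mean(values))
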
    Finocchiaro J. Using Property Elicitation to Understand the Impacts of Fairness Regularizers. Fairness, Accountability, and Transparency (FAccT). 2024. Publisher's Version Abstract

    Predictive algorithms are often trained by optimizing some loss function, to which regularization functions are added to impose a penalty for violating constraints. As expected, the addition of such regularization functions can change the minimizer of the objective. It is not well-understood which regularizers change the minimizer of the loss, and, when the minimizer does change, how it changes. We use property elicitation to take first steps towards understanding the joint relationship between the loss, regularization functions, and the optimal decision for a given problem instance. In particular, we give a necessary and sufficient condition on loss and regularizer pairs for when a property changes with the addition of the regularizer, and examine some commonly used regularizers satisfying this condition from the fair machine learning literature. We empirically demonstrate how algorithmic decision-making changes as a function of both data distribution changes and hardness of the constraints.
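
    A toy instance (our illustration, not an example from the paper) of how adding a regularizer moves the minimizer: squared loss alone elicits the mean of Y, while adding a quadratic penalty lambda * (p - c)^2 shifts the optimal report to (E[Y] + lambda * c) / (1 + lambda).

        import numpy as np

        # Numerical check that the regularized minimizer matches the closed form.
        rng = np.random.default_rng(0)
        y = rng.normal(loc=2.0, scale=1.0, size=10_000)
        lam, c = 0.5, 0.0

        grid = np.linspace(-1.0, 4.0, 2001)
        objective = [np.mean((p - y) ** 2) + lam * (p - c) ** 2 for p in grid]
        print("empirical minimizer:", grid[int(np.argmin(objective))])
        print("closed form:        ", (y.mean() + lam * c) / (1 + lam))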

    Lee-Stronach C. Just probabilities. Noûs. 2023:1-25. Publisher's Version Abstract
    I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many reject this thesis because it appears to permit finding defendants liable solely on the basis of statistical evidence. To the contrary, I argue – by combining Thomson's (1986) causal analysis of legal evidence with formal methods of causal inference – that legal standards of proof can be reduced to probabilities, but that deriving these probabilities involves more than just statistics.
    Ehrmann DE, Joshi S, Goodfellow SD, Mazwi ML, Eytan D. Making machine learning matter to clinicians: model actionability in medical decision-making. NPJ Digital Medicine. 2023;6(1):7. Publisher's Version Abstract
    Machine learning (ML) has the potential to transform patient care and outcomes. However, there are important differences between measuring the performance of ML models in silico and usefulness at the point of care. One lens to use to evaluate models during early development is actionability, which is currently undervalued. We propose a metric for actionability intended to be used before the evaluation of calibration and ultimately decision curve analysis and calculation of net benefit. Our metric should be viewed as part of an overarching effort to increase the number of pragmatic tools that identify a model’s possible clinical impacts.
    Pawelczyk M, Agarwal C, Joshi S, Upadhyay S, Lakkaraju H. Exploring Counterfactual Explanations through the lens of Adversarial Examples: A Theoretical and Empirical Analysis. International Conference on Artificial Intelligence and Statistics (AISTATS). 2022. Publisher's Version Abstract
    As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice. Despite the growing popularity of counterfactual explanations, a deeper understanding of these explanations is still lacking. In this work, we systematically analyze counterfactual explanations through the lens of adversarial examples. We do so by formalizing the similarities between popular counterfactual explanation and adversarial example generation methods and identifying conditions under which they are equivalent. We then derive upper bounds on the distances between the solutions output by counterfactual explanation and adversarial example generation methods, which we validate on several real-world data sets. By establishing these theoretical and empirical similarities between counterfactual explanations and adversarial examples, our work raises fundamental questions about the design and development of existing counterfactual explanation algorithms.
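
    To make the structural similarity concrete, the generic gradient sketch below (our hedged illustration, not a specific method analyzed in the paper) frames both problems as minimizing a prediction-change term plus a distance penalty; the two families differ mainly in the choice of target and distance or plausibility term.

        import numpy as np

        # Generic perturbation search: min_x' (f(x') - target)^2 + lam * ||x' - x||^2,
        # assuming a differentiable scalar model f with gradient grad_f (both hypothetical).
        def perturb(x, f, grad_f, target, lam=0.1, lr=0.05, steps=200):
            x_prime = np.array(x, dtype=float)
            for _ in range(steps):
                pred_grad = 2.0 * (f(x_prime) - target) * grad_f(x_prime)
                dist_grad = 2.0 * lam * (x_prime - x)
                x_prime -= lr * (pred_grad + dist_grad)
            return x_prime
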
    Upadhyay S, Joshi S, Lakkaraju H. Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems. 2021;34:16926-16937. Publisher's Version Abstract
    As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first ever solution to this critical problem. We also carry out theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts: 1) We quantify the probability of invalidation for recourses generated without accounting for model shifts. 2) We prove that the additional cost incurred due to the robust recourses output by our framework is bounded. Experimental evaluation on multiple synthetic and real-world datasets demonstrates the efficacy of the proposed framework.
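
    A minimal sketch of the robustness idea (ours, not the ROAR implementation): for a linear scorer, choose a recourse that keeps the decision positive even under the worst bounded perturbation of the model weights.

        import numpy as np

        # Recourse for a linear model w.x + b that remains valid under any weight
        # shift with infinity-norm at most `delta` (a simplified stand-in for model updates).
        def robust_linear_recourse(x, w, b, delta=0.1, lam=0.5, lr=0.05, steps=500):
            x_prime = np.array(x, dtype=float)
            for _ in range(steps):
                w_worst = w - delta * np.sign(x_prime)          # weights that most lower the score
                score = w_worst @ x_prime + b
                score_grad = -w_worst if score < 1.0 else 0.0   # hinge: push worst-case score above 1
                x_prime -= lr * (score_grad + 2.0 * lam * (x_prime - x))
            return x_prime
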
    Parbhoo S, Joshi S, Doshi-Velez F. Learning-to-defer for sequential medical decision-making under uncertainty. Proceedings of the International Conference on Machine Learning: Workshop on Neglected Assumptions in Causal Inference (ICML). 2021. Publisher's Version Abstract
    Learning-to-defer is a framework to automatically defer decision-making to a human expert when ML-based decisions are deemed unreliable. Existing learning-to-defer frameworks are not designed for sequential settings. That is, they defer at every instance independently, based on immediate predictions, while ignoring the potential long-term impact of these interventions. As a result, existing frameworks are myopic. Further, they do not defer adaptively, which is crucial when human interventions are costly. In this work, we propose Sequential Learning-to-Defer (SLTD), a framework for learning-to-defer to a domain expert in sequential decision-making settings. Contrary to existing literature, we pose the problem of learning-to-defer as model-based reinforcement learning (RL) to i) account for long-term consequences of ML-based actions using RL and ii) adaptively defer based on the dynamics (model-based). Our proposed framework determines whether to defer (at each time step) by quantifying whether a deferral now will improve the value compared to delaying deferral to the next time step. To quantify the improvement, we account for potential future deferrals. As a result, we learn a pre-emptive deferral policy (i.e. a policy that defers early if using the ML-based policy could worsen long-term outcomes). Our deferral policy is adaptive to the non-stationarity in the dynamics. We demonstrate that adaptive deferral via SLTD provides an improved trade-off between long-term outcomes and deferral frequency on synthetic, semi-synthetic, and real-world data with non-stationary dynamics. Finally, we interpret the deferral decision by decomposing the propagated (long-term) uncertainty around the outcome, to justify the deferral decision.
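
    In pseudocode terms, the deferral rule described above (our paraphrase, not the SLTD implementation) reduces to a value comparison at each time step:

        # Defer now if handing control to the expert is worth more than acting with the
        # ML policy while keeping the option to defer at the next step. The two value
        # estimates are assumed to come from a learned model of the dynamics.
        def should_defer(state, v_expert, v_ml_then_option, deferral_cost=0.0):
            return v_expert(state) - deferral_cost > v_ml_then_option(state)
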
    Zhang H, Dullerud N, Seyyed-Kalantari L, Morris Q, Joshi S, Ghassemi M. An Empirical Framework for Domain Generalization in Clinical Settings. Conference on Health, Inference, and Learning (CHIL). 2021. Publisher's Version Abstract
         Clinical machine learning models experience significantly degraded performance in datasets not seen during training, e.g., new hospitals or populations. Recent developments in domain generalization offer a promising solution to this problem by creating models that learn invariances across environments. In this work, we benchmark the performance of eight domain generalization methods on multi-site clinical time series and medical imaging data. We introduce a framework to induce synthetic but realistic domain shifts and sampling bias to stress-test these methods over existing non-healthcare benchmarks. We find that current domain generalization methods do not consistently achieve significant gains in out-of-distribution performance over empirical risk minimization on real-world medical imaging data, in line with prior work on general imaging datasets. However, a subset of realistic induced-shift scenarios in clinical time series data do exhibit limited performance gains. We characterize these scenarios in detail, and recommend best practices for domain generalization in the clinical setting.
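
    One simple way to induce the kind of synthetic sampling bias described above (our illustration, not the paper's exact protocol) is to subsample each site so that the label rate differs across training environments, creating a site-label shortcut at train time:

        import numpy as np

        # Subsample positives per site to hit a target positive rate (assumed < 1),
        # keeping all negatives, e.g. target_pos_rate = {"hospital_A": 0.8, "hospital_B": 0.2}.
        def biased_site_indices(labels, sites, target_pos_rate, rng=None):
            rng = np.random.default_rng(rng)
            labels, sites = np.asarray(labels), np.asarray(sites)
            keep = []
            for site, rate in target_pos_rate.items():
                pos = np.flatnonzero((sites == site) & (labels == 1))
                neg = np.flatnonzero((sites == site) & (labels == 0))
                n_pos = min(len(pos), int(rate / (1.0 - rate) * len(neg)))
                keep.extend(rng.choice(pos, size=n_pos, replace=False))
                keep.extend(neg)
            return np.array(sorted(keep))
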
    Jia F, Mate A, Li Z, Jabbari S, Chakraborty M, Tambe M, Wellman M, Vorobeychik Y. A Game-Theoretic Approach for Hierarchical Policy-Making. 2nd International (Virtual) Workshop on Autonomous Agents for Social Good (AASG 2021). 2021. Publisher's Version Abstract
    We present the design and analysis of a multi-level game-theoretic model of hierarchical policy-making, inspired by policy responses to the COVID-19 pandemic. Our model captures the potentially mismatched priorities among a hierarchy of policy-makers (e.g., federal, state, and local governments) with respect to two main cost components that have opposite dependence on the policy strength, such as post-intervention infection rates and the cost of policy implementation. Our model further includes a crucial third factor in decisions: a cost of non-compliance with the policy-maker immediately above in the hierarchy, such as non-compliance of state with federal policies. Our first contribution is a closed-form approximation of a recently published agent-based model to compute the number of infections for any implemented policy. Second, we present a novel equilibrium selection criterion that addresses common issues with equilibrium multiplicity in our setting. Third, we propose a hierarchical algorithm based on best response dynamics for computing an approximate equilibrium of the hierarchical policy-making game consistent with our solution concept. Finally, we present an empirical investigation of equilibrium policy strategies in this game as a function of game parameters, such as the degree of centralization and disagreements about policy priorities among the agents, the extent of free riding as well as fairness in the distribution of costs.
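
    The best-response dynamics mentioned above can be sketched generically (our illustration with scalar policy strengths, not the paper's algorithm): each policy-maker, from the top of the hierarchy down, repeatedly best-responds to the current policies of the others until no one changes by more than a tolerance.

        # Hierarchical best-response dynamics; best_response(agent, policies) -> new scalar policy.
        def best_response_dynamics(agents, best_response, init_policies, max_rounds=100, tol=1e-6):
            policies = dict(init_policies)
            for _ in range(max_rounds):
                max_change = 0.0
                for agent in agents:                      # ordered top of hierarchy first
                    new = best_response(agent, policies)
                    max_change = max(max_change, abs(new - policies[agent]))
                    policies[agent] = new
                if max_change < tol:                      # approximate equilibrium reached
                    break
            return policies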
