Publications by Type: Journal Article

Forthcoming
Gowda S, Joshi S, Zhang H, Ghassemi M. Pulling Up by the Causal Bootstraps: Causal Data Augmentation for Pre-training Debiasing. Forthcoming.
Machine learning models achieve state-of-the-art performance on many supervised learning tasks. However, prior evidence suggests that these models may learn to rely on shortcut biases or spurious correlations (intuitively, correlations that hold in training but not at test time) for good predictive performance. Such models cannot be trusted to provide accurate predictions in deployment environments. While viewing the problem through a causal lens is known to be useful, seamlessly integrating causal techniques into machine learning pipelines remains cumbersome and expensive. In this work, we study and extend a causal pre-training debiasing technique called causal bootstrapping (CB) under five practical confounded-data generation-acquisition scenarios (with known and unknown confounding). Under these settings, we systematically investigate the effect of confounding bias on deep learning model performance, demonstrating these models' propensity to rely on shortcut biases when the biases are not properly accounted for. We demonstrate that such a causal pre-training technique can significantly outperform existing baseline practices for mitigating confounding bias on real-world domain generalization benchmarking tasks. This systematic investigation underlines the importance of accounting for the underlying data-generating mechanisms, and of fortifying data-preprocessing pipelines with a causal framework, to develop methods robust to confounding biases.
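As a rough illustration of the pre-training idea, here is a minimal weighted-resampling sketch, assuming a discrete label and a fully observed discrete confounder (the "known confounding" case). The weighting follows standard backdoor adjustment; the function and its names are ours, not the paper's implementation.

```python
import numpy as np

def causal_bootstrap(X, y, z, seed=0):
    """Resample (X, y) with backdoor-adjustment weights so the sample
    approximates draws from the interventional p(x | do(y)) rather
    than the confounded p(x | y). Assumes discrete y and a discrete,
    fully observed confounder z."""
    rng = np.random.default_rng(seed)
    n = len(y)
    p_y = {c: np.mean(y == c) for c in np.unique(y)}
    w = np.empty(n)
    for i in range(n):
        same_z = (z == z[i])
        p_y_given_z = np.mean(y[same_z] == y[i])
        w[i] = p_y[y[i]] / p_y_given_z  # de-confounding weight
    w /= w.sum()
    idx = rng.choice(n, size=n, replace=True, p=w)
    return X[idx], y[idx]
```

Any downstream model is then trained on the resampled data exactly as usual, which is what makes this a pre-training (data-level) debiasing step.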
Singh H, Joshi S, Doshi-Velez F, Lakkaraju H. Learning under adversarial and interventional shifts. Forthcoming.
Machine learning models are often trained on data from one distribution and deployed on others, so it is important to design models that are robust to distribution shifts. Most existing work focuses on optimizing for either adversarial shifts or interventional shifts. Adversarial methods lack expressivity in representing plausible shifts, as they consider shifts to the joint distribution of the data. Interventional methods allow more expressivity but provide robustness to unbounded shifts, resulting in overly conservative models. In this work, we combine the complementary strengths of the two approaches and propose a new formulation, RISe, for designing models that are robust to a set of distribution shifts at the intersection of adversarial and interventional shifts. We employ the distributionally robust optimization framework to optimize the resulting objective in both supervised and reinforcement learning settings. Extensive experimentation with synthetic and real-world datasets from healthcare demonstrates the efficacy of the proposed approach.
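For flavor, a toy instance of the distributionally robust optimization machinery the abstract invokes: a CVaR-style worst-case loss over bounded reweightings. This is not the RISe objective, whose uncertainty set is the intersection of adversarial and interventional shifts; everything below is our own simplification.

```python
import numpy as np

def cvar_dro_loss(per_example_losses, alpha=0.3):
    """Mean loss over the worst alpha-fraction of examples, i.e. the
    worst case over all data reweightings bounded by 1/alpha. This is
    a classic distributionally robust objective; minimizing it guards
    against a crude, unstructured family of distribution shifts."""
    losses = np.sort(np.asarray(per_example_losses))
    k = max(1, int(np.ceil(alpha * losses.size)))
    return float(losses[-k:].mean())
```

Swapping this in for the average loss defends against the crudest bounded-shift set; the paper's contribution is choosing a more plausible set.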
Parbhoo S, Joshi S, Doshi-Velez F. Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making. Forthcoming.
Assessing the effects of a policy based on observational data from a different policy is a common problem across several high-stakes decision-making domains, and several off-policy evaluation (OPE) techniques have been proposed. However, these methods largely formulate OPE as a problem disassociated from the process used to generate the data (i.e., structural assumptions in the form of a causal graph). We argue that explicitly highlighting this association has important implications for our understanding of the fundamental limits of OPE. First, it implies that the current formulation of OPE corresponds to a narrow set of tasks, i.e., a specific causal estimand focused on prospective evaluation of policies over populations or sub-populations. Second, we demonstrate how this association motivates natural desiderata for considering a general set of causal estimands, particularly extending the role of OPE to counterfactual off-policy evaluation at the level of individuals of the population. A precise description of the causal estimand highlights which OPE estimands are identifiable from observational data under the stated generative assumptions. For those OPE estimands that are not identifiable, the causal perspective further highlights where more experimental data is necessary, and where human expertise can aid identification and estimation. Furthermore, many formalisms of OPE overlook the role of uncertainty entirely in the estimation process. We demonstrate how specifically characterising the causal estimand highlights the different sources of uncertainty and when human expertise can naturally manage this uncertainty. We discuss each of these aspects as actionable desiderata for future OPE research at scale and in line with practical utility.
2201.08262.pdf
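To make the "specific causal estimand" concrete: in its simplest one-step form, the standard OPE formulation the paper generalizes from is inverse-propensity weighting. A minimal sketch, with the identifying assumptions stated in comments (the names here are ours):

```python
import numpy as np

def ips_value(rewards, behavior_probs, target_probs):
    """Importance-sampling estimate of a target policy's value from
    data logged under a behavior policy. Identification rests on the
    causal assumptions the paper makes explicit: no unobserved
    confounding, and overlap (behavior_probs > 0 wherever
    target_probs > 0)."""
    w = np.asarray(target_probs) / np.asarray(behavior_probs)
    return float(np.mean(w * np.asarray(rewards)))
```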
2023
Lee-Stronach C. Just probabilities. Noûs. 2023;n/a (n/a) :1-25.
I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many reject this thesis because it appears to permit finding defendants liable solely on the basis of statistical evidence. To the contrary, I argue – by combining Thomson's (1986) causal analysis of legal evidence with formal methods of causal inference – that legal standards of proof can be reduced to probabilities, but that deriving these probabilities involves more than just statistics.
Ehrmann DE, Joshi S, Goodfellow SD, Mazwi ML, Eytan D. Making machine learning matter to clinicians: model actionability in medical decision-making. NPJ Digital Medicine. 2023;6 (1) :7.
Machine learning (ML) has the potential to transform patient care and outcomes. However, there are important differences between measuring the performance of ML models in silico and their usefulness at the point of care. One lens for evaluating models during early development is actionability, which is currently undervalued. We propose a metric for actionability intended to be used before the evaluation of calibration and, ultimately, decision curve analysis and calculation of net benefit. Our metric should be viewed as part of an overarching effort to increase the number of pragmatic tools that identify a model's possible clinical impacts.
s41746-023-00753-7.pdf
2022
Biswas A, Patro GK, Ganguly N, Gummadi KP, Chakraborty A. Towards Fair Recommendation in Two-Sided Platforms. ACM Transactions on the Web (TWEB). 2022;16 (2) :1-34.
2021
Upadhyay S, Joshi S, Lakkaraju H. Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems. 2021;34 :16926-16937.
As predictive models are increasingly deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training to find recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first solution to this critical problem. We also carry out a theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts: 1) we quantify the probability of invalidation for recourses generated without accounting for model shifts; 2) we prove that the additional cost incurred by the robust recourses output by our framework is bounded. Experimental evaluation on multiple synthetic and real-world datasets demonstrates the efficacy of the proposed framework.
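A toy linear-model sketch of the robustness idea (our construction, not the ROAR implementation): for a linear scorer, the worst case of w·x' + b over weight shifts with ||delta||_2 <= eps has the closed form w·x' + b - eps·||x'||, so a robust recourse pushes that worst-case margin positive while keeping the change cheap.

```python
import numpy as np

def robust_linear_recourse(x, w, b, eps=0.1, lam=0.5, lr=0.05, steps=500):
    """Find x' whose positive prediction survives any L2-bounded shift
    of the weights. We run (sub)gradient descent on a hinge over the
    worst-case margin w @ x' + b - eps * ||x'||, plus an L1 cost on
    the change from x. Toy sketch under our own assumptions."""
    xp = np.asarray(x, dtype=float).copy()
    for _ in range(steps):
        margin = w @ xp + b - eps * np.linalg.norm(xp)
        grad = lam * np.sign(xp - x)        # cost of deviating from x
        if margin < 1.0:                    # hinge is active
            grad -= w - eps * xp / (np.linalg.norm(xp) + 1e-12)
        xp -= lr * grad
    return xp
```

If x already clears the robust margin, the update vanishes and the recourse is trivial; otherwise the search trades worst-case validity against cost, the tension the paper's bounds formalize.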
2020
Xu L, Bondi E, Fang F, Perrault A, Wang K, Tambe M. Dual-Mandate Patrols: Multi-Armed Bandits for Green Security. arXiv:2009.06560 [cs, stat]. 2020.
Conservation efforts in green security domains to protect wildlife and forests are constrained by the limited availability of defenders (i.e., patrollers), who must patrol vast areas to protect against attackers (e.g., poachers or illegal loggers). Defenders must choose how much time to spend in each region of the protected area, balancing exploration of infrequently visited regions and exploitation of known hotspots. We formulate the problem as a stochastic multi-armed bandit, where each action represents a patrol strategy, enabling us to guarantee the rate of convergence of the patrolling policy. However, a naive bandit approach would compromise short-term performance for long-term optimality, resulting in animals poached and forests destroyed. To speed up learning, we leverage smoothness in the reward function and decomposability of actions. We show a synergy between Lipschitz continuity and decomposition, as each aids the convergence of the other. In doing so, we bridge the gap between combinatorial and Lipschitz bandits, presenting a no-regret approach that tightens existing guarantees while optimizing for short-term performance. We demonstrate that our algorithm, LIZARD, improves performance on real-world poaching data from Cambodia.
aaai21_dual_mandate_patrols.pdf
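For readers new to the setup, the generic no-regret scaffold that the paper strengthens is UCB-style index selection. The sketch below is plain UCB1, our simplification only; LIZARD additionally exploits Lipschitz smoothness across effort levels and decomposition over areas.

```python
import numpy as np

def ucb1(pull, n_arms, horizon):
    """Plain UCB1 on a stochastic multi-armed bandit: try each arm
    once, then pick the arm maximizing its empirical mean plus a
    sqrt(2 ln t / n_a) confidence bonus. Here pull(a) returns a
    stochastic reward in [0, 1] for patrol strategy a."""
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(horizon):
        if t < n_arms:
            a = t                                  # initial sweep
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            a = int(np.argmax(means + bonus))
        r = pull(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]     # running mean
    return means, counts
```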
Prins A, Mate A, Killian JA, Abebe R, Tambe M. Incorporating Healthcare Motivated Constraints in Restless Bandit Based Resource Allocation. NeurIPS 2020 Workshops: Challenges of Real World Reinforcement Learning, Machine Learning in Public Health (Best Lightning Paper), Machine Learning for Health (Best on Theme), Machine Learning for the Developing World. 2020.
human_in_the_loop_rmab_short.pdf