Publications

Forthcoming
Gowda S, Joshi S, Zhang H, Ghassemi M. Pulling Up by the Causal Bootstraps: Causal Data Augmentation for Pre-training Debiasing. Forthcoming.
Machine learning models achieve state-of-the-art performance on many supervised learning tasks. However, prior evidence suggests that these models may learn to rely on shortcut biases or spurious correlations (intuitively, correlations that hold in the training data but not at test time) for good predictive performance. Such models cannot be trusted in deployment environments to provide accurate predictions. While viewing the problem from a causal lens is known to be useful, the seamless integration of causation techniques into machine learning pipelines remains cumbersome and expensive. In this work, we study and extend a causal pre-training debiasing technique called causal bootstrapping (CB) under five practical confounded-data generation-acquisition scenarios (with known and unknown confounding). Under these settings, we systematically investigate the effect of confounding bias on deep learning model performance, demonstrating these models' propensity to rely on shortcut biases when these biases are not properly accounted for. We demonstrate that such a causal pre-training technique can significantly outperform existing base practices to mitigate confounding bias on real-world domain generalization benchmarking tasks. This systematic investigation underlines the importance of accounting for the underlying data-generating mechanisms and fortifying data-preprocessing pipelines with a causal framework to develop methods robust to confounding biases.
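As a rough illustration of the idea (not the paper's implementation), causal bootstrapping can be sketched as back-door-weighted resampling, assuming a single observed discrete confounder z; all names below are hypothetical:

```python
# Illustrative sketch only: back-door-weighted resampling in the spirit of
# causal bootstrapping, assuming one observed discrete confounder `z`.
import numpy as np

def causal_bootstrap_indices(y, z, rng=None):
    """Resample indices so the resampled data approximates draws from the
    interventional distribution p(x | do(y)), breaking the spurious y-z
    dependence induced by confounding."""
    rng = np.random.default_rng(rng)
    y, z = np.asarray(y), np.asarray(z)
    n = len(y)
    # Empirical marginal and label-conditional distributions of the confounder.
    p_z = {v: np.mean(z == v) for v in np.unique(z)}
    p_z_given_y = {
        (v, c): np.mean(z[y == c] == v)
        for v in np.unique(z)
        for c in np.unique(y)
    }
    # Back-door weight for each sample: p(z) / p(z | y).
    w = np.array([
        p_z[zi] / max(p_z_given_y[(zi, yi)], 1e-12)
        for zi, yi in zip(z, y)
    ])
    w = w / w.sum()
    # Weighted bootstrap: the resampled set is approximately de-confounded.
    return rng.choice(n, size=n, replace=True, p=w)
```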
Singh H, Joshi S, Doshi-Velez F, Lakkaraju H. Learning under adversarial and interventional shifts. Forthcoming.
Machine learning models are often trained on data from one distribution and deployed on others, so it is important to design models that are robust to distribution shifts. Most of the existing work focuses on optimizing for either adversarial shifts or interventional shifts. Adversarial methods lack expressivity in representing plausible shifts as they consider shifts to joint distributions in the data. Interventional methods allow more expressivity but provide robustness to unbounded shifts, resulting in overly conservative models. In this work, we combine the complementary strengths of the two approaches and propose a new formulation, RISe, for designing robust models against a set of distribution shifts that are at the intersection of adversarial and interventional shifts. We employ the distributionally robust optimization framework to optimize the resulting objective in both supervised and reinforcement learning settings. Extensive experimentation with synthetic and real-world healthcare datasets demonstrates the efficacy of the proposed approach.
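Schematically (the notation below is illustrative, not the paper's exact formulation), a distributionally robust objective of this kind minimizes worst-case risk over a constrained set of shifted distributions:

```latex
% Schematic DRO objective; \mathcal{S}(P) stands for an illustrative set of
% shifts at the intersection of adversarially bounded and interventionally
% plausible distributions around the training distribution P.
\[
\min_{\theta} \; \sup_{Q \in \mathcal{S}(P)} \;
\mathbb{E}_{(x, y) \sim Q}\big[\ell(f_\theta(x), y)\big],
\qquad
\mathcal{S}(P) = \big\{ Q : D(Q, P) \le \rho, \; Q \text{ consistent with the allowed interventions} \big\}
\]
```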
Parbhoo S, Joshi S, Doshi-Velez F. Generalizing Off-Policy Evaluation From a Causal Perspective For Sequential Decision-Making. Forthcoming.
Assessing the effects of a policy based on observational data from a different policy is a common problem across several high-stake decision-making domains, and several off-policy evaluation (OPE) techniques have been proposed. However, these methods largely formulate OPE as a problem disassociated from the process used to generate the data (i.e. structural assumptions in the form of a causal graph). We argue that explicitly highlighting this association has important implications for our understanding of the fundamental limits of OPE. First, this implies that the current formulation of OPE corresponds to a narrow set of tasks, i.e. a specific causal estimand focused on prospective evaluation of policies over populations or sub-populations. Second, we demonstrate how this association motivates natural desiderata to consider a general set of causal estimands, particularly extending the role of OPE to counterfactual off-policy evaluation at the level of individuals in the population. A precise description of the causal estimand highlights which OPE estimands are identifiable from observational data under the stated generative assumptions. For those OPE estimands that are not identifiable, the causal perspective further highlights where more experimental data are necessary and where human expertise can aid identification and estimation. Furthermore, many formalisms of OPE overlook the role of uncertainty entirely in the estimation process. We demonstrate how specifically characterising the causal estimand highlights the different sources of uncertainty and when human expertise can naturally manage this uncertainty. We discuss each of these aspects as actionable desiderata for future OPE research at scale and in line with practical utility.
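For context (shown for reference only, not as the paper's contribution), one common OPE estimator of the kind discussed above is the per-trajectory importance-sampling estimator of the value of an evaluation policy under data from a behaviour policy:

```latex
% Standard per-trajectory importance-sampling OPE estimator of the value of an
% evaluation policy \pi_e from n trajectories collected under a behaviour
% policy \pi_b; notation is illustrative.
\[
\hat{V}^{\pi_e} = \frac{1}{n} \sum_{i=1}^{n}
\left( \prod_{t=0}^{T-1} \frac{\pi_e\big(a^{(i)}_t \mid s^{(i)}_t\big)}{\pi_b\big(a^{(i)}_t \mid s^{(i)}_t\big)} \right)
\sum_{t=0}^{T-1} \gamma^{t} r^{(i)}_t
\]
```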
Killian TW, Ghassemi M, Joshi S. Counterfactually Guided Off-policy Transfer in Clinical Settings, in Conference for Health, Inference, and Learning (CHIL 2022); Forthcoming.
Killian JA, Xu L, Biswas A, Tambe M. Restless and Uncertain: Robust Policies for Restless Bandits via Deep Multi-Agent Reinforcement Learning, in Uncertainty in Artificial Intelligence (UAI 2022); Forthcoming.
2023
Lee-Stronach C. Just probabilities. Noûs. 2023:1-25.
I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many reject this thesis because it appears to permit finding defendants liable solely on the basis of statistical evidence. To the contrary, I argue – by combining Thomson's (1986) causal analysis of legal evidence with formal methods of causal inference – that legal standards of proof can be reduced to probabilities, but that deriving these probabilities involves more than just statistics.
Ehrmann DE, Joshi S, Goodfellow SD, Mazwi ML, Eytan D. Making machine learning matter to clinicians: model actionability in medical decision-making. NPJ Digital Medicine. 2023;6(1):7.
Machine learning (ML) has the potential to transform patient care and outcomes. However, there are important differences between measuring the performance of ML models in silico and their usefulness at the point of care. One useful lens for evaluating models during early development is actionability, which is currently undervalued. We propose a metric for actionability intended to be used before the evaluation of calibration and ultimately decision curve analysis and calculation of net benefit. Our metric should be viewed as part of an overarching effort to increase the number of pragmatic tools that identify a model’s possible clinical impacts.
Biswas A, Killian JA, Diaz PR, Ghosh S, Tambe M. Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks, in 22nd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023); 2023.
Killian JA, Biswas A, Xu L, Verma S, Nair V, Taneja A, Madhiwala N, Hedge A, Diaz PR, Johnson-Yu S, et al. Robust Planning over Restless Groups: Engagement Interventions for a Large-Scale Maternal Telehealth Program, in Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023); 2023.
2022
Pawelczyk M, Agarwal C, Joshi S, Upadhyay S, Lakkaraju H. Exploring Counterfactual Explanations through the lens of Adversarial Examples: A Theoretical and Empirical Analysis. International Conference on Artificial Intelligence and Statistics (AISTATS). 2022.
As machine learning (ML) models become more widely deployed in high-stakes applications, counterfactual explanations have emerged as key tools for providing actionable model explanations in practice. Despite the growing popularity of counterfactual explanations, a deeper understanding of these explanations is still lacking. In this work, we systematically analyze counterfactual explanations through the lens of adversarial examples. We do so by formalizing the similarities between popular counterfactual explanation and adversarial example generation methods and identifying conditions under which they are equivalent. We then derive upper bounds on the distances between the solutions output by counterfactual explanation and adversarial example generation methods, which we validate on several real-world data sets. By establishing these theoretical and empirical similarities between counterfactual explanations and adversarial examples, our work raises fundamental questions about the design and development of existing counterfactual explanation algorithms.
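To make the analogy concrete (the precise formalization and the derived bounds are in the paper; the objectives below are the standard, schematic forms), a counterfactual explanation and an adversarial example can both be viewed as small input perturbations that change the model's output:

```latex
% Counterfactual explanation (Wachter-style): reach a target prediction at minimal cost.
\[
x_{\mathrm{cf}} = \arg\min_{x'} \; \lambda\, \ell\big(f(x'), y_{\mathrm{target}}\big) + d(x, x')
\]
% Adversarial example (norm-bounded): maximize the loss within a small perturbation budget.
\[
x_{\mathrm{adv}} = \arg\max_{\|x' - x\| \le \epsilon} \; \ell\big(f(x'), y_{\mathrm{true}}\big)
\]
```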
Xu L, Biswas A, Fang F, Tambe M. Ranked Prioritization of Groups in Combinatorial Bandit Allocation, in 31st International Joint Conference on Artificial Intelligence (IJCAI 2022); 2022.
Biswas A, Patro GK, Ganguly N, Gummadi KP, Chakraborty A. Towards Fair Recommendation in Two-Sided Platforms. ACM Transactions on the Web (TWEB). 2022;16(2):1-34.
Narang S, Biswas A, Yadati N. On Achieving Leximin Fairness and Stability in Many-to-One Matchings, in Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022), as an extended abstract; 2022.
Mate A, Biswas A, Siebenbrunner C, Ghosh S, Tambe M. Efficient Algorithms for Finite Horizon and Streaming Restless Multi-Armed Bandit Problems, in Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2022); 2022.
2021
Upadhyay S, Joshi S, Lakkaraju H. Towards robust and reliable algorithmic recourse. Advances in Neural Information Processing Systems. 2021;34:16926-16937.
As predictive models are increasingly being deployed in high-stakes decision making (e.g., loan approvals), there has been growing interest in post-hoc techniques which provide recourse to affected individuals. These techniques generate recourses under the assumption that the underlying predictive model does not change. However, in practice, models are often regularly updated for a variety of reasons (e.g., dataset shifts), thereby rendering previously prescribed recourses ineffective. To address this problem, we propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts. To the best of our knowledge, this work proposes the first ever solution to this critical problem. We also carry out theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts: 1) We quantify the probability of invalidation for recourses generated without accounting for model shifts. 2) We prove that the additional cost incurred due to the robust recourses output by our framework is bounded. Experimental evaluation on multiple synthetic and real-world datasets demonstrates the efficacy of the proposed framework.
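Schematically (the notation is illustrative; ROAR's exact objective and plausible-shift set are defined in the paper), robust recourse replaces the usual recourse objective with a min-max over model shifts:

```latex
% Schematic robust-recourse objective: the recourse x' should remain valid for
% every plausible shifted model f_{\theta+\delta} with \delta in an uncertainty
% set \Delta; \lambda trades off validity against the cost c(x, x').
\[
x'_{\mathrm{robust}} = \arg\min_{x'} \; \max_{\delta \in \Delta} \;
\ell\big(f_{\theta+\delta}(x'), y_{\mathrm{target}}\big) + \lambda\, c(x, x')
\]
```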
Parbhoo S, Joshi S, Doshi-Velez F. Learning-to-defer for sequential medical decision-making under uncertainty. Proceedings of the International Conference on Machine Learning: Workshop on Neglected Assumptions in Causal Inference (ICML). 2021.
Learning-to-defer is a framework to automatically defer decision-making to a human expert when ML-based decisions are deemed unreliable. Existing learning-to-defer frameworks are not designed for sequential settings. That is, they defer at every instance independently, based on immediate predictions, while ignoring the potential long-term impact of these interventions. As a result, existing frameworks are myopic. Further, they do not defer adaptively, which is crucial when human interventions are costly. In this work, we propose Sequential Learning-to-Defer (SLTD), a framework for learning-to-defer to a domain expert in sequential decision-making settings. Contrary to existing literature, we pose the problem of learning-to-defer as model-based reinforcement learning (RL) to i) account for long-term consequences of ML-based actions using RL and ii) adaptively defer based on the dynamics (model-based). Our proposed framework determines whether to defer (at each time step) by quantifying whether a deferral now will improve the value compared to delaying deferral to the next time step. To quantify the improvement, we account for potential future deferrals. As a result, we learn a pre-emptive deferral policy (i.e. a policy that defers early if using the ML-based policy could worsen long-term outcomes). Our deferral policy is adaptive to the non-stationarity in the dynamics. We demonstrate that adaptive deferral via SLTD provides an improved trade-off between long-term outcomes and deferral frequency on synthetic, semi-synthetic, and real-world data with non-stationary dynamics. Finally, we interpret the deferral decision by decomposing the propagated (long-term) uncertainty around the outcome, to justify the deferral decision.
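A minimal sketch of the pre-emptive deferral rule described above, assuming hypothetical value estimates from a learned dynamics model (this is an illustration, not the authors' code):

```python
# Illustrative sketch only: a pre-emptive deferral rule in the spirit of SLTD.
# `value_if_defer_now` and `value_if_defer_later` are hypothetical estimates
# from a learned model of the dynamics; `defer_cost` is a per-deferral penalty.
def should_defer(state, value_if_defer_now, value_if_defer_later, defer_cost):
    """Defer at this step only if doing so is expected to beat waiting one
    more step before (possibly) deferring."""
    gain_now = value_if_defer_now(state) - defer_cost
    gain_later = value_if_defer_later(state)  # accounts for possible future deferrals
    return gain_now > gain_later
```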
Zhang H, Dullerud N, Seyyed-Kalantari L, Morris Q, Joshi S, Ghassemi M. An Empirical Framework for Domain Generalization in Clinical Settings. Conference for Health, Inference, and Learning (CHIL 2021). 2021.
     Clinical machine learning models experience significantly degraded performance in datasets not seen during training, e.g., new hospitals or populations. Recent developments in domain generalization offer a promising solution to this problem by creating models that learn invariances across environments. In this work, we benchmark the performance of eight domain generalization methods on multi-site clinical time series and medical imaging data. We introduce a framework to induce synthetic but realistic domain shifts and sampling bias to stress-test these methods over existing non-healthcare benchmarks. We find that current domain generalization methods do not consistently achieve significant gains in out-of-distribution performance over empirical risk minimization on real-world medical imaging data, in line with prior work on general imaging datasets. However, a subset of realistic induced-shift scenarios in clinical time series data do exhibit limited performance gains. We characterize these scenarios in detail, and recommend best practices for domain generalization in the clinical setting.
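As one hypothetical example of the kind of induced sampling bias mentioned above (the paper's actual shift-induction protocol may differ; names below are illustrative), a spurious site-label correlation can be injected by label-conditional subsampling per site:

```python
# Hypothetical sketch of inducing sampling bias across sites for a
# domain-generalization stress test. `site` and `label` are per-example arrays;
# `keep_prob` maps each (site, label) pair to a retention probability.
import numpy as np

def biased_site_subsample(site, label, keep_prob, rng=None):
    """Keep each example with a probability that depends on its (site, label)
    pair, creating a synthetic spurious site-label correlation."""
    rng = np.random.default_rng(rng)
    p = np.array([keep_prob[(s, l)] for s, l in zip(site, label)])
    return np.where(rng.random(len(p)) < p)[0]
```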
Jia F, Mate A, Li Z, Jabbari S, Chakraborty M, Tambe M, Wellman M, Vorobeychik Y. A Game-Theoretic Approach for Hierarchical Policy-Making. 2nd International (Virtual) Workshop on Autonomous Agents for Social Good (AASG 2021). 2021.
We present the design and analysis of a multi-level game-theoretic model of hierarchical policy-making, inspired by policy responses to the COVID-19 pandemic. Our model captures the potentially mismatched priorities among a hierarchy of policy-makers (e.g., federal, state, and local governments) with respect to two main cost components that have opposite dependence on the policy strength, such as post-intervention infection rates and the cost of policy implementation. Our model further includes a crucial third factor in decisions: a cost of non-compliance with the policy-maker immediately above in the hierarchy, such as non-compliance of state with federal policies. Our first contribution is a closed-form approximation of a recently published agent-based model to compute the number of infections for any implemented policy. Second, we present a novel equilibrium selection criterion that addresses common issues with equilibrium multiplicity in our setting. Third, we propose a hierarchical algorithm based on best response dynamics for computing an approximate equilibrium of the hierarchical policy-making game consistent with our solution concept. Finally, we present an empirical investigation of equilibrium policy strategies in this game as a function of game parameters, such as the degree of centralization and disagreements about policy priorities among the agents, the extent of free riding as well as fairness in the distribution of costs.
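A minimal sketch of generic best-response dynamics over a small set of agents, assuming each agent minimizes its own cost over a grid of candidate policy strengths (this illustrates the general procedure only, not the paper's hierarchical algorithm or its equilibrium-selection rule):

```python
# Generic best-response-dynamics sketch for a set of policy-making agents.
# `costs` is a list of hypothetical functions cost_i(policy_vector) -> float,
# `init` is the initial policy vector, `grid` the candidate policy strengths.
import numpy as np

def best_response_dynamics(costs, init, grid, max_iters=100, tol=1e-6):
    policies = np.array(init, dtype=float)
    for _ in range(max_iters):
        prev = policies.copy()
        for i, cost_i in enumerate(costs):
            # Agent i best-responds, holding the other agents' policies fixed.
            candidates = [
                np.concatenate([policies[:i], [g], policies[i + 1:]])
                for g in grid
            ]
            policies[i] = grid[int(np.argmin([cost_i(c) for c in candidates]))]
        if np.max(np.abs(policies - prev)) < tol:  # approximate fixed point
            break
    return policies
```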
Puri A, Bondi E. Space, Time, and Counts: Improved Human vs Animal Detection in Thermal Infrared Drone Videos for Prevention of Wildlife Poaching. KDD 2021 Fragile Earth Workshop. 2021.
Killian JA, Biswas A, Shah S, Tambe M. Q-Learning Lagrange Policies for Multi-Action Restless Bandits, in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021); 2021.