Date:
Harvineet Singh, New York University
Machine learning models are often trained on one dataset and deployed on others, so it is important to design models that are robust to distribution shifts. Robustness is typically considered against either adversarial or interventional shifts in data. However, neither is ideal on its own: adversarial shifts are often unrealistic, while interventional shifts can result in overly conservative models. In this work, we address these shortcomings and propose a new formulation for designing models that are robust to the set of distribution shifts at the intersection of adversarial and interventional shifts. We employ the distributionally-robust optimization framework to optimize the resulting objective in both supervised and reinforcement learning settings. We also carry out extensive experiments on real-world healthcare datasets to demonstrate the efficacy of the proposed framework.
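To make the distributionally-robust optimization idea concrete, here is a minimal illustrative sketch (not the talk's actual method or uncertainty set): a linear model is fit by minimizing the worst-case loss over two subpopulations whose input distributions differ, i.e., a simple group-level DRO objective solved by taking subgradient steps on whichever group currently incurs the larger loss. All data, group definitions, and hyperparameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "environments" with shifted input distributions (a covariate shift);
# the underlying input-output relation is shared. Synthetic, for illustration.
X1 = rng.normal(0.0, 1.0, size=200)
X2 = rng.normal(3.0, 1.0, size=200)
true_w = 2.0
y1 = true_w * X1 + rng.normal(0.0, 0.1, 200)
y2 = true_w * X2 + rng.normal(0.0, 0.1, 200)

def group_loss(w, X, y):
    return np.mean((X * w - y) ** 2)

def group_grad(w, X, y):
    return np.mean(2.0 * (X * w - y) * X)

# DRO objective: min_w max_group loss(w, group). A subgradient of the max
# is the gradient of the currently worst-off group.
w, lr = 0.0, 0.02
for _ in range(500):
    l1, l2 = group_loss(w, X1, y1), group_loss(w, X2, y2)
    g = group_grad(w, X1, y1) if l1 >= l2 else group_grad(w, X2, y2)
    w -= lr * g

worst_case_loss = max(group_loss(w, X1, y1), group_loss(w, X2, y2))
```

The talk's formulation restricts the uncertainty set to shifts at the intersection of adversarial and interventional shifts; this toy version instead uses a fixed pair of empirical group distributions, which is the simplest uncertainty set that still exhibits the min-max structure.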
Co-mentors
Hima Lakkaraju and Finale Doshi-Velez