Reverse AGT Workshop: Applied Econometrics


Friday, February 26, 2016, 2:30pm to 5:30pm


Maxwell Dworkin 119, 33 Oxford Street, Cambridge


About this series:
At the Reverse AGT Workshop, local economists present an area of economic study for an algorithmic game theory (AGT) audience. Each presentation includes a brief introduction to the area and several current research topics. The schedule includes ample time for discussion, both to make connections to related research in AGT and to highlight research questions that methods from AGT might help to answer. The workshop series is organized by Bobby Kleinberg, Ricardo Perez-Truglia, and Glen Weyl in conjunction with the Center for Research on Computation and Society at Harvard and Microsoft Research.


2:15: Coffee and cookies

2:30: Emily Oster (Brown): Causality: Approaches and Limitations, slides
3:00: Q/A and discussion

3:15: Max Kasy (Harvard): Data and Decisions, slides
3:45: Q/A and discussion

4:00: Sendhil Mullainathan (Harvard): Making Good Policies with Bad Causal Inference: The Role of Prediction and Machine Learning
4:30: Q/A and discussion

4:45: Coffee and cookies
5:15: Summary discussion and closing comments


Causality: Approaches and Limitations
Emily Oster

Abstract: Establishing causal relationships in data is among the central challenges of applied social science. I will describe the core issues using the Rubin causal framework, and then discuss a set of possible solutions: selection on observables, selection on unobservables, regression discontinuity, and others. I will highlight some common trade-offs between convincing causality and broad generalizability.

Data and Decisions
Max Kasy

Abstract: Many methodological debates reduce to math problems once the setting is precisely specified using the framework of statistical decision theory. Items to specify include the set of feasible actions, how actions are ultimately evaluated, and how uncertainty is dealt with. I will present two examples of this approach from my own research. The first example is a general argument against randomization, which applies in particular to experimental design. The second example is a characterization of the performance of machine learning procedures, where performance can be described in terms of a distance to an infeasible optimal estimator.

Making Good Policies with Bad Causal Inference: The Role of Prediction and Machine Learning
Sendhil Mullainathan

Abstract: In the last few decades, we have learned to be careful about causation and have developed powerful tools for making causal inferences from data. Applying these tools has generated both policy impact and conceptual insights. I will argue that there is a large class of problems where causal inference is largely unnecessary and where, instead, prediction is the central challenge. These problems are ideally suited to machine learning and high-dimensional data analysis tools. In this talk I will (1) try to delineate the difference between problems that require causation and problems that require prediction; (2) describe results from solving one such prediction problem in detail; (3) highlight the set of new statistical issues these problems raise; and (4) argue that solving these problems can also generate both policy impact and conceptual insights.