Game-Focused Learning in Cooperative and Non-Cooperative Games
In many real-world interactions between strategic agents, decision-theoretic reasoning is complicated by a lack of knowledge of the agents' preferences or utility functions. If we could solve the learning problem optimally, the resulting models could be plugged into well-developed tools from game theory. However, learning perfect models is rarely possible, so the learned model should reflect how it will be used in strategic reasoning. In this talk, I discuss two domains where these problems arise. The first is planning ranger patrols to thwart poachers: rangers observe historical poaching activity throughout the park and use this information to plan patrols that aim to gather snares and traps before they capture animals. We develop an end-to-end learning approach in which the estimated utility derived from patrolling explicitly informs the models that are learned. The second setting is electricity markets, where we show that introducing consumer-representing agents increases market efficiency. These agents require knowledge of the preferences of the consumers they represent, and we develop an elicitation scheme that accounts for the subconscious reasoning people use to make many electricity-use decisions.
Andrew Perrault is a postdoctoral fellow at the Center for Research on Computation and Society at the Harvard John A. Paulson School of Engineering and Applied Sciences. His work focuses on strategic interactions between agents whose preferences are learned from data or by querying. He received his Ph.D. from the University of Toronto in 2018 under the supervision of Craig Boutilier and his B.A. from Cornell University. His research interests include cooperative and non-cooperative game theory, machine learning, preference elicitation, and market design.