Max Kleiman-Weiner: Reverse Engineering Human Morality

Date: 

Monday, October 21, 2019, 12:00pm to 1:00pm

Location: 

Maxwell Dworkin 119

Human morality enables successful and repeated collaboration. We collaborate with others to accomplish together what none of us can do on our own, share the benefits fairly, and trust others to do the same. Even young children play together guided by normative principles that are unparalleled in other animal species. I seek to understand this everyday morality in engineering terms. How do humans flexibly learn and use moral knowledge? How can we apply these principles to build moral and fair machines? I present an account of human moral learning and judgment based on inverse planning and Bayesian inference. Our computational framework explains quantitatively how people learn abstract moral theories from sparse examples, share resources fairly, and judge others' actions as right or wrong.
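To give a rough sense of what "inverse planning and Bayesian inference" means in this setting, the sketch below is a minimal, hypothetical illustration (not the speaker's actual model): an observer assumes an agent chooses actions noisily in proportion to a weighted sum of its own and another person's payoffs, and uses Bayes' rule to infer which candidate set of weights best explains an observed choice. All hypothesis names, payoffs, and parameters here are invented for illustration.

```python
import numpy as np

# Candidate "moral theories": weight on own payoff vs. the other person's payoff.
# These hypotheses and numbers are illustrative, not taken from the talk.
hypotheses = {"selfish": (1.0, 0.0), "fair": (0.5, 0.5), "altruistic": (0.0, 1.0)}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}

# Possible actions and their (own payoff, other's payoff).
actions = {"keep": (10.0, 0.0), "split": (5.0, 5.0), "give": (0.0, 10.0)}

def choice_probs(w_self, w_other, beta=1.0):
    """Softmax ('noisily rational') action probabilities under one hypothesis."""
    names = list(actions)
    utils = np.array([w_self * s + w_other * o for s, o in actions.values()])
    p = np.exp(beta * utils)
    p /= p.sum()
    return dict(zip(names, p))

def posterior(observed_action):
    """Bayes' rule: P(hypothesis | action) is proportional to
    P(action | hypothesis) * P(hypothesis)."""
    unnorm = {h: prior[h] * choice_probs(*w)[observed_action]
              for h, w in hypotheses.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

if __name__ == "__main__":
    # Observing an even split shifts belief toward the "fair" hypothesis.
    print(posterior("split"))
```

The key idea the sketch tries to convey is that moral inference can be framed as inverting a model of planning: given a model of how values generate behavior, observed behavior licenses probabilistic conclusions about the underlying values.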

Dr. Max Kleiman-Weiner is a post-doctoral fellow at the Data Science Institute and the Center for Research on Computation and Society (CRCS), within the computer science and psychology departments at Harvard. He did his PhD in Computational Cognitive Science at MIT, advised by Josh Tenenbaum, where he was an NSF and Hertz Foundation Fellow. His thesis won the 2019 Robert J. Glushko Prize for Outstanding Doctoral Dissertation in Cognitive Science. He also won best paper at RLDM 2017 for models of human cooperation and the William James Award at SPP for computational work on moral learning. Max serves as Chief Scientist of Diffeo, a startup building collaborative machine intelligence. Previously, he was a Fulbright Fellow in Beijing, earned an MSc in Statistics as a Marshall Scholar at Oxford, and did his undergraduate work at Stanford as a Goldwater Scholar.