Research

The Center for Research on Computation and Society was founded to develop a new generation of ideas and technologies designed to address some of society’s most vexing problems. The Center brings computer scientists together with economists, psychologists, legal scholars, ethicists, neuroscientists, and other academic colleagues across the University and throughout the world to address fundamental computational problems that cross disciplines, and to create new technologies, informed by societal constraints, that address those problems. Research initiatives launched throughout industry and academia typically study the intersection of technology and society in one of two ways: they investigate the effects of information technology on society, or they apply existing technologies to societal problems. CRCS is distinctive in its forward-looking scope and integrative approach: it supports research on innovative computer science and technology informed by societal effects, rather than merely examining the effects of existing technology on society.

Areas of specific research interest for the 2020–2021 academic year include conservation and public health. Research in this arena may involve, among many other possibilities, the use of AI to protect endangered wildlife, fisheries, and forests; the use of technology to detect and prevent disease; and public health challenges among people experiencing homelessness. Other topics of continuing interest to CRCS include privacy and security; social computing; economics and computation; and ethics and fairness in the application of technological innovation to societal problems.

Joe Futoma

TBD

Maia Jacobs

Maia Jacobs' research contributes to the fields of Computer Science, Human-Computer Interaction (HCI), and Health Informatics through the design and evaluation of novel computing approaches that provide individuals with timely, relevant, and actionable information. Through her research, she develops tools that offer personalized and adaptive support for chronic disease management and establish the impact of these tools on health management behaviors. She also uses user-centered design and sociotechnical systems research to integrate machine learning models into clinical workflows while addressing issues of usability, safety, and ethics. 

Max Kleiman-Weiner

Max Kleiman-Weiner's interests span natural, artificial, and moral intelligence. His goal is to understand how the human mind works with enough precision that it can be implemented on a computer. He also draws on insights from how people learn and think to engineer smarter and more human-like algorithms for artificial intelligence. He primarily studies social intelligence, where the scale, scope, and sophistication of humans are distinct in the natural world and where even a kindergartener exceeds the social competence of our best AI systems.

Melanie Pradier

Melanie Pradier is a Postdoctoral Fellow advised by Finale Doshi-Velez at Harvard University, working on probabilistic models, inference, interpretable machine learning, and healthcare applications. She currently holds a Harvard Data Science Initiative fellowship, co-funded by the Center for Research on Computation and Society (CRCS). Her research lies at the intersection of impactful healthcare applications and probabilistic machine learning. Before arriving at Harvard, she worked on Bayesian nonparametrics for data exploration in applications with societal impact, including sport sciences, economics, and cancer research. More recently, she has become interested in personalizing mental healthcare and in advances in Bayesian neural networks. She wants her research to have an impact in the world; to that end, she collaborates regularly with psychiatrists at Massachusetts General Hospital and with experts in human-computer interaction. Some of the technical questions she is trying to answer: How can we better quantify uncertainty (when should you trust your model)? How can we design expressive, interpretable priors in Bayesian models? How can we combine human knowledge with data-driven evidence?
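
The "when should you trust your model" question can be made concrete with a minimal sketch, not drawn from Pradier's own work: in a conjugate Bayesian linear regression, the posterior predictive variance grows away from the training data, flagging inputs where the model should not be trusted. All values below (prior precision, noise level, data ranges) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training inputs clustered in [-1, 1]; targets from a noisy line.
X = rng.uniform(-1.0, 1.0, size=40)
y = 2.0 * X + rng.normal(0.0, 0.3, size=40)

Phi = np.column_stack([np.ones_like(X), X])   # bias + slope features
alpha, sigma2 = 1.0, 0.3 ** 2                 # prior precision, noise variance

# Posterior over weights: N(mu, S) for the conjugate Gaussian model.
S_inv = alpha * np.eye(2) + Phi.T @ Phi / sigma2
S = np.linalg.inv(S_inv)
mu = S @ Phi.T @ y / sigma2

def predictive_std(x):
    """Standard deviation of the posterior predictive at input x."""
    phi = np.array([1.0, x])
    return float(np.sqrt(sigma2 + phi @ S @ phi))

near, far = predictive_std(0.0), predictive_std(10.0)
print(f"near the data: {near:.3f}, far from the data: {far:.3f}")
```

The predictive standard deviation near the data is close to the noise floor, while far from the data it is dominated by parameter uncertainty; a decision-maker could use such a signal to defer to a human when uncertainty is high.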

Pragya Sur

Pragya Sur is interested in developing robust, broadly applicable statistical procedures for high-dimensional data, with rigorous theoretical guarantees under minimal assumptions. Her graduate research focused on likelihood-based inference in high-dimensional generalized linear models. In particular, it uncovered that for a class of generalized linear models, classical maximum likelihood theory fails to hold in a high-dimensional setup; consequently, p-values and confidence intervals obtained from standard statistical software packages are often unreliable. In a series of papers, she has studied several aspects of this phenomenon and has developed a modern maximum-likelihood theory suitable for certain kinds of high-dimensional data. She is simultaneously involved in research on the different definitional aspects of algorithmic fairness, their connections, and their limitations; in particular, she is interested in developing a rigorous understanding of the connections between metric-based fairness and fairness notions originating from the perspective of causal inference.
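
The failure of classical maximum likelihood theory can be seen in a small simulation, which is a hedged sketch rather than a reproduction of Sur's results: under a global null with the dimension a sizable fraction of the sample size, the observed spread of logistic-regression MLE coordinates exceeds what classical Fisher-information theory predicts, which is what makes textbook standard errors and p-values unreliable. The dimensions and repetition counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 500, 150, 20          # kappa = p/n = 0.3, far from classical asymptotics

def logistic_mle(X, y, iters=50):
    """Newton's method for the unpenalized logistic-regression MLE."""
    beta = np.zeros(p)
    for _ in range(iters):
        eta = np.clip(X @ beta, -30.0, 30.0)
        mu = 1.0 / (1.0 + np.exp(-eta))
        W = mu * (1.0 - mu)
        grad = X.T @ (y - mu)
        H = X.T @ (X * W[:, None])
        step = np.linalg.solve(H, grad)
        beta += step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

coefs = []
for _ in range(reps):
    X = rng.standard_normal((n, p))
    y = (rng.random(n) < 0.5).astype(float)   # global null: true beta = 0
    coefs.append(logistic_mle(X, y))
coefs = np.concatenate(coefs)

# Classical theory at the null predicts Var(beta_j) ~ 4/n (Fisher info n/4).
classical_std = np.sqrt(4.0 / n)
empirical_std = coefs.std()
print(f"classical std {classical_std:.3f} vs empirical std {empirical_std:.3f}")
```

The empirical spread is noticeably larger than the classical prediction, so z-statistics computed from textbook standard errors reject far too often; the modern high-dimensional theory corrects this calibration.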

Javier Zazo

Javier Zazo is a postdoctoral fellow at Harvard University working on computational machine learning and representation learning. His current research focuses on finding interpretable signal representations via signal processing, optimization, and machine learning methods. He collaborates with the LINKS-LAB and CRISP research groups. Two of the projects he is currently working on are metric learning using the Earth Mover's Distance (EMD) and learning hierarchical sparse coding models. The first learns a distance metric between probability distributions from labeled data and can be applied to characterize fragrance and flavor product spaces. The second learns a hierarchical sparse representation of signals at different scales and has easily interpretable properties. Such representations can be applied to analyze neurological responses and are expected to be used for texture synthesis, style transfer, and image segmentation.
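
For intuition about the underlying distance, the EMD between two histograms on a shared 1-D grid has a simple closed form: the total mass that must cross each gap between bins, weighted by the gap width. This is a generic sketch with made-up inputs, not the learned metric used in the lab's fragrance and flavor work, which fits the ground metric from labeled data.

```python
import numpy as np

def emd_1d(p, q, positions):
    """Earth Mover's Distance between two histograms on a shared 1-D grid."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    cdf_gap = np.cumsum(p - q)[:-1]        # mass that must cross each gap
    widths = np.diff(positions)            # transport cost per unit of mass
    return float(np.sum(np.abs(cdf_gap) * widths))

# Moving all mass from position 0 to position 1 costs exactly 1.
print(emd_1d([1, 0], [0, 1], np.array([0.0, 1.0])))        # → 1.0
# Moving all mass across two unit gaps costs 2.
print(emd_1d([1, 0, 0], [0, 0, 1], np.array([0.0, 1.0, 2.0])))  # → 2.0
```

Unlike bin-wise distances, the EMD accounts for how far mass moves, which is why it is a natural fit for perceptual spaces where nearby bins are similar.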

Shahin Jabbari

Shahin Jabbari's research studies the interactions between machine learning and a variety of contexts, including crowdsourcing, game theory, and algorithmic fairness. Recently, he has focused on understanding the ethical aspects of algorithmic decision making in the public health domain.

Sarah Keren

Sarah Keren is developing a Utility Maximizing Design (UMD) framework that addresses the challenge of how automated agents and humans interact and collaborate with one another. In such settings, effectively recognizing what users are trying to achieve, providing relevant assistance (or, depending on the application, taking relevant preventive measures), and supporting effective collaboration are essential tasks. All of these tasks can be enhanced via efficient system redesign, and often even subtle changes can yield great benefits. However, since these systems are typically complex, hand-crafting good design solutions is hard. Keren’s work automates the design process by offering informed search strategies that automatically and efficiently find optimal design solutions for a variety of targeted objectives. Since arriving at Harvard, she has been working on three settings within this framework. The first is Goal Recognition Design (GRD), which seeks a redesign of an environment that minimizes the maximal progress an agent can make before its goal is revealed. The second is Equi-Reward Utility Maximizing Design (ER-UMD), which seeks to maximize the performance of a planning agent in a stochastic environment. The third, Reinforcement Learning Design (RLD), parallels ER-UMD but considers an environment with a reinforcement learning agent. Among the different ways to change a setting, she is now focused on information shaping: selectively revealing information to a partially informed agent in order to maximize its performance. As an example application, she is developing robotic demonstrations of how information shaping can enhance the performance of a robot navigating an unfamiliar environment.
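
A toy sketch can illustrate the quantity that GRD minimizes, often called worst-case distinctiveness: the longest prefix of actions an agent can execute while its goal remains ambiguous. The plans, goals, and the "blocked corridor" redesign below are hypothetical examples, not taken from Keren's papers.

```python
from itertools import product

def common_prefix(a, b):
    """Length of the longest shared prefix of two action sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def wcd(plans_by_goal):
    """Worst-case distinctiveness: longest shared prefix between
    optimal plans aimed at two *different* goals."""
    goals = list(plans_by_goal)
    best = 0
    for g1, g2 in product(goals, goals):
        if g1 == g2:
            continue
        for p1, p2 in product(plans_by_goal[g1], plans_by_goal[g2]):
            best = max(best, common_prefix(p1, p2))
    return best

# Two goals reachable through a shared corridor: the plans "NNE" and
# "NNW" share the prefix "NN", so the goal stays hidden for two steps.
plans = {"g1": ["NNE", "ENN"], "g2": ["NNW", "WNN"]}
print(wcd(plans))        # → 2

# Redesign: blocking the shared corridor removes the "NN…" plans, so
# the agent's very first action reveals its goal.
redesigned = {"g1": ["ENN"], "g2": ["WNN"]}
print(wcd(redesigned))   # → 0
```

A GRD solver searches over such modifications (e.g., which cells to block) for one that minimizes wcd while keeping every goal reachable at optimal cost; this example hard-codes the plan sets to keep the computation of the objective itself visible.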

Andrew Perrault

In many real-world interactions between strategic agents, decision-theoretic reasoning is complicated by a lack of knowledge of the preferences or utility functions of the agents involved. If the learning problem could be solved optimally, the resulting models could be plugged into well-developed tools from game theory. However, learning perfect models is rarely possible, so the learned model must reflect how it will be used in strategic reasoning. Andrew Perrault's research focuses on this problem and others that arise when applying multi-agent systems to problems of societal impact. His graduate research studied the development and coordination of consumer-representing agents to improve the efficiency of electricity delivery systems. His work at CRCS is directed towards AI for social impact, such as planning ranger patrols to interdict poachers in wildlife sanctuaries.

Nir Rosenfeld

TBD

Berk Ustun

Berk Ustun’s research combines machine learning, optimization, and human-centered design. He develops methods to promote the adoption and responsible use of machine learning in domains like medicine, consumer finance, and criminal justice, addressing pressing issues such as fairness and interpretability through theory and algorithms developed in collaboration with experts in multiple disciplines. He has developed methods to learn simple machine learning models that are now used in major healthcare applications such as ICU seizure prediction, hospital readmission prediction, and adult ADHD screening, as well as methods to address pervasive issues in the fairness and accountability of machine learning systems in medicine and consumer finance. His work has been published at top venues in machine learning and other disciplines (e.g., JMLR, ICML, FAT), covered in the media (including Wired and NPR), and recognized with major awards (e.g., the INFORMS IAAA Awards in 2019).