2019 Research

The Center for Research on Computation and Society was founded to develop a new generation of ideas and technologies designed to address some of society’s most vexing problems. The Center brings computer scientists together with economists, psychologists, legal scholars, ethicists, neuroscientists, and other academic colleagues across the University and throughout the world to address fundamental computational problems that cross disciplines, and to create new technologies, informed by societal constraints, that address those problems. Research initiatives launched throughout industry and academia study the intersection of technology and society in one of two ways: they investigate the effects of information technology on society, or they study ways to use existing technologies to solve societal problems. The Harvard Center for Research on Computation and Society is unique in its forward-looking scope and integrative approach: it supports research on innovative computer science and technology informed by societal effects, rather than merely examining the effects of existing technology on society.

Areas of specific research interest for the 2020-2021 academic year include conservation and public health. Research in this arena may involve, among many other possibilities, the use of AI to protect endangered wildlife, fisheries, and forests; the use of technology to detect and prevent disease; and public health challenges among those experiencing homelessness. Other topics of continuing interest to CRCS include privacy and security; social computing; economics and computation; and ethics and fairness in the application of technological innovation to societal problems.

Joe Futoma

TBD

Maia Jacobs

Maia Jacobs' research contributes to the fields of Computer Science, Human-Computer Interaction (HCI), and Health Informatics through the design and evaluation of novel computing approaches that provide individuals with timely, relevant, and actionable information. Through her research, she develops tools that offer personalized and adaptive support for chronic disease management and establishes the impact of these tools on health management behaviors. She also uses user-centered design and sociotechnical systems research to integrate machine learning models into clinical workflows while addressing issues of usability, safety, and ethics.

Max Kleiman-Weiner

Max Kleiman-Weiner's interests span natural, artificial, and moral intelligence. His goal is to understand how the human mind works with enough precision that it can be implemented on a computer. He also draws on insights from how people learn and think to engineer smarter and more human-like algorithms for artificial intelligence. He primarily researches social intelligence, where the scale, scope, and sophistication of humans are distinct in the natural world, and where even a kindergartener exceeds the social competence of our best AI systems.

Melanie Pradier

Melanie Pradier is a Postdoctoral Fellow advised by Finale Doshi-Velez at Harvard University, working on probabilistic models, inference, interpretable machine learning, and healthcare applications. She currently holds a Harvard Data Science Initiative fellowship, co-funded by the Center for Research on Computation and Society (CRCS). Pradier's research lies at the intersection of impactful healthcare applications and probabilistic machine learning. Before arriving at Harvard, she worked on Bayesian nonparametrics for data exploration in applications with societal impact, including sport sciences, economics, and cancer research. More recently, she has become interested in personalizing mental healthcare and in advances in Bayesian neural networks. To ensure her research has an impact in the world, she collaborates regularly with psychiatrists at Massachusetts General Hospital and with experts in Human-Computer Interaction. Among the technical questions she is trying to answer: How can we better quantify uncertainty (when should you trust your model)? How can we design expressive, interpretable priors in Bayesian models? How can we combine human knowledge with data-driven evidence?
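As a minimal, hypothetical illustration of the first of these questions, the sketch below quantifies predictive uncertainty with a conjugate Beta-Bernoulli model, where the posterior and its credible interval are available in closed form; the data and rates are invented, and this is not a model from Pradier's own work.

```python
# Minimal sketch (not from Pradier's work): quantifying uncertainty with a
# conjugate Beta-Bernoulli model, whose posterior has a closed form.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_rate = 0.3                                # hypothetical treatment-response rate
data = rng.binomial(1, true_rate, size=50)     # 50 simulated patient outcomes

# A Beta(1, 1) prior encodes no strong prior knowledge; the posterior is
# Beta(1 + successes, 1 + failures).
posterior = stats.beta(1 + data.sum(), 1 + len(data) - data.sum())

mean = posterior.mean()
lo, hi = posterior.interval(0.95)              # 95% credible interval
print(f"posterior mean {mean:.2f}, 95% credible interval ({lo:.2f}, {hi:.2f})")

# A wide interval signals that the point estimate should not be trusted on its
# own -- one simple answer to "when should you trust your model?"
```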

Pragya Sur

Pragya Sur is interested in developing robust, broadly applicable statistical procedures for high-dimensional data, with rigorous theoretical guarantees, under minimal assumptions. Her graduate research focused on likelihood-based inference in high-dimensional generalized linear models. In particular, it uncovered that for a class of generalized linear models, classical maximum-likelihood theory fails to hold in high-dimensional settings. Consequently, p-values and confidence intervals obtained from standard statistical software packages are often unreliable. In a series of papers, she has studied several aspects of this phenomenon and has developed a modern maximum-likelihood theory suitable for certain kinds of high-dimensional data. She is simultaneously involved in research on the different definitional aspects of algorithmic fairness, their connections, and their limitations; in particular, she is interested in developing a rigorous understanding of the connections between metric-based fairness and fairness notions originating from the perspective of causal inference.
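The failure of classical maximum-likelihood theory can be seen in a small simulation. The sketch below, with dimensions and signal strengths chosen purely for illustration rather than taken from Sur's papers, fits an approximately unpenalized logistic regression at a high-dimensional ratio p/n = 0.2 and shows the systematic inflation of the estimated coefficients.

```python
# Simulation sketch (parameters are illustrative assumptions, not from Sur's
# papers): with p/n = 0.2, the logistic-regression MLE inflates the true
# coefficients, so classical p-values and confidence intervals built on it
# are unreliable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p, k = 2000, 400, 50              # sample size, dimension, number of signals
gamma = np.sqrt(2.0 / k)             # keeps var(x' beta) fixed regardless of k
beta = np.zeros(p)
beta[:k] = gamma                     # a block of equal nonzero signals

X = rng.normal(size=(n, p))
prob = 1.0 / (1.0 + np.exp(-X @ beta))
y = rng.binomial(1, prob)

# C is huge, so the fit approximates the unpenalized maximum-likelihood estimate.
mle = LogisticRegression(C=1e8, max_iter=10000).fit(X, y)
inflation = mle.coef_[0][:k].mean() / gamma
print(f"average estimated / true coefficient on signals: {inflation:.2f}")
# Prints a value noticeably above 1.0: the bias classical theory misses.
```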

Javier Zazo

Javier Zazo is a postdoctoral fellow at Harvard University, working on computational machine learning and representation learning. His current research focuses on finding interpretable signal representations via signal processing, optimization, and machine learning methods. He collaborates with the LINKS-LAB and CRISP research groups. Two of the projects he is currently working on are metric learning using the Earth Mover's Distance (EMD) and learning hierarchical sparse coding models. The first learns a distance metric between probability distributions from labeled data and can be applied to characterize fragrance and flavor product spaces. The second learns a hierarchical sparse representation of signals at different scales and has easily interpretable properties. Such representations can be applied to analyze neurological responses and are expected to be used for texture synthesis, style transfer, and image segmentation.
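As a point of reference for the first project, the sketch below computes the plain Earth Mover's Distance between two one-dimensional empirical distributions using SciPy; the project itself learns the underlying ground metric from labeled data, which this sketch does not attempt.

```python
# Minimal sketch: the Earth Mover's Distance (1-D Wasserstein distance) that
# underlies the metric-learning project, computed here with SciPy on toy data.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(2)
a = rng.normal(loc=0.0, scale=1.0, size=500)   # samples from distribution A
b = rng.normal(loc=0.5, scale=1.0, size=500)   # samples from a shifted copy B

# EMD is the minimal "mass times distance" needed to morph A into B; for two
# unit-variance Gaussians it is close to the difference in their means.
print(f"EMD(A, B) = {wasserstein_distance(a, b):.3f}")   # approximately 0.5
```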

Shahin Jabbari

My research studies the interactions between machine learning and a variety of contexts, ranging from crowdsourcing and game theory to algorithmic fairness. Recently, I have focused on understanding the ethical aspects of algorithmic decision making in the public health domain.
Since joining CRCS last fall, I have concentrated on the ethical challenges in network-based problems such as societal interventions for suicide prevention and tuberculosis. Even more recently, motivated by the ongoing pandemic, I have been studying agent-based models to understand the dynamics of the spread of COVID-19, as well as applying game theory and reinforcement learning to derive policies for this context.
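As a rough illustration of what such an agent-based model looks like, here is a minimal sketch of epidemic spread on a contact network; the network and the infection and recovery rates are hypothetical, not taken from this research.

```python
# Stripped-down agent-based sketch of epidemic spread on a contact network
# (illustrative only; all parameters here are hypothetical).
import random
import networkx as nx

random.seed(3)
G = nx.watts_strogatz_graph(n=500, k=6, p=0.1)   # small-world contact network
state = {v: "S" for v in G}                      # S: susceptible, I: infected, R: recovered
for v in random.sample(list(G), 5):              # seed a few initial infections
    state[v] = "I"

BETA, GAMMA = 0.1, 0.05                          # per-contact infection / recovery rates
for day in range(100):
    new = dict(state)                            # synchronous update over one day
    for v in G:
        if state[v] == "I":
            if random.random() < GAMMA:
                new[v] = "R"
            for u in G[v]:                       # each contact may transmit
                if state[u] == "S" and random.random() < BETA:
                    new[u] = "I"
    state = new

counts = {s: sum(1 for v in G if state[v] == s) for s in "SIR"}
print(counts)   # final susceptible / infected / recovered counts
```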

Sarah Keren

The overall objective of my research is to develop methods for principled design that promotes effective multi-agent collaboration and enhances the way autonomous virtual agents and robots interact with each other and with humans. To accomplish this objective, my work in artificial intelligence (AI) focuses on multi-agent environment design, which involves taking into account the constraints, limitations, and capabilities of the different agents in an AI system and finding the best way to design their environment so that it complements the agents’ capabilities and compensates for their limitations.
My focus at CRCS is on environments with multiple self-interested agents that share a set of limited resources. The objective is to use AI tools such as automated planning, reinforcement learning, and game theory to understand why specific behaviors emerge in such settings and to find the best way to change the environment in order to promote effective collaboration among the agents. To evaluate our approach, we use multi-robot domains and sequential social dilemma settings, applying automated design to promote sustainable and socially aware behaviors of the agents in the system.
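A toy example of the environment-design idea: in the two-player resource game sketched below, with payoffs invented purely for illustration, over-harvesting dominates until the designer changes the environment by adding a penalty, after which sustainable harvesting becomes a Nash equilibrium.

```python
# Toy sketch of environment design in a two-player resource game (hypothetical
# payoffs, not from this research): a designer adds a penalty for over-harvesting
# and checks whether cooperation becomes a pure Nash equilibrium.
import numpy as np

def is_nash(payoff_a, payoff_b, i, j):
    """Check whether action profile (i, j) is a pure Nash equilibrium."""
    return (payoff_a[i, j] >= payoff_a[:, j].max() and
            payoff_b[i, j] >= payoff_b[i, :].max())

# Actions: 0 = harvest sustainably, 1 = over-harvest. Classic dilemma payoffs:
A = np.array([[3.0, 0.0],
              [4.0, 1.0]])   # row player's payoff
B = A.T                      # symmetric game: column player's payoff

print("sustainable/sustainable is Nash before design:", is_nash(A, B, 0, 0))  # False

# Design step: the modified environment fines over-harvesting by 2 units.
penalty = 2.0
A2 = A.copy()
A2[1, :] -= penalty
B2 = A2.T

print("sustainable/sustainable is Nash after design: ", is_nash(A2, B2, 0, 0))  # True
```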

Andrew Perrault

My primary interest is in linking predictive modeling with decision-making tasks. Critical data is often missing at the time decisions must be made. My work aims to trade off the cost of gathering data against decision quality and to maximize the use of data that has already been collected. At CRCS, I have worked on several problems that aim to have direct real-world impact, in conservation (defending protected areas from poaching) and in public health (monitoring treatment adherence and transmission dynamics in tuberculosis and COVID-19).
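A toy sketch of the underlying trade-off: the expected value of gathering more data can be compared directly against its cost. All numbers below are invented for illustration and are not taken from this work.

```python
# Toy value-of-information sketch (hypothetical numbers): gathering data costs
# something, and it is worth it only when the expected gain in decision quality
# exceeds that cost.
p_disease = 0.3                      # prior probability the patient is sick
utility = {("treat", True): 10, ("treat", False): -2,
           ("skip",  True): -20, ("skip",  False): 0}

def best_expected(p):
    """Expected utility of the best action under belief p(disease)."""
    return max(p * utility[(a, True)] + (1 - p) * utility[(a, False)]
               for a in ("treat", "skip"))

# A (hypothetically) perfect test reveals the true state before acting.
value_with_test = (p_disease * utility[("treat", True)]
                   + (1 - p_disease) * utility[("skip", False)])
evoi = value_with_test - best_expected(p_disease)   # expected value of information

test_cost = 1.5
print(f"expected value of information: {evoi:.2f}")
print("gather the data" if evoi > test_cost else "decide with current data")
```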

Nir Rosenfeld

TBD

Berk Ustun

Berk Ustun’s research combines machine learning, optimization, and human-centered design. He develops methods to promote the adoption and responsible use of machine learning in domains like medicine, consumer finance, and criminal justice. He addresses pressing issues such as fairness and interpretability by developing theory and algorithms in collaboration with experts in multiple disciplines. He has developed methods to learn simple machine learning models that are now adopted in major healthcare applications such as ICU seizure prediction, hospital readmission prediction, and adult ADHD screening. He has also developed methods to address pervasive issues in the fairness and accountability of machine learning systems in medicine and consumer finance. His work has been published at top venues in machine learning and other disciplines (e.g., JMLR, ICML, FAT), covered in the media (including Wired and NPR), and recognized with major awards (e.g., the INFORMS Innovative Applications in Analytics Award in 2019).
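As a rough illustration of the kind of simple model referred to above, the sketch below turns a sparse logistic regression into an integer point score by rounding; Ustun's published methods optimize small integer coefficients directly rather than rounding, so this is only a crude approximation of the idea.

```python
# Crude sketch of a point-based scoring system in the spirit of simple clinical
# models (not Ustun's actual method, which optimizes integer coefficients
# directly; rounding a sparse logistic fit is only an approximation).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

# A strong L1 penalty keeps only a handful of features in the model.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
points = np.round(clf.coef_[0] * 4).astype(int)   # scale, then round to integers

print("Score sheet (sum the points for each feature, in standardized units):")
for name, pts in zip(data.feature_names, points):
    if pts != 0:
        print(f"  {pts:+d}  {name}")
print("Higher total score -> higher predicted probability of the benign class.")
```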