Join us from August 24 through December 7 at the AI for Social Impact Seminar Series. This seminar series will explore how artificial intelligence can contribute to solving social problems.
Artificial intelligence is poised to play an increasingly large role in societies across the world. Accordingly, there is a growing interest in ensuring that AI is used in a responsible and beneficial manner. A range of perspectives and contributions are needed, spanning the full spectrum from fundamental research to sustained deployments.
For example, what role can AI play in promoting health, access to opportunity, and sustainable development? How can AI initiatives be deployed in an ethical, inclusive, and accountable manner?
Talk recording: https://youtu.be/ZjbfxEPJW_Y.
Stevie Chancellor (Northwestern University)
Human-Centered Machine Learning for Dangerous Mental Health Behaviors Online
Abstract: Research and industry use machine learning to identify and intervene in physically dangerous behaviors discussed on social media, such as advocating for self-injury or violence. There is an urgent need to innovate in data-driven systems to handle the volume and risk of this content in social networks and its propagation to others in the community. However, traditional approaches to prediction have mixed success, in part because technical solutions oversimplify complex behavior and the unique interactions of dangerous communities with both individuals and platforms. The difficulties in computationally handling these circumstances threaten the application of these techniques to pressing social problems.
In this talk, I will describe my work in human-centered machine learning, an approach that refocuses technological innovation on the needs of humans, communities, and stakeholders. I study this through dangerous mental illness behaviors in online communities, such as opioid abuse, suicidal ideation, and the promotion of eating disorders. First, I will talk about my work in building novel, human-centered prediction systems that make robust and accurate assessments of mental illness signals across several conditions. Then, I will discuss recent research on a crucial part of machine learning pipelines: generating labels for training data. I have found alarming gaps in construct validity and rigor that jeopardize the state of the art, and I'll discuss our current work on how we're attempting to fix this. Together, these inform an agenda for human-centered machine learning that is scientifically rigorous and more considerate of social contexts in data, providing a pathway for more impactful and ethical problem solving in computer science.
Bio: Dr. Stevie Chancellor is the CS + X Postdoctoral Fellow in Computer Science at Northwestern University. Her research combines approaches from human-computer interaction and machine learning to build and critically evaluate human-centered systems, focusing on high-risk health behaviors in online communities. Her work has been featured in The Atlantic, Wired, and Gizmodo. Stevie recently received her doctorate in Human-Centered Computing from Georgia Tech and will start as an Assistant Professor in Computer Science and Engineering at the University of Minnesota in 2021.