News

Harvard Researchers Part of NSF AI Institute for Societal Decision Making

$20M Collaboration Brings Together AI Researchers, Social Scientists to Develop Tools for Societal Challenges

By Aaron Aupperlee, Senior Director of Media Relations, School of Computer Science at Carnegie Mellon University

A new multi-institution institute will work to improve the response to societal challenges such as disaster management and public health by creating human-centric AI tools to assist with critical decisions. The AI Institute for Societal Decision Making (AI-SDM) will also develop interdisciplinary training to bolster effective and rapid response in uncertain and dynamic situations.

AI-SDM will bring together experts from Carnegie Mellon University, Harvard, Boston Children’s Hospital, Howard University, Penn State University, Texas A&M University, the University of Washington, the MITRE Corporation, Navajo Technical University, and Winchester Thurston School. This diverse group of researchers and practitioners will work with public health departments, emergency management agencies, nonprofits, companies, hospitals and health clinics to enhance decision-making.

A five-year, $20 million commitment from the U.S. National Science Foundation (NSF) will support the institute, one of seven AI institutes the agency announced today.

By bringing together AI and social science researchers, AI-SDM will enable data-driven, robust, resource-efficient decisions and improve outcomes by accounting for human factors that are key to acceptance of these decisions in the field, such as biases, perception of risk, trust and equity. AI-SDM aims to leverage AI to better understand human decision-making; to improve the ability of AI to make decisions; and to apply those advances to create better, more trusted choices.

“The best applications of artificial intelligence in societal domains will come when we not only advance AI for decision-making, but also better understand human decision-making, and when we can bring the two together,” said Aarti Singh, a professor in CMU’s Machine Learning Department, who will serve as the institute’s director. “Social scientists are studying human behavior. Machine learning researchers are developing new AI tools to aid decision-making. We really need to bridge the gap and have social scientists and AI researchers collaborate to come up with solutions that will leverage AI capability while ensuring social acceptance.”

The initiative will undertake several foundational thrusts. Cognitive and behavioral scientists will develop computational models to accurately represent how and why humans make the decisions they do in times of crisis. Predicting human choices is key to developing better AI tools and ensuring their success in society. This work will be led by Cleotilde Gonzalez, a research professor in CMU’s Department of Social and Decision Sciences, and Christopher Dancy, an associate professor in the Penn State College of Engineering.

Social scientists and AI researchers will work together to understand human-AI complementarity and create models of group and hybrid human-AI decision-making. This work will also shed light on how social values such as equity, ethics and risk influence individual and group choices. Leading this work will be Ariel Procaccia, Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and Aaditya Ramdas, an assistant professor in CMU’s Department of Statistics and Data Science and its Machine Learning Department.

AI researchers in the institute will develop tools capable of making autonomous decisions that support people in both disaster and public health management. These tools will have to operate in dynamic, uncertain environments under intense pressure and constraints, juggling competing objectives with incomplete information and coordinating with many people over imperfect communication channels, a demanding task for current AI. This work will be led by Sham Kakade, Gordon McKay Professor of Computer Science at SEAS and Professor of Statistics, and Jeff Schneider, a research professor in CMU’s Robotics Institute.

The AI tools created by AI-SDM will not only assist decision-makers with the tasks at hand but will also help them reflect on past actions and evaluate decisions not taken. If an emergency manager had directed resources, or a public health official had targeted interventions, at one location instead of another, would the result have been different? Tools that can model or simulate these scenarios will help people make better decisions. Such counterfactual and causal reasoning is key to explainable AI that can be trusted. Kun Zhang, an associate professor in CMU’s Department of Philosophy, will lead this effort.

AI-SDM will deploy its work in the field alongside experts in public health and disaster management. One area of focus will be helping public health officials and emergency managers equitably allocate resources such as health workers, vaccines, tests, treatment options, emergency aid, shelter, food and rescue efforts during a disaster or health crisis. Maia Majumder, an assistant professor in the Computational Health Informatics Program at Harvard Medical School and Boston Children’s Hospital, and Robin Murphy, a professor in computer science and engineering at Texas A&M University, will lead these efforts.

The institute will also develop tools that will help make timely interventions in public health and disaster management. This could be messaging to stop the spread of infectious diseases or improve maternal health, or communication efforts during a disaster to direct people to safety and aid. This effort will be led by Gretchen Chapman, a professor in the Department of Social and Decision Sciences at CMU, and Terri Adams-Fuller, a professor in the Department of Sociology and Criminology at Howard University.

Supporting the tools developed by AI-SDM will be research into how to improve the acceptance of AI-assisted decision-making by both people tasked with making choices and the public. Support for AI-enabled decisions depends on many controllable and uncontrollable factors such as ethics, risk, equity and explainability. Doubt in any of these can hamper adoption. This work will be led by Paul Lehner, the chief engineer of the Information Technical Division at the MITRE Corporation.

The impact of AI-SDM will be realized through work with various government health departments, emergency management agencies, companies and nonprofit organizations located in the U.S. and abroad. Engagement will include surveys and virtual exercises with emergency managers and public health officials to learn how they make decisions, pilot deployments in the field, and technical and personnel exchanges.

This work will be paired with education and workforce development. AI-SDM’s goal of widespread adoption of AI-enabled decisions cannot be met without a workforce trained to develop and use human-centric AI tools and a public that understands AI’s complementary role and its shortcomings. These efforts will include a collaboration with Winchester Thurston School to create professional development workshops for high school educators, enrichment and leadership activities for underrepresented students, interdisciplinary degrees and courses, curricula co-designed with community colleges and educational partners such as Navajo Tech, workforce training and upskilling, and public engagement activities.

Other Harvard faculty participating in the institute are Milind Tambe, Gordon McKay Professor of Computer Science at SEAS; and Christopher Golden, Assistant Professor of Nutrition and Planetary Health at the Harvard T.H. Chan School of Public Health.

More information about AI-SDM is available on its website. Details about the other AI institutes are available on the NSF’s Science Blog.

Press Contact

Paul Karoff