The Dean’s Dialogue conversation on aligning AI with human values (Eliza Grinnell/SEAS)
Artificial intelligence (AI) is already changing countless elements of our daily lives. AI-based research and management tools have altered the way we work; generative AI has changed how we experience and consume media and art; even the data centers that enable cloud-based AI have affected the availability of power and water in surrounding communities.
AI isn’t going away. So it’s essential that as it continues to integrate into daily life, it does so in alignment with human needs and values.
That was the topic of the recent Dean’s Dialogue conversation at the Science and Engineering Complex of the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). Organized by the SEAS Office for Belonging, Engagement, and Community, the Dean’s Dialogue brings together faculty and industry leaders to explore timely topics in engineering and technology. This was the second annual Dean’s Dialogue, following a March 2025 conversation about the sustainability of electric vehicle manufacturing.
Flavio du Pin Calmon, Thomas D. Cabot Associate Professor of Electrical Engineering (Eliza Grinnell/SEAS)
“The thing about challenges that face society as a whole is that everyone has an equal stake in the problem, which means everyone should have equal access to shape the solution,” SEAS Dean David Parkes said in his introductory remarks. “This is what we mean when we talk about pluralism at SEAS. Solving these enormous, complex problems requires a type of engagement and dialogue across differences that is active, open, and humble. We have to be able to listen, question and examine a problem from all angles, and this is what allows us to tackle these societal challenges. The goal of the dialogue series is to present different viewpoints and help our community better understand the complexity of the problems that confront us.”
This year’s Dean’s Dialogue was led by Ariel Procaccia, Alfred and Rebecca Lin Professor of Computer Science. The panel featured Boaz Barak, Catalyst Professor of Computer Science; Flavio du Pin Calmon, Thomas D. Cabot Associate Professor of Electrical Engineering; Bailey Flanigan, Assistant Professor of Political Science and Computer Science at MIT; and Smitha Milli, research scientist at Meta Superintelligence Labs.
Panelists covered a range of topics related to the future of AI. We’ve excerpted parts of the conversation below.
WHAT IS AI ALIGNMENT?
CALMON: “We want these machines, when performing some task in place of a human, to satisfy more or less the same expectations we would have of a human performing the same task. An even more simplified view would be that we want AI to do what we want it to do, the way we want it to do it.”
FLANIGAN: “AI alignment is a tricky question because I don’t think it is a monolith. One category of examples is AI that is privately built for a public experience, such as chatbots, recommender systems, or self-driving cars. Another, really different class of AI alignment problems that people have been interested in involves public decision systems, possibly replacing street-level bureaucracy. This could include how we allocate housing, do kidney matching, or do risk assessment in courts. These situations have really different governance contexts, different stakes, and different expectations when it comes to who is really entitled to a say and what happens.”
SHOULD AI ALIGNMENT BE DEFINED BY EXPERTS, OR PUBLIC OPINION?
Boaz Barak, Catalyst Professor of Computer Science (Eliza Grinnell/SEAS)
MILLI: “I definitely don’t think we should align it to a specific philosophical theory that is not really robust to the real world, but actually getting public input is very difficult for a lot of topics, because the public has not had the time to think about them. There’s this tension between the large scale of data you need for aligning AI and the intense amount of effort needed to really get quality signals about the public’s true values.”
BARAK: “I don’t think that AI necessarily should always go with the general moral sentiment, but that sentiment definitely should guide it, and it shouldn’t adhere too much to a rigid ethical framework. AI should have a sense of ethics, which sometimes means making decisions that the normal person would find unreasonable, but this probably should be the exception rather than the rule. A good model is a courtroom, which has both a judge and a jury of one’s peers. We want judges to use the fact that they are human and have some moral intuition and common sense. But we also want them to follow the law, and sometimes following the law might lead to outcomes that people would find problematic, such as a killer being acquitted due to an illegal search.”
CAN HUMAN VALUES BE RELIABLY LEARNED FROM DATA AT SCALE RATHER THAN EXPLICITLY SPECIFIED?
BARAK: “There is a difference between being able to learn things and having a process that people trust. If you ask an AI model to estimate what people think about a particular ethical dilemma, and what the distribution of people preferring one type of solution over another would be, it knows a lot. The AI has ingested not just all of the interactions with these sorts of ethical questions and values, but also everything that has been written and said about them. So it could have a pretty good idea of people’s values and wants.”
MILLI: “We can’t completely get rid of explicit specification, because of the temporal nature of this, and because it keeps humans with agency. If somebody’s values change after some time, they should have the agency to explicitly specify that, and change and steer the model.”
SHOULD PERSONALIZATION BE CONSTRAINED BY GLOBAL NORMS EVEN WHEN IT CONFLICTS WITH LOCAL CULTURAL VALUES?
Bailey Flanigan, Assistant Professor of Political Science and Computer Science at MIT (Eliza Grinnell/SEAS)
CALMON: “I would think that there are certain basic norms that, from a personal perspective, you would expect an AI to satisfy, like avoiding harm, non-discrimination and so on. But while there are some global norms that might be enforced, the question we have to ask is who chooses these norms? Will they be selected by a group of engineers in comfortable office parks in the Bay Area, or by some institutional or participatory process? In principle you would have some AI version of the Universal Declaration of Human Rights, right? But at the same time, who’s going to decide that? I think if we can’t answer that question, it will be a little bit challenging to discuss enforcement of global norms.”
FLANIGAN: “We have solutions for coming up with global rules without being totally morally imperialistic: global treaties, global agreements, global governance bodies where people or entities can opt in and then decide on their own collective norms. This avoids some of the traps of one country deciding what cultural values we want this model to espouse. But on the other hand, personalization really calls into question which entity is the right one to opt into this process. Because in many cases, wars happen on the scale of countries, so countries opt in; but here, personalization can happen in the cultural microcosm, at the individual level.”
WHAT CAN PEOPLE OR GOVERNMENTS REALISTICALLY DO TO HOLD ACCOUNTABLE THE AI COMPANIES DEVELOPING THIS TECHNOLOGY?
Smitha Milli, research scientist at Meta Superintelligence Labs (Eliza Grinnell/SEAS)
MILLI: “Whereas the European Union has focused on making AI-specific regulation, the U.S. is taking a sectoral approach and relying on our existing laws. The nice thing about that is that even though the systems have changed, the harms we care about haven’t. It’s still illegal to discriminate against someone on a housing application, even if you’re aided by a large language model.”
CALMON: “There will be a need for a regulatory body, because that gives AI companies an entity against which they can have judicial recourse. We see this with many other technologies, such as the FCC regulating internet access and so on. There will always be new problems that arise, and we have to have a space for public input.”
FLANIGAN: “The public is really good at articulating their fears about what AI could do to society. Any process that involves people in this regulation needs to allow them to articulate these concerns, but also bring experts into the room to help them creatively think of regulatory solutions.”
Press Contact
Matt Goisman | mgoisman@g.harvard.edu