Doing social media justice

Student project explores the use of digital juries for content moderation

For her master in design engineering (MDE) thesis project, Jenny Fan studied the use of digital juries to make social media content moderation decisions. (Photo provided by Jenny Fan)

Hate speech and misinformation on social media can spread like wildfire unless they are identified and removed quickly by content moderators. 

But whether tech companies are paying humans or training algorithms, content moderation typically occurs in a black box, leaving users in the dark about why some posts are removed while others are allowed to remain online.

A digital jury of users, empowered to make content moderation decisions, could bring some democratic legitimacy to this increasingly important process, said Jenny Fan, who earned a master’s in design engineering (MDE) from the Harvard John A. Paulson School of Engineering and Applied Sciences and Graduate School of Design in 2019.

“We are experiencing a huge ‘techlash’ right now. And the current methods of content moderation are not doing much to build public trust,” said Fan. “I wondered how we could conceive of new models of how the governance system can relate to society.”

Fan, who has had an interest in the spread of fake news since the last presidential election, explored how a digital jury could be used for content moderation as her MDE thesis project.

She conducted a wide-ranging literature review, drawing upon hundreds of years of research on constitutional juries, as well as the latest findings on swarms, collective intelligence, and online social computing, to develop a digital jury framework.

Then she recruited participants using the crowdsourcing site Amazon Mechanical Turk. Assigning individuals to 20 juries of six people each, Fan used test cases to see how a digital jury would tackle some very fraught and subjective issues.

The cases, written by Fan, were based on real examples of content moderation. For instance, in one case jurors saw an anti-Semitic version of Pepe the Frog standing in front of the World Trade Center towers. A social media user had shared the image in a post with the letters “LOL.”

“This is a fairly complicated and nuanced case, since all the user actually wrote was ‘LOL,’” she said. “And this is the kind of thing that paid moderators have a lot of trouble with, since there is so much cultural nuance to the meme.”

In the control condition of the interface, jurors were shown the image and the content moderation decision but had no say in how the decision was made. A second condition gave some background on why the case was flagged and then invited jurors to vote on punitive action against the user (warn, ban, or permanently ban) and on what to do about the content (hide it, remove it, or report it to the authorities). The third condition placed jurors in a chat room and gave them four minutes to discuss each case anonymously before voting individually on the punitive action.
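
The voting mechanics described above can be sketched in a few lines of code. The snippet below is purely illustrative: the option lists come from the article, but the plurality-rule tally, the function names, and the example votes are assumptions, since the article does not say how Fan aggregated jurors' votes.

    from collections import Counter

    # Hypothetical sketch of one digital jury's vote tally.
    # The action options mirror the article's second and third
    # conditions; plurality aggregation is an assumption.
    PUNITIVE_ACTIONS = ("warn", "ban", "permanently ban")
    CONTENT_ACTIONS = ("hide", "remove", "report to authorities")

    def tally(votes):
        """Return the plurality winner among a jury's votes."""
        winner, _count = Counter(votes).most_common(1)[0]
        return winner

    # Example: one six-person jury voting on punitive action.
    jury_votes = ["warn", "warn", "ban", "warn", "permanently ban", "ban"]
    print(tally(jury_votes))  # -> warn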

Jurors were asked about the fairness and effectiveness of the process in each condition and whether they trusted the results. Ultimately, participants judged digital juries to be more procedurally just than the status quo.

“And that makes sense,” Fan said. “In the current system where a social media company just uses AI and takes care of content moderation for you, it is faster, more effective, but there is less involvement. It may be faster, but it definitely doesn’t have as much democratic legitimacy as using a jury.”

Fan presents her project to MDE program classmates and faculty. (Photo provided by Jenny Fan)

Fan was surprised by how smoothly the digital juries operated, especially in an era of rampant mistrust online. The biggest challenge she faced was getting jurors to speak up in the chat room, but she found that posting an initial “seed question” was an effective way to kick off the conversations.

Another surprise for Fan: in two of the 20 juries, one individual changed their mind based on the chat room discussions.

“I was pleased that the jurors had real conversations. Some people had really long, intensive discussions and raised a lot of interesting points,” she said. “Many people had very different opinions for why they did and didn’t trust these social media platforms anymore.”

Fan and her co-author, Amy Zhang, an assistant professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering, will present their findings at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) in April. Now working full-time at the software company Palantir Technologies, Fan hopes to stay involved in this research and is collaborating with Zhang, who advised the project as a Ph.D. student at MIT, and with a team at Stanford University to move the work forward.

In the future, Fan would like to work directly with a social networking company to conduct a more robust study with real-world cases and a larger sample size.

“This is a problem that keeps me up at night. I am thinking about it and reading about it constantly,” she said. “I’m passionate about finding better ways to design our online social spaces, and I would like to keep contributing and make a difference in this space. I want to keep pushing this idea of how can we imagine new models for our relationships with these platforms.”

Topics: Design, Ethics, Technology

Press Contact

Adam Zewe | 617-496-5878 | azewe@seas.harvard.edu