How Can Bias Be Removed from Artificial Intelligence-Powered Hiring Platforms?

Harvard-led institute to pursue fairness in online systems

Cynthia Dwork, Gordon McKay Professor of Computer Science

SUNNYVALE, CALIF. – A Harvard-led group of experts from academia and industry will gather here this week to launch a new research initiative aimed at promoting greater fairness in artificial intelligence-powered hiring platforms.

Today, 70% of companies use automated applicant tracking systems to find and hire talent, according to industry estimates. However, many of the algorithms recruiters use to manage their hiring process have been shown to reproduce, and sometimes amplify, the very biases and human errors they are supposed to eliminate.

The new effort, the Hire Aspirations Institute, brings together more than a dozen leaders in algorithmic fairness, privacy, AI, law, critical race theory, organizational behavior, economics, and social networking. The researchers come from Harvard, Cornell, Princeton, University of Chicago, Boston University, MIT, the Weizmann Institute, Northeastern University, Rutgers, Microsoft Research, and Apple. The institute will be led by Cynthia Dwork, Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).

“Hiring is a crucible in which forces of preference, privilege, prejudice, law, and now, algorithms and data, interact to shape an individual’s future,” said Dwork, who has made pioneering contributions to private data analysis, cryptography, distributed computing, and the theory of algorithmic fairness. “We will investigate pathways to minimize the transfer of persistent patterns of hiring bias and discrimination onto electronic platforms, from data and algorithms to corrective transformations and law.”

Dwork’s research has demonstrated that systems composed of elements that are “fair” in isolation are not necessarily fair overall. The Hire Aspirations Institute’s multi-disciplinary team will undertake a holistic research agenda to identify concrete solutions to real-world problems across several areas of focus:

  • Poorly worded job descriptions can lead to the exclusion of qualified job candidates, and applicants’ own language can contain cultural biases. The initiative will explore AI and other techniques hiring platforms could employ to enable even-handed comparisons between candidates.
  • Algorithms deployed by hiring platforms to score and rank candidates often exacerbate existing unfairness. The researchers will pursue algorithmic approaches that nullify that effect while improving the systems’ overall performance.
  • Prediction algorithms assign scores intended to reflect an individual job seeker’s likelihood of success in a role if hired. Drawing on the theory of pseudo-randomness, the team will interrogate approaches to ensuring that prediction models are as accurate as possible.
  • Success in a job depends on the workplace, and the nature of a workplace is, in turn, influenced by the people who work there. The group will explore evidence-based techniques for modeling a more equitable workplace environment.
  • In-person and digital networking and referrals play an outsized role in how people get jobs, but these informal structures reinforce inequality. The researchers will test ideas for how hiring platforms can level the networking playing field.
  • The institute will identify legal frameworks that, in tandem with the technical work, can advance the fairness of hiring platforms.
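The composition effect Dwork’s research highlights can be made concrete with a toy sketch (an illustration of the general principle, not code or data from the institute). Here, two screening filters each accept exactly half of every group, so each is “fair” in isolation, yet a pipeline requiring candidates to pass both accepts half of one group and none of the other:

```python
def accept_rate(accepted, group):
    """Fraction of a candidate group that a filter (a set of IDs) accepts."""
    return len(accepted & group) / len(group)

# Two hypothetical candidate groups of equal size.
group_a = {"a1", "a2", "a3", "a4"}
group_b = {"b1", "b2", "b3", "b4"}

# Filter 1 accepts half of each group.
filter1 = {"a1", "a2", "b1", "b2"}
# Filter 2 also accepts half of each group, but overlaps with filter 1
# differently in the two groups.
filter2 = {"a1", "a2", "b3", "b4"}

# Each filter alone treats the groups identically ...
assert accept_rate(filter1, group_a) == accept_rate(filter1, group_b) == 0.5
assert accept_rate(filter2, group_a) == accept_rate(filter2, group_b) == 0.5

# ... but a pipeline requiring both passes is starkly unequal.
pipeline = filter1 & filter2
print(accept_rate(pipeline, group_a))  # 0.5
print(accept_rate(pipeline, group_b))  # 0.0
```

Multi-stage hiring pipelines chain many such filters, which is one reason the institute treats fairness as a property of the whole system rather than of any single algorithm.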

To kick off the effort, the researchers will convene at the offices of LinkedIn. With 930 million members worldwide, LinkedIn is the world’s most widely used hiring platform and professional network. Its vision to “create economic opportunity for every member of the global workforce” and its ongoing efforts to grapple with the meaning and operationalization of fairness prepare the ground for a fruitful exchange of ideas and insights.

The Hire Aspirations Institute is supported in part by grants from the Sloan Foundation, Simons Foundation, and Harvard University. LinkedIn will supply data to assist with the research.

Topics: AI / Machine Learning, Computer Science, Ethics
