The movement toward AI allows predictive models to weigh in on hiring decisions, but the proprietary nature of such algorithms shrouds them in mystery, making it exceedingly difficult to determine if and when bias and discrimination unintentionally skew the results, warned Alex Engler, Brookings Rubenstein Fellow, adjunct professor and affiliated scholar at Georgetown University and senior consulting data scientist at social policy research nonprofit MDRC.
According to Engler and Kimberly Houser – University of North Texas clinical assistant professor of business law and the law of emerging technologies – who joined him in the talk, the growing use of such tools can and should be a wake-up call to legislators to update fair hiring rules for a tech-infused era.
WHEN AI MET HR
AI technology is being used by both public- and private-sector players to assist with recruiting and evaluating prospective hires, such as by helping match jobseekers to relevant open positions or assessing candidates’ video interviews.
The data-crunching power of automated systems means they can process a vast number of resumes far more quickly than HR staff could. That kind of speed could be tempting to e-commerce firms ramping up hiring to meet pandemic-driven demand, or to restaurants staffing up as they anticipate vaccine rollouts bringing a resurgence in customer traffic. Such tools are already in active use: a 2020 global study from HR consulting services provider Mercer found that 41 percent of HR professionals use AI algorithms to analyze public data and identify promising job candidates, and 30 percent use algorithms during recruitment to screen and evaluate prospective new hires.
With 25 percent of U.S. adults saying in August 2020 that they or a household member had lost a job because of the public health crisis, governments may also be eager to use any available tools that could help residents get back on their feet and reduce demand for unemployment benefits. Last month, for example, Indiana debuted an AI-powered platform intended to help jobseekers discover openings and skill training opportunities.
HOW AI GETS BIASED
Not all AI solutions live up to those hopes, however. These systems can analyze vast quantities of data to detect patterns and learn which details tend to correlate with desired results, but their calculations depend heavily on the choices developers make about what data to feed them. That developer input can inject long-running inequities into the process.
In 2014, Amazon attempted to create software to evaluate candidates for technology roles, with the system analyzing a decade’s worth of resumes to inform its decisions. But most of those resumes had been submitted by men, leading the system to learn that male resumes were preferred and actively filter out applications that included the word “women.”
Even attempts to prevent predictive algorithms from assessing protected characteristics can fall short, because the systems learn proxies. Listing membership in women’s organizations or attendance at historically Black colleges and universities, for example, could signal gender and race even on applications that never state such demographic information explicitly.
Efforts to reduce AI bias include adjusting data sets to give AI systems a more balanced view. Amazon might have ensured that it presented its software with equal numbers of male and female resumes, for example by reducing the number of male resumes to match the quantity of female ones, Houser suggested. Still, it’s hard to know exactly how to correct a system without seeing the algorithms.
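For a sense of what that kind of rebalancing involves, here is a minimal sketch in Python that downsamples the over-represented group so every group is equally represented in the training data. The record structure and field names (“gender,” “text”) are hypothetical and chosen purely for illustration; the article does not describe Amazon’s actual pipeline.

```python
import random

def downsample_to_balance(resumes, group_key="gender"):
    """Return a training set with equal counts for every group."""
    groups = {}
    for resume in resumes:
        groups.setdefault(resume[group_key], []).append(resume)

    # The smallest group's size caps every other group.
    target = min(len(members) for members in groups.values())

    balanced = []
    for members in groups.values():
        balanced.extend(random.sample(members, target))
    random.shuffle(balanced)
    return balanced

# Hypothetical, heavily male-skewed pile of historical resumes.
history = [{"gender": "male", "text": f"resume {i}"} for i in range(900)]
history += [{"gender": "female", "text": f"resume {i}"} for i in range(100)]

balanced = downsample_to_balance(history)
print(len(balanced))  # 200: 100 male resumes kept, all 100 female kept
```

Downsampling is only one option – teams can also oversample the smaller group or reweight examples – and none of it helps if proxy signals for gender or race remain in the resume text itself.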
OPAQUE DECISION-MAKING
There is currently little oversight of how predictive algorithms arrive at their recommendations about whom to interview or hire, or of whether those methods are fair, said Engler. And AI systems are not necessarily blind to race or disability – those that evaluate interview videos tend to assess applicants’ vocal cadence and facial expressions in addition to the content of their answers, he said.
Mercer’s study found that only 67 percent of global respondents were confident they could prevent AI and automated systems from ingraining biases. And because an automated system can assess far more candidates than a human could manually, the ripple effects of any uncaught prejudices are that much more extensive.
“There’s so many different algorithms playing a part in this [hiring] process that if there are even small issues – much less large ones – they might proliferate out and build on one another,” Engler said.
Rejected job applicants can take legal action if employers explicitly discriminate based on gender, race or other protected characteristics, or if they use hiring practices that – intentionally or not – result in disparate impact and could be avoided without impeding the businesses’ core functions. But making such a case against a secretive algorithm is exceedingly difficult, Houser said.
“All the employer just has to do is say, ‘Well, it’s a legitimate business decision to use the algorithm,’ and that’s their defense. And then the burden shifts back, and you [the applicant] have to prove a better way to do it,” Houser said. “Well, if you don’t have a computer science background and have no access to their algorithm, how could you possibly show there’s a better, less discriminatory way to choose people?”
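To make the disparate-impact standard concrete, the sketch below shows the kind of selection-rate comparison such a claim – or a vendor audit – ultimately rests on. The data and the 0.8 threshold (the familiar “four-fifths” rule of thumb) are assumptions for illustration and are not drawn from the talk.

```python
def selection_rate(outcomes):
    """Fraction of applicants in a group who were advanced or hired."""
    return sum(outcomes) / len(outcomes)

def impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# 1 = advanced to interview, 0 = rejected by the screening algorithm.
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # selection rate 0.30
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.70

ratio = impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")         # ~0.43, well under 0.8
if ratio < 0.8:
    print("screening outcome shows a disparity worth investigating")
```

A real analysis is far more involved, but the core question is the same: does one group clear the algorithm’s screen at a markedly lower rate than another, and could a less discriminatory alternative do the job?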
AUDITING THE AI
Relying solely on human professionals may not be a perfect solution, as people are prone to prejudice and unconscious bias. Some organizations turn to AI not only to accelerate finding and vetting candidates but also in a quest for more impartial decision-making. The tools can be designed to obscure applicant characteristics seen as irrelevant to the actual skills demanded by the roles and avoid considering certain protected demographic details when evaluating potential candidates, for example – something that Indiana’s AI solution provider Eightfold AI says it does.
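As a rough illustration of what “obscuring” irrelevant or protected details can mean in practice, here is a minimal sketch that strips protected fields from an application record before it reaches a scoring model. The field names are hypothetical, and the article does not describe how Eightfold AI actually implements this.

```python
# Fields treated as protected or irrelevant to the skills a role demands
# (hypothetical list for illustration).
PROTECTED_FIELDS = {"name", "gender", "age", "photo_url", "date_of_birth"}

def redact_application(application: dict) -> dict:
    """Return a copy of the application with protected fields removed."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "female",
    "age": 41,
    "skills": ["python", "sql", "project management"],
    "years_experience": 12,
}

print(redact_application(candidate))
# {'skills': [...], 'years_experience': 12} – only job-relevant fields remain
```

A naive filter like this also shows the limits of the approach: it removes explicit fields but does nothing about proxies, such as organization names buried in the resume text, that can still leak the same information.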
But the proprietary nature of many hiring algorithms can mean that employers lack full knowledge of which factors are being considered and how the automated systems are weighing them to arrive at interviewing and hiring recommendations.
“It’s not going to be very obvious to most employers contracting with these vendors which ones are really taking [fairness] seriously,” Engler said.
Solution providers may lack the financial incentive to dig deep into investigating and fixing possible equity problems, however, given that testing and modifying systems to be more impartial takes time their data scientists and developers could otherwise spend on product development that more directly drives profit, he said. Another challenge is that auditors contracted by the solution providers can have conflicts of interest.
Effectively holding AI systems accountable depends on independent third parties accessing the data and models themselves, which companies are often reluctant to give up, Engler said.
“A third party has to come in and really get to the internals of how a company operates,” he said. “You cannot just talk to a developer.”
WHY WE HAVE GOVERNMENT
Public policy can step in to enforce equity when business incentives fall short, and some state and local governments have recently turned their attention to the issue.
January 2020 saw Illinois debut a law requiring companies to inform candidates if AI would be used to evaluate their video interviews and to tell them the “general types of characteristics” the automated system considered, and Washington state legislators filed a similar bill in 2020. New York City is also considering policy that would compel providers of hiring automation software to perform annual bias audits on their offerings.
There is also potential for the federal government to get more heavily involved in reviewing and responding to possible discrimination in AI-powered hiring. Bias is always a risk in hiring, whether the process is conducted by humans or by human-created algorithms, but careful monitoring and the right laws and enforcement efforts could help employers and public officials steadily narrow its influence.