The regulation, Local Law 144, which goes into effect in the coming months, sets up a framework that requires HR departments to test their AI recruitment tools for bias and defines when companies must tell applicants they’re using the tools. But there’s a big problem, according to advocates: Local Law 144 has been watered down and riddled with loopholes, setting a dangerous precedent that will allow more of these tools to escape scrutiny.
Automated employment decision tools (AEDTs) are rapidly appearing at every stage of job recruitment: An algorithm decides whether to show you an advertisement for a job. An AI bot skims through your résumé for keywords. A computer game tries to sniff out your personality traits. At the job interview, your emotional responses are evaluated by audio and facial recognition software.
Existing labor law doesn’t directly address this influx of AI hiring tech, and that’s an issue. Vendors claim the machines are more objective than humans, but machine learning systems often end up replicating preexisting forms of bias. Amazon scrapped an experimental AI recruitment engine in 2018 after the engine taught itself to favor male candidates. Rights advocates say some new computerized assessments are inherently stacked against people with disabilities. And researchers say tools that focus on an applicant’s speech and body movements are little more than pseudoscience.
The peril of AI isn’t that it’s necessarily more biased than people—but that its decisions are likelier to escape accountability. With any form of discrimination, “these tools can amplify it and make it opaque and put it in a black box and operate it at scale, where it’s further obscured, and moves it from an individual to a vastly different systemic level,” says Daniel Schwarz, a policy researcher at the New York Civil Liberties Union.
Unfortunately, Local Law 144’s scope is too narrow to regulate most of the AI hiring tools that are out there.
The biggest criticisms of Local Law 144 center on how it defines an AEDT. The law, passed by the New York City Council in 2021, initially defined AEDTs as computational processes that “substantially assist or replace” human employment decisions. But in recent weeks, New York’s Department of Consumer and Worker Protection shrank the definition to explicitly cover only tools that outweigh other hiring criteria; in other words, tools that are given more say than human decision-makers in whether a candidate gets a job.
That’s a situation that almost never actually occurs—or at least it’s something no company would admit to, says Matthew Scherer, senior policy counsel with the nonprofit Center for Democracy and Technology. “It makes it too easy for there to be a class-action suit, if you have a process that says, ‘This machine is telling you what to do, and you do what that machine tells you every time.’ Any company would be stupid to set up their process in a way that would be covered by this law.”
Even if a tool were covered by Local Law 144, a company wouldn’t have to do much to comply. An earlier draft of the bill would have required companies to audit their AEDTs for all forms of discrimination under applicable law (though it didn’t explain how). But the final version passed by the city council was “obscenely watered down,” Scherer says: It asks vendors to conduct a simple statistical test for any discrimination based on race, gender, and national origin. In that sense, the law “doesn’t require vendors to do anything beyond what they were already required to do under federal law,” he says.
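For readers wondering what such a statistical test looks like in practice: the check described in the final rules amounts to comparing each demographic group’s selection rate against the most-selected group’s rate, similar in spirit to the disparate-impact analysis long used under federal guidelines. The sketch below is a minimal illustration, not the law’s actual text; the group labels and numbers are invented, and the 0.8 flag threshold is borrowed from the EEOC’s “four-fifths” guideline rather than anything Local Law 144 itself mandates.

```python
# Illustrative impact-ratio "bias audit" in the spirit of Local Law 144's
# statistical test. All data here is hypothetical; the 0.8 threshold comes
# from the EEOC's four-fifths guideline, not from the law's text.

# Hypothetical screening outcomes: applicants per group and how many
# the automated tool advanced ("selected").
outcomes = {
    "Group A": {"applicants": 400, "selected": 120},
    "Group B": {"applicants": 300, "selected": 60},
    "Group C": {"applicants": 200, "selected": 70},
}

# Selection rate = selected / applicants, computed per group.
rates = {g: d["selected"] / d["applicants"] for g, d in outcomes.items()}

# Impact ratio = each group's rate divided by the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "  <- potential adverse impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

Critics’ point is that a check of roughly this shape is something employers already had to be able to defend under federal disparate-impact law, which is why they see little new protection in it.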
That’s certainly convenient for companies. The New York Civil Liberties Union’s Schwarz says the version of Local Law 144 that’s about to go into effect is basically a gift to vendors and tech lobbyists. “We’ve seen this happening in California, we’ve seen this happening in New York, where vendors were specifically lobbying for weak frameworks to tackle automated employment decision tools, under which they can operate and sell their tools in an easy way without providing the guardrails that are actually needed,” Schwarz says. “It allows AEDTs to have more or less free rein in the city, kind of giving a rubber stamp to their deployment.”
The city’s Department of Consumer and Worker Protection hasn’t addressed the criticisms directly, but agency spokesperson Michael Lanza said in a statement, “This is a first-of-its-kind law which necessitates weighing all input from all stakeholders to ensure fairness and effectiveness.” He added that the agency “is carefully considering all public comments that we received, and we plan to finalize our rule and begin enforcement in the coming months.”
Advocates say the problem isn’t just that Local Law 144 is a missed opportunity. It’s that as a first-of-its-kind regulation, New York City’s AI recruitment rules will set a deeply flawed benchmark for what regulation should look like. “This is not an area where we can just rush through and where something is better than nothing,” Schwarz says. “Because then it’ll be seen as already tackled, and the attention moves elsewhere. And then it gets forgotten—and it will be hard to correct afterward.”
© 2023 Fast Company / Mansueto Ventures, LLC. Distributed by Tribune Content Agency, LLC.