AI in the Workplace: States Run Hot and Cold on Protections

While states like New York, Illinois and Maryland have forged new legislative roads to regulate AI use in hiring and review processes, more than 20 states have no proposed or enacted AI-related hiring bills.

As artificial intelligence (AI) becomes increasingly embedded in our daily lives, the regulatory landscape surrounding its use in hiring and workplace processes has garnered significant attention in legislative circles.

How exactly is artificial intelligence being used to hire employees in 2023? Job seekers are likely unaware of the extent to which AI is now integrated into hiring procedures, as highlighted by findings from the Pew Research Center.

Pew reported that 61 percent of Americans are unaware that AI is being used in hiring processes, while the remaining 39 percent say they have heard at least a little about the practice.

Yet, it’s a concept that has become relatively common. According to Forbes, studies show that 99 percent of Fortune 500 companies “rely on the aid of talent-sifting software, and 55 percent of human resource leaders in the U.S. use predictive algorithms to support hiring.”

The increased use of the technology has landed it on the radar of states throughout the U.S., and several legislative initiatives have been proposed and enacted in recent years. Those efforts aim to regulate the use of AI in hiring processes, as well as the handling of data derived from automated decision systems (ADS) — tools that can use artificial intelligence to measure performance and inform decisions such as employee promotions and demotions.

Currently, laws requiring employers to obtain consent before using artificial intelligence in certain stages of the hiring process are limited to Illinois, Maryland and New York City. However, a growing number of states are contemplating similar measures, and some have already proposed legislation that has not been enacted.

New York City established a pioneering standard with the passage of Local Law 144 in December 2021, a bill requiring employers to conduct regular audits of AI-enabled tools used in employment decisions. The law includes mandatory notice procedures and reporting obligations meant to address potential biases in hiring practices. Under the law, job applicants in NYC must receive explicit disclosure when automated tools are used to assess their applications, including the specific job qualifications and characteristics the tool considers. Employers must also inform candidates of their right to request an alternative selection process.
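
Bias audits of this kind generally compare how often an automated tool selects candidates from different demographic categories. As a rough, hypothetical illustration only (not the statutory requirements or any official audit methodology), the Python sketch below computes per-category selection rates and impact ratios; every category name and count in it is made up.

```python
# Illustrative sketch of a disparate-impact-style check an auditor might run
# on an automated hiring tool's outcomes. All data here is hypothetical.
from typing import Dict


def selection_rates(selected: Dict[str, int], assessed: Dict[str, int]) -> Dict[str, float]:
    """Selection rate per category = candidates selected / candidates assessed."""
    return {cat: selected[cat] / assessed[cat] for cat in assessed}


def impact_ratios(rates: Dict[str, float]) -> Dict[str, float]:
    """Impact ratio per category = that category's selection rate / the highest rate."""
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical applicant pools and selections, broken out by category.
    assessed = {"category_a": 400, "category_b": 350, "category_c": 250}
    selected = {"category_a": 120, "category_b": 70, "category_c": 60}

    rates = selection_rates(selected, assessed)
    ratios = impact_ratios(rates)
    for cat in assessed:
        print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratios[cat]:.2f}")
```

In this made-up example, a category whose impact ratio falls well below 1.0 would flag a potential disparity for a human reviewer to investigate.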

Illinois was also an early adopter of legal restrictions on the use of AI in hiring. The state passed the Artificial Intelligence Video Interview Act in 2019, and an amendment adopted in 2021 took effect in 2022. The amended act imposes specific requirements on employers leveraging AI-enabled assessments during the hiring process. These include obtaining consent from applicants, providing a clear explanation of the information the AI tool will evaluate, honoring requests to destroy video recordings and reporting a demographic breakdown of the applicants offered an interview.

In Maryland, meanwhile, HB 1202, enacted in 2020, prohibits employers from using facial recognition tools during an applicant’s pre-employment interview unless the applicant consents.

While progress has been made in these states, others, such as New Jersey, have encountered obstacles in passing comparable legislation.

At the end of 2022, New Jersey legislators proposed Bill A4909, which aimed to regulate the use of automated tools in hiring decisions. The proposed bill sought to impose restrictions on the sale of automated decision systems, including mandatory bias audits and a requirement that candidates be notified within 30 days if an ADS was used in their application process. The bill has not passed.

In February of this year, Massachusetts proposed a bill to regulate AI-enabled workplace surveillance. The Massachusetts Data Privacy Protection Act (MDPPA), or H 83, includes provisions to limit unwarranted electronic monitoring of employees while on the job. The bill was introduced in the House and Senate but has not progressed.

The state also proposed H 1873: An Act Preventing A Dystopian Work Environment. This bill outlines several requirements for employers that collect workplace data using AI or machine learning tools. Under the bill, employers must notify their employees and explain why data is being collected through ADS tools. However, the legislation has not passed the House or Senate.

On the data privacy front, there were bills proposed this year regarding AI usage and consumer data.

Maine’s Data Privacy and Protection Act, HP 1270, introduced in May of this year, aims to govern the use of algorithms. The bill states that entities using algorithms from machine learning, AI and natural language processing tools to collect, process or transfer data “may not collect, process or transfer covered data unless the collection, processing or transfer is limited to what is reasonably necessary and proportionate to maintain a specific product or service requested by the individual to whom the data pertains.” Covered entities must also complete an impact assessment of any algorithms they use and submit it to the attorney general’s office within 30 days. The bill has not passed the Legislature.

In March, Pennsylvania lawmakers proposed a bill that would make an AI registry mandatory for any business operating artificial intelligence systems in the state. HB 49 specifies that the registry should include the “name of the business operating artificial intelligence systems; IP address; type of code the business is utilizing for artificial intelligence and the intent of the software utilized.” However, the bill has not yet passed the House or Senate.

What do American job seekers have to say? Pew Research Center numbers show notable apprehension among Americans about AI tools being integrated into hiring practices. According to its findings, 71 percent of Americans oppose AI making final hiring decisions, while 7 percent favor such a scenario and 22 percent remain unsure.

However, the numbers also show that a sizable share of U.S. adults believe AI can combat human biases. In the Pew study, 47 percent said AI would do a better job than humans at treating all applicants in the same way, while 15 percent said it would do worse and 14 percent said it would do about the same.

Overall, AI use in hiring is a complex development that states are grappling with how to embrace and regulate. AI can enhance efficiency by screening large volumes of applications, identifying top candidates and improving the overall recruitment experience. However, the technology relies on multifaceted algorithms that must operate within an ethical framework backed by legal safeguards.
Ashley Silver is a staff writer for Government Technology. She holds an undergraduate degree in journalism from the University of Montevallo and a graduate degree in public relations from Kent State University. Silver is also a published author with a wide range of experience in editing, communications and public relations.