Several AI-related bills introduced by lawmakers this session have languished in the committees to which they were assigned.
The majority of these proposals were introduced by Reps. Robert Merski of Erie County and Chris Pielli of Chester County. The two Democrats co-wrote a resolution that would require the Joint State Government Commission to create a committee to study "the field of artificial intelligence and its impact and potential future impact in Pennsylvania."
"AI can be very beneficial and useful, but it can also be very damaging and destructive, depending on how it's used and what it is used for," Merski said. "And I think that when it comes to consumer protection, it's very important that we regulate artificial intelligence."
Merski and Pielli also introduced bills that would create a registry of companies developing AI software, require disclosure of AI-generated content and ban nonconsensual "deepfake" videos that use AI to manipulate a subject's face to make them appear to be someone else or to say something they never said.
Mehrdad Mahdavi, an assistant professor of computer science and engineering at Penn State University, said that without regulation, private companies can use AI to make fully automated decisions on their behalf. Those decisions can range from giving patients medical advice to letting the technology drive an autonomous vehicle.
He said the lack of restrictions poses a threat because AI is "easy to fool," and it can be tricky to pinpoint who is responsible when things go wrong.
"Who is in charge of these decisions?" he asked. "Is it the AI developer? Is it the user using the AI system? If something happens, who is in charge? This is where regulation should come into the game."
Mahdavi said lawmakers often don't fully understand AI, which can be the biggest hurdle for drafting legislation. Once they understand how it works, Mahdavi believes they are more likely to properly restrict its development and use.
"Just seeing AI as a black box, [means there's] no way we can overcome all the issues that regulation is going to tackle," Mahdavi said.
With past technological innovations, Mahdavi said there was time for the basic science to be understood before major societal or economic effects were felt. But with AI, "we can no longer afford such a luxury."
Colorado, Illinois and Vermont enacted laws in 2022 to create task forces or commissions to study AI, according to the National Conference of State Legislatures. AI-related bills were introduced in 17 other states.
Mahdavi said he stresses to lawmakers that they must find the "optimal equilibrium between regulation and innovation" to protect people while allowing advancement in the field.
LANCASTER LAWMAKERS ON AI
State Rep. Brett Miller, R-East Hempfield, is the only Lancaster lawmaker to sign onto any of the AI-related proposals mentioned above. He was not available for comment.
Still, other local lawmakers expressed concerns about the technology and said they could support restrictions to protect their constituents.
"AI technology is a global game changer, and we must do our part as legislators to enact sensible laws that sufficiently harness its power and prevent abuses that can harm people and our society," said Rep. Keith Greiner, a Republican from Upper Leacock Township.
Republican state Sen. Scott Martin, of Martic Township, shared a similar concern. He described the technology as a "double-edged sword" but said he isn't sure the state should step in to regulate AI when the federal government can do so.
On the other side of the aisle, Democratic Reps. Mike Sturla and Izzy Smith-Wade-El, both representing sections of Lancaster city, said AI can be used to serve people, but the line needs to be drawn when it comes to AI being used to replace workers.
"What advancements in technology are supposed to deliver is better outcomes and better lives for all people, particularly working people," Smith-wade-El said.
CASEY WANTS FEDERAL REGULATION
U.S. Senator Bob Casey introduced a bill last month to block employers from solely using automated systems or AI to make hiring or firing decisions. Automation is a broad term for different technologies, including AI, that perform a task without the help of a person. Under the bill, a human supervisor must approve every employment decision made by any automated program.
"If we let novel technologies like artificial intelligence make employment decisions without any guardrails, we are leaving workers at risk," Casey said, in a written response. "No one deserves to be managed or fired without the involvement of a human being."
Casey's bill comes as Congress struggles to address the rapidly expanding field of AI. Last week, a series of three all-Senate closed-door briefings about AI concluded.
In a speech on the Senate floor after the final briefing, Majority Leader Chuck Schumer, D-New York, did not mention any specific policy proposals that could come from the sessions. But he said AI legislation that encourages innovation in a safe way is "imperative for this country."
According to a study conducted by the Society for Human Resource Management, a Virginia-based national human resources association, about a quarter of the roughly 1,600 respondents used automation or AI to support their human resource decisions, including the hiring and firing of employees.
About 69% of respondents used automation or AI to communicate with applicants during the hiring process, and 64% used it to review resumes for job openings. Most respondents said they use these programs to increase efficiency, but only 18% said the tools helped them find better candidates.
In the U.S. House, Rep. Lloyd Smucker said he worries about how quickly AI is spreading and wants to make sure government has a handle on it.
"I look forward to reviewing specific bills regarding AI regulations and AI workforce development as they make their way through Congress," Smucker said.
One Casey aide said Schumer's prioritization of AI legislation should help the Senate fast-track Casey's employment bill.
"At a time when AI and surveillance technologies are becoming increasingly prevalent in the workplace, often with little to no input from workers, we have an obligation to protect workers from their potential misuse," Casey said.
©2023 LNP, Distributed by Tribune Content Agency, LLC.