California Law Would Safeguard Public Against AI Risks

The proposed bill is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. It would regulate “development and deployment of advanced AI models,” in part by creating a new regulator, the Frontier Model Division.

(TNS) — The advent of artificial intelligence has inspired both hope for its problem-solving power and fear of its potential for devastating misuse, with some 200 bills across the country proposing guardrails for the powerful emerging technology.

Most consequential of them all may be a California bill that would affect dozens of companies in the state’s cradle of innovation and has drawn the ire of Silicon Valley giants including Google, Facebook parent Meta and startup accelerator Y Combinator.

“The goal here is to get ahead of the risks instead of waiting for the risks to play out, which is what we’ve always done,” said state Sen. Scott Wiener, a San Francisco Democrat and author of SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.”

Legislative analysts noted that while artificial intelligence, or AI, has potential for many benefits — from advances in medicine and climate science to improved wildfire forecasting and clean power development — it also poses significant risks.

There are widespread concerns that AI could be used to develop autonomous weapons that select and engage targets without human intervention, or that AI-enhanced drones could be deployed in terrorist attacks or to transport illegal narcotics.

AI-driven algorithms have also been used in phishing attacks and malware. The 2017 WannaCry ransomware attack, which exploited a Microsoft Windows vulnerability, caused billions of dollars in damage, and AI could make such attacks more potent. Automated trading algorithms could also enable market manipulation, as in the 2010 Flash Crash, when the Dow Jones Industrial Average lost nearly 1,000 points in minutes before recovering.

Wiener’s bill would regulate the “development and deployment of advanced AI models” by creating a new regulatory body, the Frontier Model Division. It also would establish a publicly funded computing cluster program called CalCompute that would focus on the development of large-scale artificial intelligence models, providing operational expertise and user support and fostering “equitable” AI innovation.

Opponents of the bill argue that it would stifle innovation and hinder open-source AI development. They said the bill “fundamentally misunderstands” how advanced AI systems are built and places “disproportionate obligations” on model developers that could harm California’s robust tech economy.

Rob Sherman, vice president and chief privacy officer for Meta, said in a letter to Wiener dated June 25 that AI systems “are built by multiple actors with differing functions.” Those include “the model developer who puts a model onto the market; AI deployers who build models into systems to provide a service and control use of the model; and end-users who leverage those systems.”

“The bill’s fundamental flaw is that it fails to take this full ecosystem into account and assign liability accordingly, placing disproportionate obligations on model developers for parts of the ecosystem over which they have no control,” Sherman wrote.

Wiener, who’s accepted numerous amendments in working with both technology companies and AI safety advocates on the bill, disagrees.

“It is a very basic requirement to perform the safety evaluations that these large labs have already committed to perform,” Wiener said. “If a significant risk of catastrophic harm is identified, the developers must take steps to mitigate this risk, making it harder and less likely for these dangers to materialize.”

Wiener added that his bill requires only developers of large-scale AI models, typically those whose training computing power costs more than $100 million, to implement safety and security protocols.

Additionally, the bill would penalize companies for failing to report AI safety incidents, including events that increase the risk of critical harm, models engaging in unauthorized behavior, and failures to prevent the use of AI with hazardous capabilities.

Violators could face civil penalties depending on the severity of the violation. Under the proposal, a court may order the deletion of an AI model and its data if the violation involves death, bodily harm, property damage, theft, or a risk to public safety. Penalties, calculated as a percentage of the model’s training cost, are higher for repeat offenses.

Under the proposal, only the California Attorney General would be able to file cases over violations.

The bill’s fiscal impact is expected to be determined when it’s taken up by the Appropriations Committee next month.

While the bill has drawn powerful opposition, it has backing from AI safety advocacy groups including Open Philanthropy, Encode Justice, the Center for AI Safety, Redwood Research, FAR AI, and the Future Society. The Center for AI Safety sponsored a poll in May by David Binder Research that found strong bipartisan support for such legislation among likely voters.

The bill passed the Senate in May and, after clearing the Assembly’s Privacy and Consumer Protection and Judiciary committees, now heads to the Appropriations Committee and a floor vote before it can reach the governor’s desk.

Sunny Gandhi of Encode Justice said it is important to codify public safety into AI law now, applying lessons learned from widespread disinformation on social media.

“It’s better to get ahead of the curve to stop an AI Chernobyl from happening,” Gandhi said, referring to the 1986 nuclear power plant meltdown in the former Soviet Union. “It is a nascent technology that moves extremely rapidly.”

©2024 MediaNews Group, Inc. Visit mercurynews.com. Distributed by Tribune Content Agency, LLC.