At the same time, a growing chorus of people working in and researching AI began to sound the alarm: The technology was evolving faster than anyone anticipated. There was fear that, in the rush to dominate the market, companies might release products before they were safe.
In the spring of 2023, more than 1,000 researchers and industry leaders called for a six-month pause in the development of the most advanced artificial intelligence systems, saying AI labs were racing to deploy “digital minds” that not even their creators could understand, predict or reliably control. The technology presents “profound risks to society and humanity,” they warned. Tech company leaders urged lawmakers to develop regulations to prevent harm.
It was in that environment that state Sen. Scott Wiener, D-San Francisco, began talking to industry experts about developing legislation that would become Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill is an important first step in responsible AI development.
While state lawmakers introduced dozens of bills targeting various AI concerns, including election misinformation and protecting artists’ work, Wiener took a different approach. His bill focuses on trying to prevent catastrophic damage if AI systems are abused.
SB 1047 would require that developers of the most powerful AI models put testing procedures and safeguards in place to prevent the technology from being used to shut down the power grid, enable the development of biological weapons, carry out major cyberattacks or cause other grave harms. If developers fail to take reasonable care to prevent catastrophic harm, the state attorney general could sue them. The bill would also protect whistleblowers within AI companies and create CalCompute, a public cloud computing cluster that would be available to help startups, researchers and academics develop AI models.
The bill is supported by major AI safety groups, including some of the so-called godfathers of AI, who contended in a letter to Gov. Gavin Newsom: “Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation.”
But that hasn’t stopped a tidal wave of opposition from tech companies, investors and researchers, who have argued the bill wrongly holds model developers liable for anticipating harm that users might cause. They say that liability would make developers less willing to share their models, which would stifle innovation in California.
Last week, eight members of Congress from California chimed in with a letter to Newsom urging him to veto SB 1047 if it’s passed by the Legislature. The bill, they argued, is premature, with a “misplaced emphasis on hypothetical risks,” and lawmakers should instead focus on regulating uses of AI that are causing harm today, such as deepfakes in election ads and revenge porn.
There are plenty of good bills that address immediate and specific misuse of AI. That doesn’t negate the need to anticipate and try to prevent future harms — especially when experts in the field are calling for action. SB 1047 raises familiar questions for the tech sector and lawmakers. When is the right time to regulate an emerging technology? What is the right balance to encourage innovation while protecting the public that has to live with its effects? And can the genie be put back in the bottle after the technology is rolled out?
There are risks to sitting on the sidelines for too long. Today, lawmakers are still playing catch-up on data privacy and attempting to curb harm on social media platforms. This isn’t the first time big tech leaders have publicly professed to welcome regulation of their products, only to lobby fiercely against specific proposals.
Ideally the federal government would lead on AI regulation to avoid a patchwork of state policies. But Congress has proved unable, or unwilling, to regulate big tech. For years, proposed legislation to protect data privacy and reduce online risks to children has stalled, and House Republicans have already said they will not support any new AI regulations. In the absence of federal action, California, home to Silicon Valley, has chosen to lead with first-of-its-kind regulations on net neutrality, data privacy and online safety for children. AI is no different.
By passing SB 1047, California can pressure the federal government to set standards and regulations that could supersede state rules. Until that happens, the law could serve as an important backstop.
© 2024 Los Angeles Times. Distributed by Tribune Content Agency, LLC.