Members of U.S. Congress Urge Veto of California AI Bill

In a rare move, a group of Democratic members of Congress dipped their toes into state politics, urging Gov. Gavin Newsom to veto the marquee piece of artificial intelligence regulation in California.

California Gov. Gavin Newsom (Shutterstock)
(TNS) — In an admittedly unusual move, a group of Democratic members of Congress dipped their toes into state politics Thursday, urging Gov. Gavin Newsom to veto what has become the marquee piece of artificial intelligence regulation in California, and perhaps the country.

Reps. Ro Khanna, D-Santa Clara; Zoe Lofgren, D-San Jose; Anna Eshoo, D-Palo Alto; and Scott Peters, D-San Diego, released a letter Thursday that urged the governor to veto a bill by state Sen. Scott Wiener, D-San Francisco, that aims to hold makers of some large AI programs accountable for major harms they might cause.

The bill has not yet passed the Legislature but cleared a key Assembly committee Thursday. Lawmakers must pass the bill by Aug. 31, and Newsom has until Sept. 30 to sign or veto all bills. Newsom generally does not comment or take a position on pending legislation.

"It is somewhat unusual for us, as sitting Members of Congress, to provide views on state legislation," the somewhat unusual letter from the members of the U.S. House Committee on Science, Space, and Technology, wrote. "Based on our experience, we are concerned SB 1047 creates unnecessary risks for California's economy with very little public safety benefit," they said, urging the governor to exercise his veto.

Those arguments appear to echo concerns raised by big tech companies that make open-source AI models anyone can repurpose for free, such as Meta, and by venture capitalists such as Marc Andreessen, who say they are advocating for "little tech."

In a nutshell, the bill would allow the state attorney general to go after makers of very large AI models if those models cause catastrophic harm, such as loss of life. Many in the tech world, including Garry Tan, CEO of startup incubator Y Combinator, have argued that would cause big companies to stop making open-source models for fear of what someone else might do with them, and would hinder smaller startups from innovating and trying new things.

The bill would cover only models that cost at least $100 million to train and that would be substantially more powerful than any AI in existence today, according to Wiener's office.

In response to concerns coming from Tan's corner, Wiener amended the bill so that developers who spend less than $10 million fine-tuning an AI model would not be liable under its provisions, an attempt to exempt smaller developers from the state's legal fire should a program go off the rails and hurt people.

Wiener defended his bill at Y Combinator headquarters late last month with Tan and a host of techies in attendance, stressing his desire to work with the developer community and to avoid unfairly tamping down innovation. Tan told the Chronicle afterward his mind remained unchanged.

In a statement Thursday, Wiener said, "The Assembly will vote on a strong AI safety measure that has been revised in response to feedback from AI leaders in industry, academia, and the public sector," adding, "We can advance both innovation and safety; the two are not mutually exclusive."

Notably, Wiener agreed to include a number of amendments to his bill suggested by San Francisco AI model maker Anthropic. Those include scrapping an envisioned new government agency focused on big AI models and instead housing that oversight within the state's existing Government Operations Agency.

The changes also limit when the state attorney general can sue AI companies that play fast and loose with their safety practices. The amendments narrow, but do not eliminate, the state's ability to go after companies before their tools have caused damage: the attorney general's office can sue after a harm has occurred or when there is an imminent threat to public safety.

Wiener has said in the past the bill is needed to get ahead of harms instead of waiting for them to happen, as has been the case in some other attempts at tech regulation.

Wiener also agreed that makers of big AI models could no longer face perjury charges if they certified their model as reasonably safe and it were to then run amok. That section of the bill had become ammunition for Tan and others, who used it to say Wiener's legislation could be used to throw developers in jail.

The bill has also received support from some quarters of the tech community, including former Google AI researcher Geoffrey Hinton and noted AI researcher Yoshua Bengio. Fei-Fei Li, co-founder of Stanford University's Human-Centered AI Lab, meanwhile, has come out against the bill in a recent op-ed and onstage at Stanford.

Wiener said the amendments suggested by Anthropic "build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open source community, which is an important source of innovation."

Noting Washington's gridlock over AI regulation, Wiener said that "aside from banning TikTok, Congress has passed no major technology regulation since computers used floppy disks — California must act to get ahead of the foreseeable risks presented by rapidly advancing AI while also fostering innovation."

Lofgren previously wrote to Wiener to voice her concerns about the bill, and Khanna recently released a statement saying he was concerned about the bill's efficacy and its potential to punish developers and stifle innovation.

In their letter, the members of Congress said that methods for understanding and regulating AI were still in their infancy. They criticized the bill for focusing on the most extreme harms imaginable and not paying enough attention to more immediate issues like deepfakes and disinformation.

© 2024 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.