Members of Congress Pushed Back on California’s AI Bill

(TNS) — When California Gov. Gavin Newsom, a Democrat, vetoed a measure meant to prevent artificial intelligence from aiding bioweapons development or other catastrophic risks, he did so at the urging of an eclectic mix of critics that included members of Congress.

The opposition spanned large tech companies like Google, open-source entities like Mozilla, and eight Democrats in the state’s congressional delegation.

Newsom’s Sept. 29 veto highlights the difficulty in passing AI legislation in the face of competing sets of interests, whether protecting U.S. innovation or the public at large.

The bloc of Democratic House members from California, led by Rep. Zoe Lofgren, argued that the bill’s technical solutions were premised on standards that are still in development.

“The methodologies for understanding and mitigating safety and security risks related to AI technologies are still in their infancy,” the lawmakers wrote in an Aug. 15 letter. The state bill was “skewed toward addressing extreme misuse scenarios and hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, disinformation, non-consensual deepfakes,” they wrote.

The lawmakers also voiced concern that an overly broad AI measure would create “unnecessary risks for California’s economy with very little public safety benefit.” Newsom said in a veto message that 32 of the world’s top 50 AI companies are based in California.

After Newsom’s veto, Marc Andreessen, tech entrepreneur and co-founder of the venture capital firm Andreessen Horowitz, thanked the governor on X for “siding with California Dynamism, economic growth, and freedom to compute, over safetyism, doomerism, and decline.”

The measure, which had easily passed in both chambers of the state legislature, was backed by the Center for AI Safety, a nonprofit group. The group had previously gathered signatures from 350 researchers, executives and engineers working on AI systems to warn of an existential risk posed by generative AI models.

Among those who signed were Geoffrey Hinton, a top Google AI scientist who resigned to warn about risks of the technology; Sam Altman, CEO of OpenAI, developer of ChatGPT; and Dario Amodei, the CEO of Anthropic, which says its AI model is designed with safety features.

The California measure would have placed the liability for catastrophic harms on developers of large language models like OpenAI’s ChatGPT and others. Developers of AI models that cost at least $100 million to train would have been required to publicly disclose how they prevent large-scale harms and the conditions under which they would shut down their models.

A state agency called the Board of Frontier Models would have overseen the development of such systems.

Explaining his veto, Newsom wrote to the California Senate that the bill fell short of addressing all the risks.

The bill “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom wrote. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Responding to Newsom, Democratic state Sen. Scott Wiener, the bill’s sponsor, said in a statement that the veto “is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”

While large AI developers have pledged to make their models safe, “the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said. “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.”

Closing off ‘open source’

The push to regulate development and deployment of generative AI models is part of a larger global effort, with the European Union among the leaders after enacting laws that regulate the technology based on risk. In the U.S., Congress and the White House have spent nearly two years trying to decide on an approach to tackling specific problems such as election-related disinformation, bias and discrimination, as well as broader systemic problems.

In October 2023, President Joe Biden issued an executive order that set in motion efforts at several federal agencies to propose regulations that address bias and other harms caused by artificial intelligence systems.

But with Congress slow to legislate on a variety of tech-related issues, states including California are moving to fill the void.

The California measure was worded in such a way that “it would likely have had the effect of shutting out players building on open source, whether they be big players like Meta, or academic players, nonprofit players like ourselves,” Mark Surman, president of Mozilla, developer of the Firefox browser and a champion of open-source computing, said in an interview. “In the last 25 years of the internet, open source has been the fuel for small players getting in.”

Meta, the parent company of Facebook, developed a large language AI model called Llama, which, unlike ChatGPT, is open source, meaning the software is publicly available and developed collaboratively by programmers who can add to, modify and distribute it.

Surman said the legislation could have led to an AI developer who performed the proper safety tests being held liable if another party modified the model for a different use that caused harm.

“So that, basically, it makes anybody who creates anything responsible for the impacts on the whole chain of innovation,” Surman said. “It’s as if you’re saying you should never produce the raw materials that might be used to produce something dangerous.”

Mozilla’s opposition doesn’t mean the organization is opposed to regulating AI, Surman said.

“We’re not against there being AI regulation, just the opposite,” he said. “And I think California in particular has a chance to be a leader in what AI regulation could look like and how you balance safety and innovation.”

The members of Congress who wrote to Newsom emphasized the likely impact on open-source software as well. “Not only is it unreasonable to expect developers to completely control what end users do with their products, but it is difficult if not impossible to certify certain outcomes without undermining the rights of end users, including their privacy rights,” they wrote. The other signers were Reps. Anna G. Eshoo, Ro Khanna, Scott Peters, Tony Cárdenas, Ami Bera, Nanette Barragán and Lou Correa.

While Newsom vetoed this measure, he signed into law a slew of others, said Tony Samp, the head of AI policy at the law firm DLA Piper.

The bill’s veto “received all the attention, but the California legislature did pass and the Governor did sign into law 17 AI-related pieces of legislation addressing important areas of concern for AI like deepfakes, data transparency, and labeling,” Samp said in an email.

“There are several AI bills in Congress at the federal level that resemble the new California laws,” he said, “and we can expect a growing chorus of calls to enact legislation at the federal level to avoid a patchwork of uneven AI regulation across the United States.”

© 2024 CQ-Roll Call, Inc., All Rights Reserved. Visit cqrollcall.com. Distributed by Tribune Content Agency, LLC.