California Lawmakers Float Bills to Reduce AI Election Risks

With the primaries in the rearview mirror and the general election eight months away, lawmakers have introduced bills focusing on AI's potential to confuse and deceive voters, and otherwise disrupt democracy.

(Photo: The White House, Washington, D.C. | Shutterstock)
(TNS) — State lawmakers on Wednesday added two more bills to the growing pile of legislation aimed at reining in the potential worst effects of artificial intelligence.

With the primaries in the rearview mirror and the November general election just eight months away, lawmakers have introduced bills focusing on the technology's potential to confuse and deceive voters, and otherwise disrupt democracy.

The legislation was announced at a press conference Wednesday hosted by policy group California Common Cause. The group's California Initiative for Technology and Democracy launched last year to fight the potential for AI generated content to distort political messaging and confuse voters.

Assembly Member Marc Berman, D-Palo Alto, introduced AB2655, which would require large social media sites to label election-related deepfakes. The goal would be to limit the "spread of election-related deceptive deepfakes meant to prevent voters from voting or to deceive them based on fraudulent content," Berman said.

That effort follows a 2019 law Berman authored on deepfakes — videos that use technology to animate a person's face or body and make it appear they said or did things they did not. That law, AB730, prohibits the distribution of deceptive audio or visual material of a political candidate within 60 days of an election.

Meanwhile, state Sen. Steve Padilla, D-Chula Vista, introduced SB1228, which he said would require large online social media sites to verify the identity of users who have more than 25,000 followers or who have shared more than 1,000 pieces of AI-generated content.

Accounts with more than 100,000 followers or sharing more than 5,000 pieces of AI-generated content would require verification via a government-issued ID, Padilla said.

In response to a question about the legislation's potential to run afoul of free-speech rights, Padilla said the bill "isn't something that's going to stifle free expression." He added: "It is only going to inform the consumer so that they can make an informed decision."

Assembly Member Gail Pellerin, D-Santa Cruz, also attended the discussion Wednesday in Sacramento. Her bill, AB2839, introduced last month, would ban AI-generated and other digitally manipulated campaign media within 120 days of an election, as well as 60 days afterward.

Pellerin also chairs the assembly elections committee.

There are currently dozens of active bills in the Legislature seeking to regulate AI. Bills are frequently dropped by their authors, fail to make it through committee or are merged with other similar pieces of legislation.

The AI industry has exploded in San Francisco and other parts of the state, and no federal legislation has yet been passed on the topic. The call for regulation has taken on renewed urgency in California. As Berman put it Wednesday, the state must act "because Congress isn't."

President Joe Biden called for a ban on AI-generated voice impersonations during his State of the Union speech earlier this month. That came after a robocall using an apparently AI-generated clone of Biden's trademark folksy delivery reached voters ahead of the New Hampshire primary, discouraging them from voting.

Each legislator referenced the episode in their remarks Wednesday.

The Federal Communications Commission has also sought to crack down on phone calls using AI to simulate real human voices. A bill in the state Legislature authored by Assembly Member Jim Patterson, R-Fresno, would rein in some automated calls, including those using AI.

Here are some of the other major areas in which Sacramento is gearing up to regulate AI this session:

Safety

Sen. Scott Wiener, D-San Francisco, last month introduced perhaps the most sweeping bill, aimed at safety testing large AI programs before they are released to the public. That bill would create more state resources for testing AI programs and force companies to disclose their safety protocols to the state's technology department. It would also permit the state to sue under certain circumstances if the technology goes awry, though smaller startups would be exempt.

Another bill from Sen. Bill Dodd, D-Napa, would require the state government to assess the pros and cons of using AI in its own systems, and require departments to disclose whenever they use the technology in their communications.

Bay Area lawmaker Sen. Josh Becker, D-Menlo Park, is carrying a bill this session that originally intended to require companies to "watermark" their AI-generated content. That bill currently reads more broadly, aiming to create "a mechanism to allow consumers to easily determine whether images, audio, video, or text was created by generative artificial intelligence."

Assembly Member Evan Low, D-Cupertino, has also authored a bill that would require watermarking of deepfakes and other AI-generated content.

Assembly Member Phil Ting, D-San Francisco, is authoring a bill that would ban police from using a match from facial recognition technology as the sole cause for an arrest or warrant. The technology often uses a form of artificial intelligence to search for facial matches in a large database.

Health care and discrimination

Another bill from Becker would require a physician to be involved when artificial intelligence is used for certain decision-making in health care settings. That would include decisions to approve or deny coverage for care.

A bill that Assembly Member Rebecca Bauer-Kahan, D-Orinda, is bringing back from last session places limits on how and when automated decision making tools like artificial intelligence can be used. The aim of the bill is to prevent unfair discrimination by an algorithm in situations such as hiring or property rentals.

That bill would also require companies that use the tools to say they are doing so, and allow people to opt out when possible. Companies would also have to provide periodic reports to the state on the purpose and performance of the software.

A separate Bauer-Kahan bill would require so-called data digesters — companies that use personal information to train artificial intelligence — to register with the state.

© 2024 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.