In her letter, Breed said while she agreed with the overall intent of the legislation, "more work needs to be done to bring together industry, government, and community stakeholders before moving forward with a legislative solution that doesn't add unnecessary bureaucracy."
Breed's office could not be reached to clarify what bureaucracy the letter referred to.
"With additional time and collaboration, I am confident that we can find a solution that addresses many of the concerns raised in recent months, while still enabling this emerging field to grow in a safe and sustainable way," Breed wrote, saying she opposed the bill.
In a statement, Wiener said, "I fully support Mayor Breed and her great work to uplift San Francisco, but I respectfully disagree with her opposition to SB1047 — a bill that simply requires tech companies to evaluate their largest and most powerful AI models for risk of catastrophic harm."
"SB1047 is a pro-innovation, pro-safety bill, which is why it's received such broad support," he added. "I look forward to continuing to work with the Mayor on the vast array of issues on which we're deeply aligned, including housing, entertainment, and transportation. The great thing about democracy is that we can agree to disagree on an issue while working together on other issues. That's what we'll do here."
Musk on Monday took a different approach than some critics in the tech industry, writing on X, "This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB1047 AI safety bill."
Musk has repeatedly warned that runaway AI could pose a threat to humanity, and previously called for a six-month pause on development of the technology before starting his own AI company last year.
"For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public," Musk said.
The state bill, which is up for an Assembly floor vote, has divided Democrats at the local and national level, drawing unusually broad attention for a state measure.
Former House Speaker Nancy Pelosi, D-San Francisco, released a letter opposing the bill, and a group of Democratic members of Congress, including Rep. Ro Khanna, D-Santa Clara, also took the unusual step of opposing the state measure in a letter this month.
The bill would require the largest and most expensive AI models to undergo safety testing to ensure they are not capable of causing catastrophic damage or loss of life, and to include "kill switches" to shut down a program should it go awry. Opponents have said the provisions are too broad and would discourage developers from building and using AI systems.
The bill has drawn the ire of tech companies and venture capitalists large and small. Meta has opposed it, along with OpenAI and other makers of large AI programs. They argue it will quash innovation in a developing industry key to the local and broader economy.
Meta, which makes the open source Llama family of free AI models widely used by developers, has lobbied against the bill, holding events and sending a letter to Wiener outlining why the company would not support it. Fei-Fei Li, co-founder of Stanford University's Institute for Human-Centered AI, has also come out against the bill in a recent opinion piece and onstage at Stanford.
Not everyone in tech has come out against it. Leading San Francisco AI developer Anthropic has suggested amendments, many of which Wiener agreed to include, and said it could support a version of the bill.
Those changes include removing a provision that could see developers charged with perjury if they failed to fully disclose the safety testing they subject their AI programs to, and doing away with the creation of a new agency to oversee large AI models at the state level, instead folding it into an existing agency.
The bill has also received support from some quarters of the tech community, including AI researchers Geoffrey Hinton, formerly of Google, and Yoshua Bengio, as well as the Future of Life Institute, which warns against the potentially catastrophic effects of technology.
The Center for AI Safety Action Fund, Economic Security Action California and Encode Justice have all signed on as co-sponsors of the bill.
© 2024 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.