The meeting of the International Network of AI Safety Institutes, a 10-nation consortium working on international norms and safety procedures for emerging AI software, was meant to support what could become a trillion-dollar market while ensuring AI doesn't hurt people.
"One of the most important actions we can take to advance AI safety and innovation is close collaboration with our global partners," said the U.S. institute's director, Elizabeth Kelly, flanked by American flags and backdropped by a rain-shrouded Golden Gate Bridge. The event included representatives from Australia, Canada, the European Union, France, Japan, Kenya, Singapore, South Korea and the U.K.
Hanging over the collaborative approach and the hundreds of assembled participants from government, industry and academia was uncertainty prompted by the arrival of a second Trump administration. As experts talked cooperation, they weighed the risk that new U.S. leadership could see America go it alone and follow whatever rules it sees fit for AI.
President-elect Donald Trump pulled the U.S. out of international accords during his first term, including agreements on climate action and nuclear weapons, and he made it clear during the 2024 campaign that he intends to undo President Joe Biden's executive order on building safe and secure AI.
He has also said repeatedly that his priority is to stay ahead of China in AI development.
That perspective currently enjoys support on both sides of the aisle. A bipartisan congressional committee has called for funding advanced AI research to ensure that the U.S. continues to outstrip China's progress. And in prerecorded remarks delivered at the conference, Senate Majority Leader Chuck Schumer, D-N.Y., said the U.S. and its allies must work to "ensure the Chinese Communist Party does not write the rules of the road on this critical technology."
The unanswered question is whether the new administration might hamper international cooperation, including a planned AI summit in Paris in February. Indiana Republican Sen. Todd Young suggested in prerecorded remarks at the summit that — for now — the U.S. won't take an isolationist approach.
"Global collaboration is essential," he said. "The challenges and opportunities (AI) presents can't be tackled by one country alone."
Trump adviser Elon Musk, who owns his own AI company, could also persuade the president-elect to remain engaged with global stakeholders. Musk has repeatedly warned of the potential risk that unchecked AI poses to human life.
The keynote address at Wednesday's event was delivered by U.S. Secretary of Commerce Gina Raimondo, whose department oversees the U.S. AI institute.
"We are home to the greatest AI companies in the world," she said. "That means we have an obligation to lead in the work of AI safety."
Raimondo warned of the technology's risks but also touted its myriad benefits in medicine and education.
"AI in the hands of non-state actors applied to bioterrorism ... gets pretty scary pretty fast," she said. But, "we have a choice; we're the ones developing this technology. Let's not let our ambition blind us and allow us to sleepwalk into our own undoing."
The meeting included talks with AI luminaries such as Anthropic CEO Dario Amodei; Lucilla Sioli, director of the European Commission's AI office; and Mariano-Florentino Cuéllar, a former California Supreme Court justice who advises Gov. Gavin Newsom on AI, among others.
Amodei raised an alarm about the potential for autocratic governments to use AI to build massive misinformation campaigns, and the threat posed by increasingly autonomous AI programs that companies like his are building.
The U.S. institute recently announced it had worked with Amodei's company, as well as OpenAI, to test the companies' latest AI models before release for risks such as the software's willingness to help supercharge a cyberattack.
"We absolutely have to make testing mandatory," Amodei conceded. "But we have to be really careful about how we do it."
During her own remarks, Raimondo said that "if you can't certify that an AI system is safe, it shouldn't be released."
Raimondo also stressed that the U.S. AI Safety Institute is "not a regulator," but more akin to a government science lab. "We also don't want to be in the business of stifling innovation," she said.
Newsom vetoed a California bill earlier this year that would have required some testing of the largest AI programs.
To guard against potential harms such as AI-powered cyberattacks, the U.S. institute said ahead of the event that it plans to convene the departments of Defense and Energy, among others, to study how AI programs can be safety-tested in areas such as cybersecurity and military use.
The international consortium also issued a joint statement of purpose on Wednesday promising to "facilitate a common technical understanding of AI safety risks and mitigations," and unveiled testing results for large AI models such as Meta's Llama 3, covering general academic knowledge, certain types of hallucinations and linguistic capabilities.
The governments of the U.S., South Korea and Australia, along with a host of nonprofits, announced $11 million in funding aimed at curbing the use of AI to commit fraud, impersonate people and generate child sexual abuse material.
France's first-ever AI minister, Clara Chappaz, in San Francisco for the conference, furthered the theme of global cooperation in a briefing on Thursday. The scheduled February meeting in Paris, she said, will include heads of state and will "not only focus on safety, but see safety as a tool to build confidence in this technology so that people can adopt" AI more readily.
Asked about the possibility of the U.S. pulling out of the Paris summit, Chappaz said, "We truly believe that everyone needs to be around the table, and so everyone will be invited."
©2024 the San Francisco Chronicle, Distributed by Tribune Content Agency, LLC.