That is why Ronen is planning to introduce legislation Tuesday at the Board of Supervisors requiring the city's Department of Technology to keep a public list of where and how AI technology is used across the city and county, and the reasons for it.
"This is basically just a transparency bill," Ronen told the Chronicle. "We're not prohibiting any uses" of AI.
The legislation would also require an impact assessment of any AI programs the city is using to determine their potential to displace workers, make biased decisions, create security risks and intrude on privacy, among other concerns.
Ronen, who is termed out after representing District 9 for eight years, said those assessments would include the input of labor and privacy experts. The legislation would also direct the city's Department of Technology to write standards for buying new AI programs.
Ronen acknowledged the city's decentralized IT systems make it difficult to track what technology the city is using, AI or not.
San Francisco's recently appointed CIO, Michael Makstman, told the Chronicle in a recent interview that having more than 50 city departments with their own IT systems and staff that do not report to him was among the biggest challenges he faces in his new role.
"There is no gavel to bang on the table and demand that everybody come to attention" if the legislation passes and he is charged with implementing it, Makstman told the Chronicle in an interview discussing the legislation. "The best tool is to help people understand why we're doing this, what is going to happen and how they will be involved," he said.
He said the bill is an outgrowth of the city's AI working group, which has been studying the technology and helped produce the mayor's guidelines on how city workers should use it.
Makstman said he plans to hire an emerging technologies director to help with the AI-related workload. As for taking input on the effects of AI programs, Makstman said he is still figuring out what those forums will look like, and whether they will be public hearings.
"It's really important ... for the technologist to hear the labor voice, for the ethicist to hear the privacy voice" about how the technology could impact different people and jobs, Makstman said.
Under the legislation, Makstman would have six months to publish an initial inventory of the AI programs the city is using. He would have a year from when the bill takes effect to write up the AI impact assessment for each program. Any new programs would have to be added to the list with an impact assessment.
Ronen said she wanted to avoid the negative consequences that came from failing to act quickly enough to legislate problems that emerged from social media and ride-sharing companies.
"We're always trying to play catch-up to this technology, and we now need to get ahead of it," she said.
The city is already using AI tools, as Ronen's legislation points out.
Those include the city's 311 mobile app, which uses AI to automatically suggest a type of service based on a user's written description or photo of a reported issue. Radiologists in the city's public health department are also using AI-based imaging tools to confirm stroke diagnoses and to support physicians diagnosing issues on CT scans.
Makstman's department also uses AI as part of its digital security tools to detect and prevent cybersecurity threats.
Ronen said she has heard positive feedback on the proposal from organized labor, including SEIU 1021, the union representing 16,000 city workers from janitorial staff to nurses, since the inventory will help unions know what AI-related clauses to include when they bargain contracts with the city.
Union President Theresa Rutherford said she is concerned about AI picking away at workers' job duties or being used to automatically eliminate certain types of people from hiring pools, for example.
She said the legislation would allow her union to ensure that AI is not used "to remove workers ... from the workplace, and important resources from the community."
She said an accounting of what AI technology the city is using is part of that. She also said weighing in substantively on the impact assessments would be critical. "We know (AI) has beneficial value, but we want to be partners and want to be part of the decision-making process," Rutherford said.
The board passed legislation earlier this year introduced by Supervisor Aaron Peskin banning the use of algorithms by landlords to set rent prices in the city.
San Francisco already has a law passed in 2019 that bans the use of facial recognition technology — which can use AI — by the police. A recent lawsuit alleged the San Francisco Police Department outsourced the use of the technology to other departments to get around the ban.