The agency encouraged California to build a computing center that would allow the state to create and power public AI tools, with the aim of providing better governmental services. The Little Hoover Commission is an independent agency that provides recommendations to the governor and Legislature on issues related to the economy and state operations.
The report expanded on Gov. Gavin Newsom’s September executive order, which directed agencies to create a plan for incorporating AI into government operations.
California already offers some employees AI training, but the commission recommended requiring a base level of education around the technology for all state workers. The report said the training could help public employees do their jobs and avoid inaccuracies or bias stemming from the technology.
“Within our lifetime, everyone will be using AI,” Commissioner David Beier said. “We don’t want to create a world divided by technology literate people and people who have not yet been given the opportunity to advance.”
Generative AI, technology that creates new content, such as ChatGPT, is currently dominated by the private sector, Beier said. While the technology AI companies have created is a valuable resource, he said, the private sector does not have the same accountability and transparency guardrails associated with government work.
The commission provided a series of recommendations outlining how the state can take advantage of AI’s benefits, while suggesting ways the state can protect against the technology’s suspected risks. The report also recommended California create a council to oversee the use of AI in government operations and make it easier for the state to procure the technology.
Beier said California should make a plan for informing residents how the state is using AI. “If you don’t build that trust … the public won’t understand it and may not be willing to accept it,” Beier said.
In addition to building publicly owned AI infrastructure, the commission recommended the state extend those resources to universities, nonprofits and startups. Beier likened the effort to the federal government’s National Institutes of Health, which both provides grants to academic institutions and conducts its own research.
While the report argued that professional development around AI would empower public employees, some skeptics argued that the state’s embrace of the technology ignores workers’ concerns.
Brian Justie, a senior research analyst at the University of California, Los Angeles Labor Center, said the report’s recommendation to require training for all state employees was “divorced from reality.” At its worst, the use of AI by the government could potentially lead to job losses, he said.
Additionally, Justie argued, there’s little evidence showing the technology will improve government efficiency.
“Why are we assuming that (AI) has to be here? Or that anyone at scale needs it or wants it, to the extent that the tech companies are saying so?” Justie asked.
The report did not suggest a regulatory scheme for the technology, nor did it take a stance on the legislative effort to regulate AI companies.
Earlier this year, the Legislature passed a bill authored by Sen. Scott Wiener, D-San Francisco, that would have required developers of large-scale AI models to create guardrails to prevent “critical harms,” such as power grid or banking failures. Newsom vetoed the bill, arguing it was too restrictive for the emerging technology.
© 2024 The Sacramento Bee. Distributed by Tribune Content Agency, LLC.