But 2024 promises to show that AI is much more than hype. And as that happens, government itself could help set the pace for AI adoption and innovation, and take on the responsibilities that come with it.
“The hype is real,” Eyal Darmon, North American public service generative AI lead for IT consulting firm Accenture, told Government Technology.
He said that the firm’s research has found that more than 50 percent of government activities will be “disrupted” by generative AI, the star of the moment for AI products and services in the gov tech world. Additionally, 40 percent of all working hours could “be impacted” by tools built on large language models, or LLMs, such as ChatGPT, Google Bard or AWS Bedrock.
But as AI develops in the public sector, the technology — and business — will take a different path than in other industries, according to Abhi Nemani, senior vice president of product strategy at Euna Solutions, the gov tech vendor once known as GTY Technology.
“Other industries can focus on a single use case, governments cannot,” Nemani said. “State and local governments run nearly every business function, from bookkeeping and service delivery to public safety and community development. AI in the public sector will be complex, not simple or straightforward — which requires a considered, patient approach.”
It’s not only state and local agencies that will shape how gov tech builds and sells artificial intelligence products. The federal government, as the country’s largest purchaser of technology, will drive “robust adoption” of AI and play a large if indirect role in how such tools are deployed, said Beth Noveck, director of the GovLab and the Burnes Center for Social Change at Northeastern University, and the chief innovation officer for New Jersey.
What that role means for the future of artificial intelligence in gov tech remains unclear.
“New and yet-to-be articulated federal requirements for transparency, security and testing may slow the rate of development as companies devote more resources to compliance,” she told Government Technology via email. “While a welcome step, it remains to be seen whether these requirements will make it harder for smaller players, new entrants and academic and civic technologists.”
As buzzy and exciting as AI seems going into 2024, serious concerns about the technology remain. Potential bias is a big worry, as are deepfakes and associated fears that AI’s power could be exploited by criminals and authoritarians.
“The public sector’s main concerns, rightfully so, in a nutshell with AI are: How do we use it without ending up in a completely totalitarian environment having lost all of our civil liberties?” said Julian Cardarelli, CEO of Thentia Cloud, another gov tech firm.
One solution to that?
In Cardarelli’s view, gov tech providers can come together and form working groups and standards. He said Thentia has created an AI committee whose members focus on validating the sources of AI training data and on related questions of ethics and transparency.
And that’s not the only group working on AI issues for the public sector. InnovateUS, housed at Northeastern University, where Noveck works, last year launched an AI tutorial program geared toward public agency professionals.
She said that while those workers share the general public’s fears about AI — including concerns about privacy and whether AI will take their jobs — public officials tend to have relatively little knowledge of the technology even as it reaches deeper into the mainstream.
“Most of these fears stem from a lack of experience with the tools,” Noveck said. “In a recent workshop with government professionals, almost no one in the room had even tried ChatGPT.”
This article was originally part of a feature looking at the gov tech market in the January/February 2024 issue of Government Technology magazine.